    InnoDB is a general-purpose storage engine that
    balances high reliability and high performance. As of MySQL 5.5, it
    is the default MySQL storage engine. In MySQL 5.6,
    issuing the CREATE TABLE statement
    without an ENGINE= clause creates an
    InnoDB table.
    Key advantages of InnoDB tables include:
Its DML operations follow the ACID model, with transactions featuring commit, rollback, and crash-recovery capabilities to protect user data.
Row-level locking and Oracle-style consistent reads increase multi-user concurrency and performance.
        InnoDB tables arrange your data on disk to
        optimize queries based on
        primary keys.
      
        To maintain data
        integrity,
        InnoDB also supports
        FOREIGN
        KEY constraints. Inserts, updates, and deletes
        are all checked to ensure they do not result in inconsistencies
        across different tables.
      
        You can freely mix InnoDB tables with tables
        from other MySQL storage engines, even within the same
        statement. For example, you can use a
        join operation to combine data
        from InnoDB and
        MEMORY tables in a single query.
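        For illustration, the following sketch joins an
        InnoDB table with a MEMORY
        table in one query; the table and column names are hypothetical,
        invented for this example.

CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT) ENGINE=InnoDB;
CREATE TABLE session_cache (customer_id INT, last_seen DATETIME) ENGINE=MEMORY;

-- A single query can read from both storage engines.
SELECT o.id, s.last_seen
FROM orders AS o
JOIN session_cache AS s ON o.customer_id = s.customer_id;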
      
        InnoDB has been designed for CPU efficiency
        and maximum performance when processing large data volumes.
Table 14.1 InnoDB Storage Engine Features
| Feature | Support |
|---|---|
| Storage limits | 64TB |
| Transactions | Yes |
| Locking granularity | Row |
| MVCC | Yes |
| Geospatial data type support | Yes |
| Geospatial indexing support | Yes[a] |
| B-tree indexes | Yes |
| T-tree indexes | No |
| Hash indexes | No[b] |
| Full-text search indexes | Yes[c] |
| Clustered indexes | Yes |
| Data caches | Yes |
| Index caches | Yes |
| Compressed data | Yes[d] |
| Encrypted data[e] | Yes |
| Cluster database support | No |
| Replication support[f] | Yes |
| Foreign key support | Yes |
| Backup / point-in-time recovery[g] | Yes |
| Query cache support | Yes |
| Update statistics for data dictionary | Yes |

[a] InnoDB support for geospatial indexing is available in MySQL 5.7.5 and higher.
[b] InnoDB utilizes hash indexes internally for its Adaptive Hash Index feature.
[c] InnoDB support for FULLTEXT indexes is available in MySQL 5.6.4 and higher.
[d] Compressed InnoDB tables require the InnoDB Barracuda file format.
[e] Implemented in the server (via encryption functions), rather than in the storage engine.
[f] Implemented in the server, rather than in the storage engine.
[g] Implemented in the server, rather than in the storage engine.
    The InnoDB storage engine maintains its own
    buffer pool for caching data
    and indexes in main memory. By default, with the
    innodb_file_per_table setting
    enabled, each new InnoDB table and its associated
    indexes are stored in a separate file. When the
    innodb_file_per_table option is
    disabled, InnoDB stores all its tables and
    indexes in the single system
    tablespace, which may consist of several files (or raw disk
    partitions). InnoDB tables can handle large
    quantities of data, even on operating systems where file size is
    limited to 2GB.
  
    To compare the features of InnoDB with other
    storage engines provided with MySQL, see the Storage
    Engine Features table in
    Chapter 15, Alternative Storage Engines.
    For information about InnoDB enhancements and new
    features in MySQL 5.6, refer to:
        The InnoDB enhancements list in
        Section 1.4, “What Is New in MySQL 5.6”, which provides an overview of
        the features added in MySQL 5.6.
      
The Release Notes, which provide information about changes in each version.
        For InnoDB-related terms and definitions, see
        MySQL Glossary.
      
        A forum dedicated to the InnoDB storage
        engine is available here:
        MySQL
        Forums::InnoDB.
      
        InnoDB is published under the same GNU GPL
        License Version 2 (of June 1991) as MySQL. For more information
        on MySQL licensing, see
        http://www.mysql.com/company/legal/licensing/.
    MySQL has a well-earned reputation for being easy-to-use and
    delivering performance and scalability. Prior to MySQL 5.5,
    MyISAM was the default storage engine.
    In our experience, most users never changed the default settings. In
    MySQL 5.5 and higher, InnoDB is the
    default storage engine. Again, we expect most users will not change
    the default settings. But, because of InnoDB, the
    default settings deliver the benefits users expect from their RDBMS:
    ACID Transactions,
    Referential
    Integrity, and Crash
    Recovery. Let's explore how using InnoDB
    tables improves your life as a MySQL user, DBA, or developer.
In the first years of MySQL growth, early web-based applications didn't push the limits of concurrency and availability. In recent years, hard drive and memory capacity and the performance/price ratio have all gone through the roof. Users pushing the performance boundaries of MySQL care a lot about reliability and crash recovery. MySQL databases are big, busy, robust, distributed, and important.
    InnoDB addresses these top user priorities. The
    trend of storage engine usage has shifted in favor of the more
    scalable InnoDB. Thus MySQL 5.5 was the logical
    transition release to make InnoDB the default
    storage engine.
  
    MySQL continues to work on addressing use cases that formerly
    required MyISAM tables. In MySQL 5.6 and higher:
        InnoDB can perform full-text search using the
        FULLTEXT index type. See
        Section 14.2.6.3, “InnoDB FULLTEXT Indexes” for details.
      
        InnoDB now performs better with read-only or
        read-mostly workloads. Automatic optimizations apply to
        InnoDB queries in
        autocommit mode, and you
        can explicitly mark transactions as read-only with the syntax
        START TRANSACTION READ ONLY. See
        Section 8.5.3, “Optimizing InnoDB Read-Only Transactions” for details.
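        For example, a transaction can be marked read-only explicitly
        (the table name here is hypothetical):

START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM orders;
COMMIT;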
      
        Applications distributed on read-only media can now use
        InnoDB tables. See
        Section 14.3.2, “Configuring InnoDB for Read-Only Operation” for details.
    Starting from MySQL 5.5.5, the default storage engine for new tables
    is InnoDB. This change applies to newly created
    tables that don't specify a storage engine with a clause such as
    ENGINE=MyISAM. Given this change of default
    behavior, MySQL 5.5 might be a logical point to evaluate whether
    your tables that do use MyISAM could benefit from
    switching to InnoDB.
  
    The mysql and
    information_schema databases, which implement some
    of the MySQL internals, still use MyISAM. In
    particular, you cannot switch the grant tables to use
    InnoDB.
    If you use MyISAM tables but aren't tied to them
    for technical reasons, you'll find many things more convenient when
    you use InnoDB tables:
        If your server crashes because of a hardware or software issue,
        regardless of what was happening in the database at the time,
        you don't need to do anything special after restarting the
        database. InnoDB
        crash recovery
        automatically finalizes any changes that were committed before
        the time of the crash, and undoes any changes that were in
        process but not committed. Just restart and continue where you
        left off. This process is now much faster than in MySQL 5.1 and
        earlier.
      
        The InnoDB
        buffer pool caches table
        and index data as the data is accessed. Frequently used data is
        processed directly from memory. This cache applies to so many
        types of information, and speeds up processing so much, that
        dedicated database servers assign up to 80% of their physical
        memory to the InnoDB buffer pool.
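        As an illustrative sketch, a dedicated server might size the
        buffer pool in the [mysqld] section of the
        configuration file; the 8G figure is only an
        example, not a recommendation for your hardware:

[mysqld]
# Illustrative value; size the buffer pool for your own system.
innodb_buffer_pool_size=8G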
      
If you split up related data into different tables, you can set up foreign keys that enforce referential integrity. Update or delete data, and the related data in other tables is updated or deleted automatically. Try to insert data into a secondary table without corresponding data in the primary table, and the bad data gets kicked out automatically.
If data becomes corrupted on disk or in memory, a checksum mechanism alerts you to the bogus data before you use it.
        When you design your database with appropriate
        primary key columns for
        each table, operations involving those columns are automatically
        optimized. It is very fast to reference the primary key columns
        in WHERE clauses, ORDER BY
        clauses, GROUP BY clauses, and
        join operations.
      
        Inserts, updates, and deletes are optimized by an automatic
        mechanism called change
        buffering. InnoDB not only allows
        concurrent read and write access to the same table, it caches
        changed data to streamline disk I/O.
      
Performance benefits are not limited to giant tables with long-running queries. When the same rows are accessed over and over from a table, a feature called the Adaptive Hash Index takes over to make these lookups even faster, as if they came out of a hash table.
    If you have been using InnoDB for a long time,
    you already know about features like transactions and foreign keys.
    If not, read about them throughout this chapter. To make a long
    story short:
Specify a primary key for every table using the most frequently queried column or columns, or an auto-increment value if there is no obvious primary key.
Embrace the idea of joins, where data is pulled from multiple tables based on identical ID values from those tables. For fast join performance, define foreign keys on the join columns, and declare those columns with the same data type in each table. The foreign keys also propagate deletes or updates to all affected tables, and prevent insertion of data in a child table if the corresponding IDs are not present in the parent table.
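        The following sketch shows join columns declared with matching
        data types and a foreign key between them; the table and column
        names are hypothetical:

CREATE TABLE parent (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE child (
  id INT NOT NULL AUTO_INCREMENT,
  parent_id INT NOT NULL,          -- same data type as parent.id for fast joins
  PRIMARY KEY (id),
  FOREIGN KEY (parent_id) REFERENCES parent (id)
    ON UPDATE CASCADE ON DELETE CASCADE
) ENGINE=InnoDB;

SELECT c.id FROM parent AS p JOIN child AS c ON p.id = c.parent_id;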
Turn off autocommit. Committing hundreds of times a second puts a cap on performance (limited by the write speed of your storage device).
        Group sets of related DML
        operations into
        transactions, by
        bracketing them with START TRANSACTION and
        COMMIT statements. While you don't want to
        commit too often, you also don't want to issue huge batches of
        INSERT, UPDATE, or
        DELETE statements that run for hours without
        committing.
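        For example, a related insert and update can be bracketed as one
        transaction (the table names are hypothetical):

START TRANSACTION;
INSERT INTO orders (customer_id) VALUES (42);
UPDATE inventory SET qty = qty - 1 WHERE item_id = 7;
COMMIT;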
      
        Stop using LOCK TABLES statements.
        InnoDB can handle multiple sessions all
        reading and writing to the same table at once, without
        sacrificing reliability or high performance. To get exclusive
        write access to a set of rows, use the SELECT ... FOR
        UPDATE syntax to lock just the rows you intend to
        update.
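        A minimal sketch, with hypothetical names, of locking only the
        rows you intend to change rather than the whole table:

START TRANSACTION;
SELECT qty FROM inventory WHERE item_id = 7 FOR UPDATE;
UPDATE inventory SET qty = qty - 1 WHERE item_id = 7;
COMMIT;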
      
        Enable the innodb_file_per_table option to
        put the data and indexes for individual tables into separate
        files, instead of in a single giant
        system tablespace.
        This setting is required to use some of the other features, such
        as table compression and
        fast truncation.
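        For example, the option can be set in the
        [mysqld] section of the configuration file
        (it is also the default in MySQL 5.6.6 and higher):

[mysqld]
innodb_file_per_table=1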
      
        Evaluate whether your data and access patterns benefit from the
        new InnoDB table
        compression feature
        (ROW_FORMAT=COMPRESSED) on the
        CREATE TABLE statement. You can compress
        InnoDB tables without sacrificing read/write
        capability.
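        As a sketch (the table and column names are hypothetical),
        compression requires file-per-table tablespaces and the
        Barracuda file format:

SET GLOBAL innodb_file_format=Barracuda;

CREATE TABLE t_compressed (
  id INT PRIMARY KEY,
  payload VARCHAR(1000)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;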
      
        Run your server with the option
        --sql_mode=NO_ENGINE_SUBSTITUTION to prevent
        tables being created with a different storage engine if there is
        an issue with the one specified in the
        ENGINE= clause of CREATE
        TABLE.
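        For example, in the [mysqld] section of the
        configuration file:

[mysqld]
sql_mode=NO_ENGINE_SUBSTITUTION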
You can compress tables and associated indexes.
You can create and drop indexes with much less performance or availability impact than before.
        Truncating a table is very fast, and can free up disk space for
        the operating system to reuse, rather than freeing up space
        within the system tablespace that only InnoDB
        could reuse.
      
        The storage layout for table data is more efficient for BLOBs
        and long text fields, with the DYNAMIC row
        format.
      
        You can monitor the internal workings of the storage engine by
        querying INFORMATION_SCHEMA tables.
      
        You can monitor the performance details of the storage engine by
        querying performance_schema tables.
      
        For InnoDB-specific tuning techniques you can
        apply in your application code, see
        Section 8.5, “Optimizing for InnoDB Tables”.
    Even before completing your upgrade from MySQL 5.1 or earlier to
    MySQL 5.5 or higher, you can preview whether your database server or
    application works correctly with InnoDB as the
    default storage engine. To set up InnoDB as the
    default storage engine with an earlier MySQL release, either specify
    on the command line
    --default-storage-engine=InnoDB, or add to your
    my.cnf file
    default-storage-engine=innodb in the
    [mysqld] section, then restart the server.
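    For example, the my.cnf entries might look like
    this:

[mysqld]
default-storage-engine=innodb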
  
    Since changing the default storage engine only affects new tables as
    they are created, run all your application installation and setup
    steps to confirm that everything installs properly. Then exercise
    all the application features to make sure all the data loading,
    editing, and querying features work. If a table relies on some
    MyISAM-specific feature, you'll receive an error;
    add the ENGINE=MyISAM clause to the
    CREATE TABLE statement to avoid the error.
  
    If you did not make a deliberate decision about the storage engine,
    and you just want to preview how certain tables work when they're
    created under InnoDB, issue the command
    ALTER TABLE table_name ENGINE=InnoDB; for each
    table. Or, to run test queries and other statements without
    disturbing the original table, make a copy like so:
  
CREATE TABLE InnoDB_Table (...) ENGINE=InnoDB AS SELECT * FROM MyISAM_Table;
    Since there are so many performance enhancements in
    InnoDB in MySQL 5.5 and higher, to get a true
    idea of the performance with a full application under a realistic
    workload, install the latest MySQL server and run benchmarks.
  
Test the full application lifecycle, from installation, through heavy usage, and server restart. Kill the server process while the database is busy to simulate a power failure, and verify that the data is recovered successfully when you restart the server.
Test any replication configurations, especially if you use different MySQL versions and options on the master and the slaves.
    To know what the status of InnoDB is, whether
    you're doing what-if testing with an older MySQL or comprehensive
    testing with the latest MySQL:
        Issue the command SHOW ENGINES; to see all
        the different MySQL storage engines. Look for
        DEFAULT in the InnoDB
        line.
      
        If InnoDB is not present at all, you have a
        mysqld binary that was compiled without
        InnoDB support and you need to get a
        different one.
      
        If InnoDB is present but disabled, go back
        through your startup options and configuration file and get rid
        of any skip-innodb option.
      To determine whether your server supports
      InnoDB, use the SHOW
      ENGINES statement. (Now that InnoDB
      is the default MySQL storage engine, only very specialized
      environments might not support it.)
      Oracle recommends InnoDB as the preferred
      storage engine for typical database applications, from single-user
      wikis and blogs running on a local system, to high-end
      applications pushing the limits of performance. In MySQL
      5.6, InnoDB is the default storage
      engine for new tables.
    
      If you do not want to use InnoDB tables:
          Start the server with the
          --innodb=OFF
          or
          --skip-innodb
          option to disable the InnoDB storage
          engine.
            As of MySQL 5.6.21, the
            --skip-innodb
            option still works but it is deprecated and will return a
            warning when used. It will be removed in a future MySQL
            release. This also applies to its synonyms
            (--innodb=OFF,
            --disable-innodb, and so forth).
          Because the default storage engine is
          InnoDB, the server will not start unless
          you also use
          --default-storage-engine and
          --default-tmp-storage-engine to
          set the default to some other engine for both permanent and
          TEMPORARY tables.
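          A sketch of the relevant [mysqld] entries
          (MyISAM is chosen here only as an example
          alternative default):

[mysqld]
skip-innodb
default-storage-engine=MyISAM
default-tmp-storage-engine=MyISAM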
        
          To prevent the server from crashing when the
          InnoDB-related
          information_schema tables are
          queried, also disable the plugins associated with those
          tables. Specify in the [mysqld] section of
          the MySQL configuration file:
        
loose-innodb-trx=0
loose-innodb-locks=0
loose-innodb-lock-waits=0
loose-innodb-cmp=0
loose-innodb-cmp-per-index=0
loose-innodb-cmp-per-index-reset=0
loose-innodb-cmp-reset=0
loose-innodb-cmpmem=0
loose-innodb-cmpmem-reset=0
loose-innodb-buffer-page=0
loose-innodb-buffer-page-lru=0
loose-innodb-buffer-pool-stats=0
loose-innodb-metrics=0
loose-innodb-ft-default-stopword=0
loose-innodb-ft-inserted=0
loose-innodb-ft-deleted=0
loose-innodb-ft-being-deleted=0
loose-innodb-ft-config=0
loose-innodb-ft-index-cache=0
loose-innodb-ft-index-table=0
loose-innodb-sys-tables=0
loose-innodb-sys-tablestats=0
loose-innodb-sys-indexes=0
loose-innodb-sys-columns=0
loose-innodb-sys-fields=0
loose-innodb-sys-foreign=0
loose-innodb-sys-foreign-cols=0
    The information in this section provides background to help you get
    the most performance and functionality from using
    InnoDB tables. It is intended for:
Anyone switching to MySQL from another database system, to explain what things might seem familiar and which might be all-new.
        Anyone moving from MyISAM tables to
        InnoDB, now that InnoDB is
        the default MySQL storage engine.
      
        Anyone considering their application architecture or software
        stack, to understand the design considerations, performance
        characteristics, and scalability of InnoDB
        tables at a detailed level.
In this section, you will learn:
        How InnoDB closely adheres to
        ACID principles.
      
        How InnoDB implements
        transactions, and how
        the inner workings of transactions compare with other database
        systems you might be familiar with.
      
        How InnoDB implements
        row-level locking
        to allow queries and DML statements to read and write the same
        table simultaneously.
      
How multi-version concurrency control (MVCC) keeps transactions from viewing or modifying each other's data before the appropriate time.
        The physical layout of InnoDB-related objects
        on disk, such as tables,
        indexes,
        tablespaces,
        undo logs, and the
        redo log.
      The ACID model is a set of
      database design principles that emphasize aspects of reliability
      that are important for business data and mission-critical
      applications. MySQL includes components such as the
      InnoDB storage engine that adhere closely to
      the ACID model, so that data is not corrupted and results are not
      distorted by exceptional conditions such as software crashes and
      hardware malfunctions. When you rely on ACID-compliant features,
      you do not need to reinvent the wheel of consistency checking and
      crash recovery mechanisms. In cases where you have additional
      software safeguards, ultra-reliable hardware, or an application
      that can tolerate a small amount of data loss or inconsistency,
      you can adjust MySQL settings to trade some of the ACID
      reliability for greater performance or throughput.
    
      The following sections discuss how MySQL features, in particular
      the InnoDB storage engine, interact with the
      categories of the ACID model:
A: atomicity.
C: consistency.
I: isolation.
D: durability.
      The atomicity aspect of the ACID
      model mainly involves InnoDB
      transactions. Related
      MySQL features include:
      The consistency aspect of the
      ACID model mainly involves internal InnoDB
      processing to protect data from crashes. Related MySQL features
      include:
          InnoDB
          doublewrite
          buffer.
        
          InnoDB
          crash recovery.
      The isolation aspect of the ACID
      model mainly involves InnoDB
      transactions, in
      particular the isolation
      level that applies to each transaction. Related MySQL
      features include:
Autocommit setting.
          SET TRANSACTION ISOLATION LEVEL statement.
        
          The low-level details of InnoDB
          locking. During
          performance tuning, you see these details through
          INFORMATION_SCHEMA tables.
The durability aspect of the ACID model involves MySQL software features interacting with your particular hardware configuration. Because of the many possibilities depending on the capabilities of your CPU, network, and storage devices, this aspect is the most complicated to provide concrete guidelines for. (And those guidelines might take the form of “buy new hardware”.) Related MySQL features include:
          InnoDB
          doublewrite
          buffer, turned on and off by the
          innodb_doublewrite
          configuration option.
        
          Configuration option
          innodb_flush_log_at_trx_commit.
        
          Configuration option
          sync_binlog.
        
          Configuration option
          innodb_file_per_table.
        
Write buffer in a storage device, such as a disk drive, SSD, or RAID array.
Battery-backed cache in a storage device.
          The operating system used to run MySQL, in particular its
          support for the fsync() system call.
        
Uninterruptible power supply (UPS) protecting the electrical power to all computer servers and storage devices that run MySQL servers and store MySQL data.
Your backup strategy, such as frequency and types of backups, and backup retention periods.
For distributed or hosted data applications, the particular characteristics of the data centers where the hardware for the MySQL servers is located, and network connections between the data centers.
To implement a large-scale, busy, or highly reliable database application, to port substantial code from a different database system, or to tune MySQL performance, you must understand the notions of transactions and locking as they relate to the InnoDB storage engine.
      In the InnoDB transaction model, the goal is to
      combine the best properties of a multi-versioning database with
      traditional two-phase locking. InnoDB does
      locking on the row level and runs queries as nonlocking consistent
      reads by default, in the style of Oracle. The lock information in
      InnoDB is stored so space-efficiently that lock
      escalation is not needed: Typically, several users are permitted
      to lock every row in InnoDB tables, or any
      random subset of the rows, without causing
      InnoDB memory exhaustion.
    
      In InnoDB, all user activity occurs inside a
      transaction. If autocommit mode is enabled, each SQL statement
      forms a single transaction on its own. By default, MySQL starts
      the session for each new connection with autocommit enabled, so
      MySQL does a commit after each SQL statement if that statement did
      not return an error. If a statement returns an error, the commit
      or rollback behavior depends on the error. See
      Section 14.18.4, “InnoDB Error Handling”.
    
      A session that has autocommit enabled can perform a
      multiple-statement transaction by starting it with an explicit
      START
      TRANSACTION or
      BEGIN statement
      and ending it with a COMMIT or
      ROLLBACK
      statement. See Section 13.3.1, “START TRANSACTION, COMMIT, and ROLLBACK Syntax”.
    
      If autocommit mode is disabled within a session with SET
      autocommit = 0, the session always has a transaction
      open. A COMMIT or
      ROLLBACK
      statement ends the current transaction and a new one starts.
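      For example (the table name is hypothetical):

SET autocommit = 0;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- ends this transaction; the next statement starts a new one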
    
      A COMMIT means that the changes
      made in the current transaction are made permanent and become
      visible to other sessions. A
      ROLLBACK
      statement, on the other hand, cancels all modifications made by
      the current transaction. Both
      COMMIT and
      ROLLBACK release
      all InnoDB locks that were set during the
      current transaction.
    
      In terms of the SQL:1992 transaction
      isolation levels, the
      default InnoDB level is
      REPEATABLE READ.
      InnoDB offers all four transaction isolation
      levels described by the SQL standard:
      READ UNCOMMITTED,
      READ COMMITTED,
      REPEATABLE READ, and
      SERIALIZABLE.
    
      A user can change the isolation level for a single session or for
      all subsequent connections with the SET
      TRANSACTION statement. To set the server's default
      isolation level for all connections, use the
      --transaction-isolation option on
      the command line or in an option file. For detailed information
      about isolation levels and level-setting syntax, see
      Section 13.3.6, “SET TRANSACTION Syntax”.
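      For example:

-- Applies to the current session only.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Applies to all subsequent connections.
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;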
    
      In row-level
      locking, InnoDB normally uses next-key
      locking. That means that besides index records,
      InnoDB can also lock the
      gap preceding an index record to
      block insertions by other sessions where the indexed values would
      be inserted in that gap within the tree data structure. A next-key
      lock refers to a lock that locks an index record and the gap
      before it. A gap lock refers to a lock that locks only the gap
      before some index record.
    
For more information about row-level locking, and the circumstances under which gap locking is disabled, see Section 14.2.2.4, “InnoDB Record, Gap, and Next-Key Locks”.
        InnoDB implements standard row-level locking
        where there are two types of locks,
        shared
        (S) locks and
        exclusive
        (X) locks. For information
        about record, gap, and next-key lock types, see
        Section 14.2.2.4, “InnoDB Record, Gap, and Next-Key Locks”.
            A shared
            (S) lock permits the
            transaction that holds the lock to read a row.
          
            An exclusive
            (X) lock permits the
            transaction that holds the lock to update or delete a row.
        If transaction T1 holds a shared
        (S) lock on row r,
        then requests from some distinct transaction
        T2 for a lock on row r are
        handled as follows:
            A request by T2 for an
            S lock can be granted
            immediately. As a result, both T1 and
            T2 hold an S
            lock on r.
          
            A request by T2 for an
            X lock cannot be granted
            immediately.
        If a transaction T1 holds an exclusive
        (X) lock on row r,
        a request from some distinct transaction T2
        for a lock of either type on r cannot be
        granted immediately. Instead, transaction T2
        has to wait for transaction T1 to release its
        lock on row r.
        Additionally, InnoDB supports
        multiple granularity locking which permits
        coexistence of record locks and locks on entire tables. To make
        locking at multiple granularity levels practical, additional
        types of locks called
        intention locks are
        used. Intention locks are table locks in
        InnoDB that indicate which type of lock
        (shared or exclusive) a transaction will require later for a row
        in that table. There are two types of intention locks used in
        InnoDB (assume that transaction
        T has requested a lock of the indicated type
        on table t):
            Intention
            shared (IS): Transaction
            T intends to set
            S locks on individual rows in
            table t.
          
            Intention
            exclusive (IX):
            Transaction T intends to set
            X locks on those rows.
        For example, SELECT ...
        LOCK IN SHARE MODE sets an
        IS lock and
        SELECT ... FOR
        UPDATE sets an IX lock.
      
The intention locking protocol is as follows:
            Before a transaction can acquire an
            S lock on a row in table
            t, it must first acquire an
            IS or stronger lock on
            t.
          
            Before a transaction can acquire an
            X lock on a row, it must first
            acquire an IX lock on
            t.
These rules can be conveniently summarized by means of the following lock type compatibility matrix.
|  | X | IX | S | IS |
|---|---|---|---|---|
| X | Conflict | Conflict | Conflict | Conflict | 
| IX | Conflict | Compatible | Conflict | Compatible | 
| S | Conflict | Conflict | Compatible | Compatible | 
| IS | Conflict | Compatible | Compatible | Compatible | 
A lock is granted to a requesting transaction if it is compatible with existing locks, but not if it conflicts with existing locks. A transaction waits until the conflicting existing lock is released. If a lock request conflicts with an existing lock and cannot be granted because it would cause deadlock, an error occurs.
        Thus, intention locks do not block anything except full table
        requests (for example, LOCK TABLES ...
        WRITE). The main purpose of
        IX and IS
        locks is to show that someone is locking a row, or going to lock
        a row in the table.
The following example illustrates how an error can occur when a lock request would cause a deadlock. The example involves two clients, A and B.
        First, client A creates a table containing one row, and then
        begins a transaction. Within the transaction, A obtains an
        S lock on the row by selecting it in
        share mode:
      
mysql> CREATE TABLE t (i INT) ENGINE = InnoDB;
Query OK, 0 rows affected (1.07 sec)

mysql> INSERT INTO t (i) VALUES(1);
Query OK, 1 row affected (0.09 sec)

mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM t WHERE i = 1 LOCK IN SHARE MODE;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.10 sec)
Next, client B begins a transaction and attempts to delete the row from the table:
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> DELETE FROM t WHERE i = 1;
        The delete operation requires an X
        lock. The lock cannot be granted because it is incompatible with
        the S lock that client A holds, so
        the request goes on the queue of lock requests for the row and
        client B blocks.
      
Finally, client A also attempts to delete the row from the table:
mysql> DELETE FROM t WHERE i = 1;
ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction
        Deadlock occurs here because client A needs an
        X lock to delete the row. However,
        that lock request cannot be granted because client B already has
        a request for an X lock and is
        waiting for client A to release its S
        lock. Nor can the S lock held by A be
        upgraded to an X lock because of the
        prior request by B for an X lock. As
        a result, InnoDB generates an error for one
        of the clients and releases its locks. The client returns this
        error:
      
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
At that point, the lock request for the other client can be granted and it deletes the row from the table.
          If the LATEST DETECTED DEADLOCK section of
          InnoDB Monitor output includes a message stating,
          “TOO DEEP OR LONG SEARCH IN THE LOCK TABLE
          WAITS-FOR GRAPH, WE WILL ROLL BACK FOLLOWING
          TRANSACTION,” this indicates that the
          number of transactions on the wait-for list has reached a
          limit of 200, which is defined by
          LOCK_MAX_DEPTH_IN_DEADLOCK_CHECK. A
          wait-for list that exceeds 200 transactions is treated as a
          deadlock and the transaction attempting to check the wait-for
          list is rolled back.
        
          The same error may also occur if the locking thread must look
          at more than 1,000,000 locks owned by the transactions on the
          wait-for list. The limit of 1,000,000 locks is defined by
          LOCK_MAX_N_STEPS_IN_DEADLOCK_CHECK.
        A consistent read
        means that InnoDB uses multi-versioning to
        present to a query a snapshot of the database at a point in
        time. The query sees the changes made by transactions that
        committed before that point of time, and no changes made by
        later or uncommitted transactions. The exception to this rule is
        that the query sees the changes made by earlier statements
        within the same transaction. This exception causes the following
        anomaly: If you update some rows in a table, a
        SELECT sees the latest version of
        the updated rows, but it might also see older versions of any
        rows. If other sessions simultaneously update the same table,
        the anomaly means that you might see the table in a state that
        never existed in the database.
      
        If the transaction
        isolation level is
        REPEATABLE READ (the default
        level), all consistent reads within the same transaction read
        the snapshot established by the first such read in that
        transaction. You can get a fresher snapshot for your queries by
        committing the current transaction and after that issuing new
        queries.
      
        With READ COMMITTED isolation
        level, each consistent read within a transaction sets and reads
        its own fresh snapshot.
      
        Consistent read is the default mode in which
        InnoDB processes
        SELECT statements in
        READ COMMITTED and
        REPEATABLE READ isolation
        levels. A consistent read does not set any locks on the tables
        it accesses, and therefore other sessions are free to modify
        those tables at the same time a consistent read is being
        performed on the table.
      
        Suppose that you are running in the default
        REPEATABLE READ isolation
        level. When you issue a consistent read (that is, an ordinary
        SELECT statement),
        InnoDB gives your transaction a timepoint
        according to which your query sees the database. If another
        transaction deletes a row and commits after your timepoint was
        assigned, you do not see the row as having been deleted. Inserts
        and updates are treated similarly.
          The snapshot of the database state applies to
          SELECT statements within a
          transaction, not necessarily to
          DML statements. If you insert
          or modify some rows and then commit that transaction, a
          DELETE or
          UPDATE statement issued from
          another concurrent REPEATABLE READ
          transaction could affect those just-committed rows, even
          though the session could not query them. If a transaction does
          update or delete rows committed by a different transaction,
          those changes do become visible to the current transaction.
          For example, you might encounter a situation like the
          following:
        
SELECT COUNT(c1) FROM t1 WHERE c1 = 'xyz';
-- Returns 0: no rows match.
DELETE FROM t1 WHERE c1 = 'xyz';
-- Deletes several rows recently committed by other transaction.

SELECT COUNT(c2) FROM t1 WHERE c2 = 'abc';
-- Returns 0: no rows match.
UPDATE t1 SET c2 = 'cba' WHERE c2 = 'abc';
-- Affects 10 rows: another txn just committed 10 rows with 'abc' values.
SELECT COUNT(c2) FROM t1 WHERE c2 = 'cba';
-- Returns 10: this txn can now see the rows it just updated.
        You can advance your timepoint by committing your transaction
        and then doing another SELECT or
        START TRANSACTION WITH
        CONSISTENT SNAPSHOT.
      
This is called multi-versioned concurrency control.
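        A minimal sketch of advancing the timepoint (the table name is
        hypothetical):

COMMIT;                                       -- end the old read view
START TRANSACTION WITH CONSISTENT SNAPSHOT;   -- establish a fresh snapshot
SELECT * FROM t;                              -- sees rows committed before the new snapshot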
In the following example, session A sees the row inserted by B only when B has committed the insert and A has committed as well, so that the timepoint is advanced past the commit of B.
             Session A              Session B
           SET autocommit=0;      SET autocommit=0;
time
|          SELECT * FROM t;
|          empty set
|                                 INSERT INTO t VALUES (1, 2);
|
v          SELECT * FROM t;
           empty set
                                  COMMIT;
           SELECT * FROM t;
           empty set
           COMMIT;
           SELECT * FROM t;
           ---------------------
           |    1    |    2    |
           ---------------------
           1 row in set
        If you want to see the “freshest” state of the
        database, use either the READ
        COMMITTED isolation level or a
        locking read:
      
SELECT * FROM t LOCK IN SHARE MODE;
        With READ COMMITTED isolation
        level, each consistent read within a transaction sets and reads
        its own fresh snapshot. With LOCK IN SHARE
        MODE, a locking read occurs instead: A
        SELECT blocks until the transaction
        containing the freshest rows ends (see
        Section 14.2.2.3, “Locking Reads (SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE)”).
      
Consistent read does not work over certain DDL statements:
            Consistent read does not work over DROP
            TABLE, because MySQL cannot use a table that has
            been dropped and InnoDB destroys the
            table.
          
            Consistent read does not work over
            ALTER TABLE, because that
            statement makes a temporary copy of the original table and
            deletes the original table when the temporary copy is built.
            When you reissue a consistent read within a transaction,
            rows in the new table are not visible because those rows did
            not exist when the transaction's snapshot was taken. In this
            case, the transaction returns an error as of MySQL 5.6.6:
            ER_TABLE_DEF_CHANGED,
            “Table definition has changed, please retry
            transaction”.
        The type of read varies for selects in clauses like
        INSERT INTO ...
        SELECT, UPDATE
        ... (SELECT), and
        CREATE TABLE ...
        SELECT that do not specify FOR
        UPDATE or LOCK IN SHARE MODE:
            By default, InnoDB uses stronger locks
            and the SELECT part acts like
            READ COMMITTED, where
            each consistent read, even within the same transaction, sets
            and reads its own fresh snapshot.
          
            To use a consistent read in such cases, enable the
            innodb_locks_unsafe_for_binlog
            option and set the isolation level of the transaction to
            READ UNCOMMITTED,
            READ COMMITTED, or
            REPEATABLE READ (that is,
            anything other than
            SERIALIZABLE). In this
            case, no locks are set on rows read from the selected table.
        If you query data and then insert or update related data within
        the same transaction, the regular SELECT
        statement does not give enough protection. Other transactions
        can update or delete the same rows you just queried.
        InnoDB supports two types of
        locking reads that
        offer extra safety:
            SELECT ... LOCK IN
            SHARE MODE sets a shared mode lock on any rows
            that are read. Other sessions can read the rows, but cannot
            modify them until your transaction commits. If any of these
            rows were changed by another transaction that has not yet
            committed, your query waits until that transaction ends and
            then uses the latest values.
          
            For index records the search encounters,
            SELECT ... FOR
            UPDATE locks the rows and any associated index
            entries, the same as if you issued an
            UPDATE statement for those rows. Other
            transactions are blocked from updating those rows, from
            doing SELECT ...
            LOCK IN SHARE MODE, or from reading the data in
            certain transaction isolation levels. Consistent reads
            ignore any locks set on the records that exist in the read
            view. (Old versions of a record cannot be locked; they are
            reconstructed by applying undo
            logs on an in-memory copy of the record.)
These clauses are primarily useful when dealing with tree-structured or graph-structured data, either in a single table or split across multiple tables. You traverse edges or tree branches from one place to another, while reserving the right to come back and change any of these “pointer” values.
        All locks set by LOCK IN SHARE MODE and
        FOR UPDATE queries are released when the
        transaction is committed or rolled back.
          Locking of rows for update using SELECT FOR
          UPDATE only applies when autocommit is disabled
          (either by beginning transaction with
          START
          TRANSACTION or by setting
          autocommit to 0). If
          autocommit is enabled, the rows matching the specification are
          not locked.
        Suppose that you want to insert a new row into a table
        child, and make sure that the child row has a
        parent row in table parent. Your application
        code can ensure referential integrity throughout this sequence
        of operations.
      
        First, use a consistent read to query the table
        PARENT and verify that the parent row exists.
        Can you safely insert the child row to table
        CHILD? No, because some other session could
        delete the parent row in the moment between your
        SELECT and your INSERT,
        without you being aware of it.
      
        To avoid this potential issue, perform the
        SELECT using LOCK IN
        SHARE MODE:
      
SELECT * FROM parent WHERE NAME = 'Jones' LOCK IN SHARE MODE;
        After the LOCK IN SHARE MODE query returns
        the parent 'Jones', you can safely add the
        child record to the CHILD table and commit
        the transaction. Any transaction that tries to read or write to
        the applicable row in the PARENT table waits
        until you are finished, that is, the data in all tables is in a
        consistent state.
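        Putting the sequence together (the child column name is
        hypothetical):

START TRANSACTION;
SELECT * FROM parent WHERE NAME = 'Jones' LOCK IN SHARE MODE;
-- The parent row exists and cannot be deleted by another session,
-- so the child row can be added safely.
INSERT INTO child (parent_name) VALUES ('Jones');
COMMIT;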
      
        For another example, consider an integer counter field in a
        table CHILD_CODES, used to assign a unique
        identifier to each child added to table
        CHILD. Do not use either consistent read or a
        shared mode read to read the present value of the counter,
        because two users of the database could see the same value for
        the counter, and a duplicate-key error occurs if two
        transactions attempt to add rows with the same identifier to the
        CHILD table.
      
        Here, LOCK IN SHARE MODE is not a good
        solution because if two users read the counter at the same time,
        at least one of them ends up in deadlock when it attempts to
        update the counter.
      
        To implement reading and incrementing the counter, first perform
        a locking read of the counter using FOR
        UPDATE, and then increment the counter. For example:
      
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
        A SELECT ... FOR
        UPDATE reads the latest available data, setting
        exclusive locks on each row it reads. Thus, it sets the same
        locks a searched SQL UPDATE would
        set on the rows.
      
        The preceding description is merely an example of how
        SELECT ... FOR
        UPDATE works. In MySQL, the specific task of
        generating a unique identifier actually can be accomplished
        using only a single access to the table:
      
UPDATE child_codes SET counter_field = LAST_INSERT_ID(counter_field + 1);
SELECT LAST_INSERT_ID();
        The SELECT statement merely
        retrieves the identifier information (specific to the current
        connection). It does not access any table.
        InnoDB has several types of record-level
        locks including record locks, gap locks, and next-key locks. For
        information about shared locks, exclusive locks, and intention
        locks, see Section 14.2.2.1, “InnoDB Lock Modes”.
Record lock: This is a lock on an index record.
Gap lock: This is a lock on a gap between index records, or a lock on the gap before the first or after the last index record.
Next-key lock: This is a combination of a record lock on the index record and a gap lock on the gap before the index record.
        Record locks always lock index records, even if a table is
        defined with no indexes. For such cases,
        InnoDB creates a hidden clustered index and
        uses this index for record locking. See
        Section 14.2.6.2, “Clustered and Secondary Indexes”.
        By default, InnoDB operates in
        REPEATABLE READ transaction
        isolation level and with the
        innodb_locks_unsafe_for_binlog
        system variable disabled. In this case,
        InnoDB uses next-key locks for searches and
        index scans, which prevents phantom rows (see
        Section 14.2.2.5, “Avoiding the Phantom Problem Using Next-Key Locking”).
      
        Next-key locking combines index-row locking with gap locking.
        InnoDB performs row-level locking in such a
        way that when it searches or scans a table index, it sets shared
        or exclusive locks on the index records it encounters. Thus, the
        row-level locks are actually index-record locks. In addition, a
        next-key lock on an index record also affects the
        “gap” before that index record. That is, a next-key
        lock is an index-record lock plus a gap lock on the gap
        preceding the index record. If one session has a shared or
        exclusive lock on record R in an index,
        another session cannot insert a new index record in the gap
        immediately before R in the index order.
      
        Suppose that an index contains the values 10, 11, 13, and 20.
        The possible next-key locks for this index cover the following
        intervals, where ( or )
        denote exclusion of the interval endpoint and
        [ or ] denote inclusion of
        the endpoint:
      
(negative infinity, 10]
(10, 11]
(11, 13]
(13, 20]
(20, positive infinity)
For the last interval, the next-key lock locks the gap above the largest value in the index and the “supremum” pseudo-record having a value higher than any value actually in the index. The supremum is not a real index record, so, in effect, this next-key lock locks only the gap following the largest index value.
The next-key locking example in the previous section shows that a gap might span a single index value, multiple index values, or even be empty.
        Gap locking is not needed for statements that lock rows using a
        unique index to search for a unique row. (This does not include
        the case that the search condition includes only some columns of
        a multiple-column unique index; in that case, gap locking does
        occur.) For example, if the id column has a
        unique index, the following statement uses only an index-record
        lock for the row having id value 100 and it
        does not matter whether other sessions insert rows in the
        preceding gap:
      
SELECT * FROM child WHERE id = 100;
        If id is not indexed or has a nonunique
        index, the statement does lock the preceding gap.
      
        A type of gap lock called an insert intention gap lock is set by
        INSERT operations prior to row
        insertion. This lock signals the intent to insert in such a way
        that multiple transactions inserting into the same index gap
        need not wait for each other if they are not inserting at the
        same position within the gap. Suppose that there are index
        records with values of 4 and 7. Separate transactions that
        attempt to insert values of 5 and 6 each lock the gap between 4
        and 7 with insert intention locks prior to obtaining the
        exclusive lock on the inserted row, but do not block each other
        because the rows are nonconflicting.
      
It is also worth noting here that conflicting locks can be held on a gap by different transactions. For example, transaction A can hold a shared gap lock (gap S-lock) on a gap while transaction B holds an exclusive gap lock (gap X-lock) on the same gap. The reason conflicting gap locks are allowed is that if a record is purged from an index, the gap locks held on the record by different transactions must be merged.
        Gap locks in InnoDB are “purely
        inhibitive”, which means they only stop other
        transactions from inserting to the gap. Thus, a gap X-lock has
        the same effect as a gap S-lock.
        Gap locking can be disabled explicitly. This occurs if you
        change the transaction isolation level to
        READ COMMITTED or enable the
        innodb_locks_unsafe_for_binlog
        system variable (which is now deprecated). Under these
        circumstances, gap locking is disabled for searches and index
        scans and is used only for foreign-key constraint checking and
        duplicate-key checking.
      
        There are also other effects of using the
        READ COMMITTED isolation
        level or enabling
        innodb_locks_unsafe_for_binlog:
        Record locks for nonmatching rows are released after MySQL has
        evaluated the WHERE condition. For
        UPDATE statements, InnoDB
        does a “semi-consistent” read, such that it returns
        the latest committed version to MySQL so that MySQL can
        determine whether the row matches the WHERE
        condition of the UPDATE.
        The so-called phantom problem occurs
        within a transaction when the same query produces different sets
        of rows at different times. For example, if a
        SELECT is executed twice, but
        returns a row the second time that was not returned the first
        time, the row is a “phantom” row.
      
        Suppose that there is an index on the id
        column of the child table and that you want
        to read and lock all rows from the table having an identifier
        value larger than 100, with the intention of updating some
        column in the selected rows later:
      
SELECT * FROM child WHERE id > 100 FOR UPDATE;
        The query scans the index starting from the first record where
        id is bigger than 100. Let the table contain
        rows having id values of 90 and 102. If the
        locks set on the index records in the scanned range do not lock
        out inserts made in the gaps (in this case, the gap between 90
        and 102), another session can insert a new row into the table
        with an id of 101. If you were to execute the
        same SELECT within the same
        transaction, you would see a new row with an
        id of 101 (a “phantom”) in the
        result set returned by the query. If we regard a set of rows as
        a data item, the new phantom child would violate the isolation
        principle of transactions that a transaction should be able to
        run so that the data it has read does not change during the
        transaction.
      
        To prevent phantoms, InnoDB uses an algorithm
        called next-key locking that combines
        index-row locking with gap locking. InnoDB
        performs row-level locking in such a way that when it searches
        or scans a table index, it sets shared or exclusive locks on the
        index records it encounters. Thus, the row-level locks are
        actually index-record locks. In addition, a next-key lock on an
        index record also affects the “gap” before that
        index record. That is, a next-key lock is an index-record lock
        plus a gap lock on the gap preceding the index record. If one
        session has a shared or exclusive lock on record
        R in an index, another session cannot insert
        a new index record in the gap immediately before
        R in the index order.
      
        When InnoDB scans an index, it can also lock
        the gap after the last record in the index. That is what happens in
        the preceding example: To prevent any insert into the table
        where id would be bigger than 100, the locks
        set by InnoDB include a lock on the gap
        following id value 102.
      
You can use next-key locking to implement a uniqueness check in your application: If you read your data in share mode and do not see a duplicate for a row you are going to insert, then you can safely insert your row and know that the next-key lock set on the successor of your row during the read prevents anyone meanwhile inserting a duplicate for your row. Thus, the next-key locking enables you to “lock” the nonexistence of something in your table.
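A sketch of this pattern, with hypothetical identifiers:

START TRANSACTION;
SELECT * FROM child WHERE id = 101 LOCK IN SHARE MODE;
-- Empty result: no duplicate exists, and the next-key lock taken by the read
-- blocks other sessions from inserting id = 101 until this transaction ends.
INSERT INTO child (id) VALUES (101);
COMMIT;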
Gap locking can be disabled as discussed in Section 14.2.2.4, “InnoDB Record, Gap, and Next-Key Locks”. This may cause phantom problems because other sessions can insert new rows into the gaps when gap locking is disabled.
        A locking read, an
        UPDATE, or a
        DELETE generally sets record locks
        on every index record that is scanned in the processing of the
        SQL statement. It does not matter whether there are
        WHERE conditions in the statement that would
        exclude the row. InnoDB does not remember the
        exact WHERE condition, but only knows which
        index ranges were scanned. The locks are normally
        next-key locks that
        also block inserts into the “gap” immediately
        before the record. However, gap
        locking can be disabled explicitly, which causes next-key
        locking not to be used. For more information, see
        Section 14.2.2.4, “InnoDB Record, Gap, and Next-Key Locks”. The transaction
        isolation level also can affect which locks are set; see
        Section 13.3.6, “SET TRANSACTION Syntax”.
      
        If a secondary index is used in a search and index record locks
        to be set are exclusive, InnoDB also
        retrieves the corresponding clustered index records and sets
        locks on them.
      
Differences between shared and exclusive locks are described in Section 14.2.2.1, “InnoDB Lock Modes”.
If you have no indexes suitable for your statement and MySQL must scan the entire table to process the statement, every row of the table becomes locked, which in turn blocks all inserts by other users to the table. It is important to create good indexes so that your queries do not unnecessarily scan many rows.
        For SELECT ... FOR
        UPDATE or
        SELECT ... LOCK IN SHARE
        MODE, locks are acquired for scanned rows, and
        expected to be released for rows that do not qualify for
        inclusion in the result set (for example, if they do not meet
        the criteria given in the WHERE clause).
        However, in some cases, rows might not be unlocked immediately
        because the relationship between a result row and its original
        source is lost during query execution. For example, in a
        UNION, scanned (and locked) rows
        from a table might be inserted into a temporary table before
        evaluation whether they qualify for the result set. In this
        circumstance, the relationship of the rows in the temporary
        table to the rows in the original table is lost and the latter
        rows are not unlocked until the end of query execution.
      
        InnoDB sets specific types of locks as
        follows.
            SELECT ...
            FROM is a consistent read, reading a snapshot of
            the database and setting no locks unless the transaction
            isolation level is set to
            SERIALIZABLE. For
            SERIALIZABLE level, the
            search sets shared next-key locks on the index records it
            encounters.
          
            SELECT ... FROM ...
            LOCK IN SHARE MODE sets shared next-key locks on
            all index records the search encounters.
          
            For index records the search encounters,
            SELECT ... FROM ...
            FOR UPDATE blocks other sessions from doing
            SELECT ... FROM ...
            LOCK IN SHARE MODE or from reading in certain
            transaction isolation levels. Consistent reads will ignore
            any locks set on the records that exist in the read view.
          
            UPDATE ... WHERE
            ... sets an exclusive next-key lock on every
            record the search encounters.
          
            DELETE FROM ...
            WHERE ... sets an exclusive next-key lock on every
            record the search encounters.
          
            INSERT sets an exclusive lock
            on the inserted row. This lock is an index-record lock, not
            a next-key lock (that is, there is no gap lock) and does not
            prevent other sessions from inserting into the gap before
            the inserted row.
          
Prior to inserting the row, a type of gap lock called an insertion intention gap lock is set. This lock signals the intent to insert in such a way that multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose that there are index records with values of 4 and 7. Separate transactions that attempt to insert values of 5 and 6 each lock the gap between 4 and 7 with insert intention locks prior to obtaining the exclusive lock on the inserted row, but do not block each other because the rows are nonconflicting.
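The scenario can be reproduced with a sketch such as the following, assuming a hypothetical child table whose indexed id column already contains the values 4 and 7:

Session A:
START TRANSACTION;
INSERT INTO child (id) VALUES (5);
Session B:
START TRANSACTION;
INSERT INTO child (id) VALUES (6);
Each INSERT sets an insert intention lock on the gap between 4 and 7 before taking the exclusive record lock on its own new row, so neither session blocks the other.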
            If a duplicate-key error occurs, a shared lock on the
            duplicate index record is set. This use of a shared lock can
            result in deadlock should there be multiple sessions trying
            to insert the same row if another session already has an
            exclusive lock. This can occur if another session deletes
            the row. Suppose that an InnoDB table
            t1 has the following structure:
          
CREATE TABLE t1 (i INT, PRIMARY KEY (i)) ENGINE = InnoDB;
Now suppose that three sessions perform the following operations in order:
Session 1:
START TRANSACTION; INSERT INTO t1 VALUES(1);
Session 2:
START TRANSACTION; INSERT INTO t1 VALUES(1);
Session 3:
START TRANSACTION; INSERT INTO t1 VALUES(1);
Session 1:
ROLLBACK;
The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 rolls back, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.
A similar situation occurs if the table already contains a row with key value 1 and three sessions perform the following operations in order:
Session 1:
START TRANSACTION; DELETE FROM t1 WHERE i = 1;
Session 2:
START TRANSACTION; INSERT INTO t1 VALUES(1);
Session 3:
START TRANSACTION; INSERT INTO t1 VALUES(1);
Session 1:
COMMIT;
The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 commits, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.
            INSERT
            ... ON DUPLICATE KEY UPDATE differs from a simple
            INSERT in that an exclusive
            next-key lock rather than a shared lock is placed on the row
            to be updated when a duplicate-key error occurs.
          
            REPLACE is done like an
            INSERT if there is no
            collision on a unique key. Otherwise, an exclusive next-key
            lock is placed on the row to be replaced.
          
            INSERT INTO T SELECT ... FROM S WHERE ...
            sets an exclusive index record lock (without a gap lock) on
            each row inserted into T. If the
            transaction isolation level is READ
            COMMITTED, or
            innodb_locks_unsafe_for_binlog
            is enabled and the transaction isolation level is not
            SERIALIZABLE,
            InnoDB does the search on
            S as a consistent read (no locks).
            Otherwise, InnoDB sets shared next-key
            locks on rows from S.
            InnoDB has to set locks in the latter
            case: In roll-forward recovery from a backup, every SQL
            statement must be executed in exactly the same way it was
            done originally.
          
            CREATE TABLE
            ... SELECT ... performs the
            SELECT with shared next-key
            locks or as a consistent read, as for
            INSERT ...
            SELECT.
          
            When a SELECT is used in the constructs
            REPLACE INTO t SELECT ... FROM s WHERE
            ... or UPDATE t ... WHERE col IN (SELECT
            ... FROM s ...), InnoDB sets
            shared next-key locks on rows from table
            s.
          
            While initializing a previously specified
            AUTO_INCREMENT column on a table,
            InnoDB sets an exclusive lock on the end
            of the index associated with the
            AUTO_INCREMENT column. In accessing the
            auto-increment counter, InnoDB uses a
            specific AUTO-INC table lock mode where
            the lock lasts only to the end of the current SQL statement,
            not to the end of the entire transaction. Other sessions
            cannot insert into the table while the
            AUTO-INC table lock is held; see
            Section 14.2.2, “The InnoDB Transaction Model and Locking”.
          
            InnoDB fetches the value of a previously
            initialized AUTO_INCREMENT column without
            setting any locks.
          
            If a FOREIGN KEY constraint is defined on
            a table, any insert, update, or delete that requires the
            constraint condition to be checked sets shared record-level
            locks on the records that it looks at to check the
            constraint. InnoDB also sets these locks
            in the case where the constraint fails.
          
            LOCK TABLES sets table locks,
            but it is the higher MySQL layer above the
            InnoDB layer that sets these locks.
            InnoDB is aware of table locks if
            innodb_table_locks = 1 (the default) and
            autocommit = 0, and the
            MySQL layer above InnoDB knows about
            row-level locks.
          
            Otherwise, InnoDB's automatic deadlock
            detection cannot detect deadlocks where such table locks are
            involved. Also, because in this case the higher MySQL layer
            does not know about row-level locks, it is possible to get a
            table lock on a table where another session currently has
            row-level locks. However, this does not endanger transaction
            integrity, as discussed in
            Section 14.2.2.8, “Deadlock Detection and Rollback”. See also
            Section 14.5.7, “Limits on InnoDB Tables”.
By default, MySQL starts the session for each new connection with autocommit mode enabled, so MySQL does a commit after each SQL statement if that statement did not return an error. If a statement returns an error, the commit or rollback behavior depends on the error. See Section 14.18.4, “InnoDB Error Handling”.
If a session that has autocommit disabled ends without explicitly committing the final transaction, MySQL rolls back that transaction.
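For example, to group several statements into a single transaction while autocommit remains enabled (reusing the t1 table from the earlier example), bracket them explicitly:

START TRANSACTION;
INSERT INTO t1 VALUES(2);
INSERT INTO t1 VALUES(3);
COMMIT;
Alternatively, after SET autocommit = 0, the first statement implicitly starts a transaction, and none of the changes become permanent until you issue COMMIT.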
        Some statements implicitly end a transaction, as if you had done
        a COMMIT before executing the
        statement. For details, see Section 13.3.3, “Statements That Cause an Implicit Commit”.
        InnoDB automatically detects transaction
        deadlocks and rolls back a
        transaction or transactions to break the deadlock.
        InnoDB tries to pick small transactions to
        roll back, where the size of a transaction is determined by the
        number of rows inserted, updated, or deleted.
      
        InnoDB is aware of table locks if
        innodb_table_locks = 1 (the default) and
        autocommit = 0, and the MySQL
        layer above it knows about row-level locks. Otherwise,
        InnoDB cannot detect deadlocks where a table
        lock set by a MySQL LOCK TABLES
        statement or a lock set by a storage engine other than
        InnoDB is involved. Resolve these situations
        by setting the value of the
        innodb_lock_wait_timeout system
        variable.
      
        When InnoDB performs a complete rollback of a
        transaction, all locks set by the transaction are released.
        However, if just a single SQL statement is rolled back as a
        result of an error, some of the locks set by the statement may
        be preserved. This happens because InnoDB
        stores row locks in a format such that it cannot know afterward
        which lock was set by which statement.
      
        If a SELECT calls a stored
        function in a transaction, and a statement within the function
        fails, that statement rolls back. Furthermore, if
        ROLLBACK is
        executed after that, the entire transaction rolls back.
      
For techniques to organize database operations to avoid deadlocks, see Section 14.2.2.9, “How to Cope with Deadlocks”.
This section builds on the conceptual information about deadlocks in Section 14.2.2.8, “Deadlock Detection and Rollback”. It explains how to organize database operations to minimize deadlocks and the subsequent error handling required in applications.
Deadlocks are a classic problem in transactional databases, but they are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
        InnoDB uses automatic row-level locking. You
        can get deadlocks even in the case of transactions that just
        insert or delete a single row. That is because these operations
        are not really “atomic”; they automatically set
        locks on the (possibly several) index records of the row
        inserted or deleted.
      
You can cope with deadlocks and reduce the likelihood of their occurrence with the following techniques:
            At any time, issue the
            SHOW ENGINE
            INNODB STATUS command to determine the cause of
            the most recent deadlock. That can help you to tune your
            application to avoid deadlocks.
          
            If frequent deadlock warnings cause concern, collect more
            extensive debugging information by enabling the
            innodb_print_all_deadlocks
            configuration option. Information about each deadlock, not
            just the latest one, is recorded in the MySQL
            error log. Disable
            this option when you are finished debugging.
          
Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous. Just try again.
Keep transactions small and short in duration to make them less prone to collision.
Commit transactions immediately after making a set of related changes to make them less prone to collision. In particular, do not leave an interactive mysql session open for a long time with an uncommitted transaction.
            If you use locking
            reads (SELECT
            ... FOR UPDATE or
            SELECT ... LOCK IN SHARE
            MODE), try using a lower isolation level such as
            READ COMMITTED.
          
            When modifying multiple tables within a transaction, or
            different sets of rows in the same table, do those
            operations in a consistent order each time. Then
            transactions form well-defined queues and do not deadlock.
            For example, organize database operations into functions
            within your application, or call stored routines, rather
            than coding multiple similar sequences of
            INSERT, UPDATE, and
            DELETE statements in different places.
          
            Add well-chosen indexes to your tables. Then your queries
            need to scan fewer index records and consequently set fewer
            locks. Use EXPLAIN
            SELECT to determine which indexes the MySQL server
            regards as the most appropriate for your queries.
          
            Use less locking. If you can afford to permit a
            SELECT to return data from an
            old snapshot, do not add the clause FOR
            UPDATE or LOCK IN SHARE MODE to
            it. Using the READ
            COMMITTED isolation level is good here, because
            each consistent read within the same transaction reads from
            its own fresh snapshot.
          
            If nothing else helps, serialize your transactions with
            table-level locks. The correct way to use
            LOCK TABLES with
            transactional tables, such as InnoDB
            tables, is to begin a transaction with SET
            autocommit = 0 (not
            START
            TRANSACTION) followed by LOCK
            TABLES, and to not call
            UNLOCK
            TABLES until you commit the transaction
            explicitly. For example, if you need to write to table
            t1 and read from table
            t2, you can do this:
          
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
Table-level locks prevent concurrent updates to the table, avoiding deadlocks at the expense of less responsiveness for a busy system.
            Another way to serialize transactions is to create an
            auxiliary “semaphore” table that contains just
            a single row. Have each transaction update that row before
            accessing other tables. In that way, all transactions happen
            in a serial fashion. Note that the InnoDB
            instant deadlock detection algorithm also works in this
            case, because the serializing lock is a row-level lock. With
            MySQL table-level locks, the timeout method must be used to
            resolve deadlocks.
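A minimal sketch of the semaphore-table technique (the table and column names here are only illustrative):

CREATE TABLE app_semaphore (id INT PRIMARY KEY, counter INT) ENGINE=InnoDB;
INSERT INTO app_semaphore VALUES (1, 0);

START TRANSACTION;
UPDATE app_semaphore SET counter = counter + 1 WHERE id = 1;
... do the real work of the transaction here ...
COMMIT;
The UPDATE on the single row acts as a row-level serializing lock, so InnoDB deadlock detection still applies to transactions that follow this pattern.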
      InnoDB is a
      multi-versioned storage engine:
      it keeps information about old versions of changed rows, to
      support transactional features such as concurrency and
      rollback. This information is
      stored in the tablespace in a data structure called a
      rollback segment
      (after an analogous data structure in Oracle).
      InnoDB uses the information in the rollback
      segment to perform the undo operations needed in a transaction
      rollback. It also uses the information to build earlier versions
      of a row for a consistent
      read.
    
      Internally, InnoDB adds three fields to each
      row stored in the database. A 6-byte DB_TRX_ID
      field indicates the transaction identifier for the last
      transaction that inserted or updated the row. Also, a deletion is
      treated internally as an update where a special bit in the row is
      set to mark it as deleted. Each row also contains a 7-byte
      DB_ROLL_PTR field called the roll pointer. The
      roll pointer points to an undo log record written to the rollback
      segment. If the row was updated, the undo log record contains the
      information necessary to rebuild the content of the row before it
      was updated. A 6-byte DB_ROW_ID field contains
      a row ID that increases monotonically as new rows are inserted. If
      InnoDB generates a clustered index
      automatically, the index contains row ID values. Otherwise, the
      DB_ROW_ID column does not appear in any index.
    
      Undo logs in the rollback segment are divided into insert and
      update undo logs. Insert undo logs are needed only in transaction
      rollback and can be discarded as soon as the transaction commits.
      Update undo logs are used also in consistent reads, but they can
      be discarded only after there is no transaction present for which
      InnoDB has assigned a snapshot that in a
      consistent read could need the information in the update undo log
      to build an earlier version of a database row.
    
      Commit your transactions regularly, including those transactions
      that issue only consistent reads. Otherwise,
      InnoDB cannot discard data from the update undo
      logs, and the rollback segment may grow too big, filling up your
      tablespace.
    
The physical size of an undo log record in the rollback segment is typically smaller than the corresponding inserted or updated row. You can use this information to calculate the space needed for your rollback segment.
      In the InnoDB multi-versioning scheme, a row is
      not physically removed from the database immediately when you
      delete it with an SQL statement. InnoDB only
      physically removes the corresponding row and its index records
      when it discards the update undo log record written for the
      deletion. This removal operation is called a
      purge, and it is quite fast,
      usually taking the same order of time as the SQL statement that
      did the deletion.
    
      If you insert and delete rows in smallish batches at about the
      same rate in the table, the purge thread can start to lag behind
      and the table can grow bigger and bigger because of all the
      “dead” rows, making everything disk-bound and very
      slow. In such a case, throttle new row operations, and allocate
      more resources to the purge thread by tuning the
      innodb_max_purge_lag system
      variable. See Section 14.11, “InnoDB Startup Options and System Variables” for more
      information.
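For example, the purge lag threshold can be adjusted at runtime; the value shown here is only illustrative, and an appropriate setting depends on your workload:

SET GLOBAL innodb_max_purge_lag = 1000000;
With this setting, InnoDB begins delaying INSERT, UPDATE, and DELETE operations once the backlog of purgeable undo records exceeds one million, giving the purge thread a chance to catch up.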
      The redo log is a disk-based data structure used during crash
      recovery to correct data written by incomplete transactions.
      During normal operations, the redo log encodes requests to change
      InnoDB table data, which result from SQL
      statements or low-level API calls. Modifications that did not
      finish updating the data files before an unexpected shutdown are
      replayed automatically during initialization, and before the
      connections are accepted. For information about the role of the
      redo log in crash recovery, see Section 14.15.1, “The InnoDB Recovery Process”.
    
      By default, the redo log is physically represented on disk as a
      set of files, named ib_logfile0 and
      ib_logfile1. MySQL writes to the redo log
      files in a circular fashion. Data in the redo log is encoded in
      terms of records affected; this data is collectively referred to
      as redo. The passage of data through the redo log is represented
      by an ever-increasing LSN value.
    
Disk layout for the redo log is configured using the following options:
          innodb_log_file_size: Defines
          the size of each redo log file in bytes. By default, redo log
          files are 50331648 bytes (48MB) in size. The combined size of
          log files
          (innodb_log_file_size *
          innodb_log_files_in_group)
          cannot exceed a maximum value that is slightly less than
          512GB.
        
          innodb_log_files_in_group:
          The number of log files in the log group. The default is to
          create two files named ib_logfile0 and
          ib_logfile1.
        
          innodb_log_group_home_dir:
          The directory path to the InnoDB log files.
          If you do not specify a value, the log files are created in
          the MySQL data directory
          (datadir).
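For example, a configuration file fragment might set these options as follows; the values and path are only illustrative and should be sized for your workload:

[mysqld]
innodb_log_group_home_dir = /var/lib/mysql
innodb_log_files_in_group = 2
innodb_log_file_size = 512M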
To change your initial redo log configuration, refer to Section 14.4.2, “Changing the Number or Size of InnoDB Redo Log Files”. For information about optimizing redo logging, see Section 8.5.4, “Optimizing InnoDB Redo Logging”.
InnoDB, like any other ACID-compliant database engine, flushes the redo log of a transaction before it is committed. InnoDB uses group commit functionality to group multiple such flush requests together to avoid one flush for each commit. With group commit, InnoDB issues a single write to the log file to perform the commit action for multiple user transactions that commit at about the same time, significantly improving throughput.
        For more information about performance of
        COMMIT and other transactional operations,
        see Section 8.5.2, “Optimizing InnoDB Transaction Management”.
An undo log (or rollback segment) is a storage area that holds copies of data modified by active transactions. If another transaction needs to see the original data (as part of a consistent read operation), the unmodified data is retrieved from this storage area. By default, this area is physically part of the system tablespace. However, as of MySQL 5.6.3, undo logs can reside in separate undo tablespaces. For more information, see Section 14.4.7, “Storing InnoDB Undo Logs in Separate Tablespaces”. For more information about undo logs and multi-versioning, see Section 14.2.3, “InnoDB Multi-Versioning”.
      InnoDB supports 128 undo logs, each supporting
      up to 1023 concurrent data-modifying transactions, for a total
      limit of approximately 128K concurrent data-modifying transactions
      (read-only transactions do not count against the maximum limit).
      Each transaction is assigned to one of the undo logs, and remains
      tied to that undo log for the duration. The
      innodb_undo_logs option defines
      how many undo logs are used by InnoDB.
      This section describes how InnoDB tables,
      indexes, and their associated metadata is represented at the
      physical level. This information is primarily useful for
      performance tuning and troubleshooting.
        MySQL stores its data dictionary information for tables in
        .frm files in database
        directories. Unlike other MySQL storage engines,
        InnoDB also encodes information about the
        table in its own internal data dictionary inside the tablespace.
        When MySQL drops a table or a database, it deletes one or more
        .frm files as well as the corresponding
        entries inside the InnoDB data dictionary.
        You cannot move InnoDB tables between
        databases simply by moving the .frm files.
        Every InnoDB table has a special index called
        the clustered index
        where the data for the rows is stored. Typically, the clustered
        index is synonymous with the
        primary key. To get the
        best performance from queries, inserts, and other database
        operations, you must understand how InnoDB uses the clustered
        index to optimize the most common lookup and DML operations for
        each table.
            When you define a PRIMARY KEY on your
            table, InnoDB uses it as the clustered
            index. Define a primary key for each table that you create.
            If there is no logical unique and non-null column or set of
            columns, add a new
            auto-increment
            column, whose values are filled in automatically.
          
            If you do not define a PRIMARY KEY for
            your table, MySQL locates the first
            UNIQUE index where all the key columns
            are NOT NULL and
            InnoDB uses it as the clustered index.
          
            If the table has no PRIMARY KEY or
            suitable UNIQUE index,
            InnoDB internally generates a hidden
            clustered index on a synthetic column containing row ID
            values. The rows are ordered by the ID that
            InnoDB assigns to the rows in such a
            table. The row ID is a 6-byte field that increases
            monotonically as new rows are inserted. Thus, the rows
            ordered by the row ID are physically in insertion order.
        Accessing a row through the clustered index is fast because the
        index search leads directly to the page with all the row data.
        If a table is large, the clustered index architecture often
        saves a disk I/O operation when compared to storage
        organizations that store row data using a different page from
        the index record. (For example, MyISAM uses
        one file for data rows and another for index records.)
        All indexes other than the clustered index are known as
        secondary indexes.
        In InnoDB, each record in a secondary index
        contains the primary key columns for the row, as well as the
        columns specified for the secondary index.
        InnoDB uses this primary key value to search
        for the row in the clustered index.
      
If the primary key is long, the secondary indexes use more space, so it is advantageous to have a short primary key.
        For coding guidelines to take advantage of
        InnoDB clustered and secondary indexes, see
        Section 8.3.2, “Using Primary Keys”,
        Section 8.3, “Optimization and Indexes”, and
        Section 8.5, “Optimizing for InnoDB Tables”.
        FULLTEXT indexes are created on text-based
        columns (CHAR,
        VARCHAR, or
        TEXT columns) to help speed up
        queries and DML operations on data contained within those
        columns, omitting any words that are defined as stopwords.
      
        A FULLTEXT index can be defined as part of a
        CREATE TABLE statement, or added
        later using ALTER TABLE or
        CREATE INDEX.
      
        Full-text searching is performed using
        MATCH() ... AGAINST syntax. For
        usage information, see Section 12.9, “Full-Text Search Functions”.
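For example, a natural-language search against the opening_lines table defined later in this section might look like this:

SELECT id, title FROM opening_lines
  WHERE MATCH(opening_line) AGAINST('Ishmael' IN NATURAL LANGUAGE MODE);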
        InnoDB FULLTEXT indexes
        have an inverted index design. Inverted indexes store a list of
        words, and for each word, a list of documents that the word
        appears in. To support proximity search, position information
        for each word is also stored, as a byte offset.
        For each InnoDB FULLTEXT
        index, a set of index tables is created, as shown in the
        following example:
      
CREATE TABLE opening_lines (
       id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
       opening_line TEXT(500),
       author VARCHAR(200),
       title VARCHAR(200),
       FULLTEXT idx (opening_line)
       ) ENGINE=InnoDB;

mysql> SELECT table_id, name, space from INFORMATION_SCHEMA.INNODB_SYS_TABLES
       WHERE name LIKE 'test/%';
+----------+----------------------------------------------------+-------+
| table_id | name                                               | space |
+----------+----------------------------------------------------+-------+
|      333 | test/FTS_0000000000000147_00000000000001c9_INDEX_1 |   289 |
|      334 | test/FTS_0000000000000147_00000000000001c9_INDEX_2 |   290 |
|      335 | test/FTS_0000000000000147_00000000000001c9_INDEX_3 |   291 |
|      336 | test/FTS_0000000000000147_00000000000001c9_INDEX_4 |   292 |
|      337 | test/FTS_0000000000000147_00000000000001c9_INDEX_5 |   293 |
|      338 | test/FTS_0000000000000147_00000000000001c9_INDEX_6 |   294 |
|      330 | test/FTS_0000000000000147_BEING_DELETED            |   286 |
|      331 | test/FTS_0000000000000147_BEING_DELETED_CACHE      |   287 |
|      332 | test/FTS_0000000000000147_CONFIG                   |   288 |
|      328 | test/FTS_0000000000000147_DELETED                  |   284 |
|      329 | test/FTS_0000000000000147_DELETED_CACHE            |   285 |
|      327 | test/opening_lines                                 |   283 |
+----------+----------------------------------------------------+-------+
12 rows in set (0.02 sec)
        The first six tables represent the inverted index and are
        referred to as auxiliary index tables. When incoming documents
        are tokenized, the individual words (also referred to as
        “tokens”) are inserted into the index tables along
        with position information and the associated Document ID
        (DOC_ID). The words are fully sorted and
        partitioned among the six index tables based on the character
        set sort weight of the word's first character.
      
        The inverted index is partitioned into six auxiliary index
        tables to support parallel index creation. By default, two
        threads tokenize, sort, and insert words and associated data
        into the index tables. The number of threads is configurable
        using the
        innodb_ft_sort_pll_degree
        option. When creating FULLTEXT indexes on
        large tables, consider increasing the number of threads.
      
        Auxiliary index table names are prefixed with
        FTS_ and postfixed with
        INDEX_*. Each index table is associated with
        the indexed table by a hex value in the index table name that
        matches the table_id of the indexed table.
        For example, the table_id of the
        test/opening_lines table is
        327, for which the hex value is 0x147. As
        shown in the preceding example, the “147” hex value
        appears in the names of index tables that are associated with
        the test/opening_lines table.
      
        A hex value representing the index_id of the
        FULLTEXT index also appears in auxiliary
        index table names. For example, in the auxiliary table name
        test/FTS_0000000000000147_00000000000001c9_INDEX_1,
        the hex value 1c9 has a decimal value of 457.
        The index defined on the opening_lines table
        (idx) can be identified by querying the
        INFORMATION_SCHEMA.INNODB_SYS_INDEXES
        table for this value (457).
      
mysql> SELECT index_id, name, table_id, space from INFORMATION_SCHEMA.INNODB_SYS_INDEXES
       WHERE index_id=457;
+----------+------+----------+-------+
| index_id | name | table_id | space |
+----------+------+----------+-------+
|      457 | idx  |      327 |   283 |
+----------+------+----------+-------+
1 row in set (0.00 sec)
        Index tables are stored in their own tablespace when
        innodb_file_per_table is
        enabled. If
        innodb_file_per_table is
        disabled, index tables are stored in the
        InnoDB system tablespace (space 0).
          Due to a bug introduced in MySQL 5.6.5, index tables are
          created in the InnoDB system tablespace
          (space 0) when
          innodb_file_per_table is
          enabled. The bug is fixed in MySQL 5.6.20 and MySQL 5.7.5
          (Bug#18635485).
        The other index tables shown in the preceding example are used
        for deletion handling and for storing the internal state of the
        FULLTEXT index.
            FTS_*_DELETED and
            FTS_*_DELETED_CACHE: Contain the document
            IDs (DOC_ID) for documents that are deleted but whose data
            is not yet removed from the full-text index. The
            FTS_*_DELETED_CACHE is the in-memory
            version of the FTS_*_DELETED table.
          
            FTS_*_BEING_DELETED and
            FTS_*_BEING_DELETED_CACHE: Contain the
            document IDs (DOC_ID) for documents that are deleted and
            whose data is currently in the process of being removed from
            the full-text index. The
            FTS_*_BEING_DELETED_CACHE table is the
            in-memory version of the
            FTS_*_BEING_DELETED table.
          
            FTS_*_CONFIG: Stores information about
            the internal state of the FULLTEXT index.
            Most importantly, it stores the
            FTS_SYNCED_DOC_ID, which identifies
            documents that have been parsed and flushed to disk. In case
            of crash recovery, FTS_SYNCED_DOC_ID
            values are used to identify documents that have not been
            flushed to disk so that the documents can be re-parsed and
            added back to the FULLTEXT index cache.
            To view the data in this table, query the
            INFORMATION_SCHEMA.INNODB_FT_CONFIG
            table.
        When a document is inserted, it is tokenized, and the individual
        words and associated data are inserted into the
        FULLTEXT index. This process, even for small
        documents, could result in numerous small insertions into the
        auxiliary index tables, making concurrent access to these tables
        a point of contention. To avoid this problem,
        InnoDB uses a FULLTEXT
        index cache to temporarily cache index table insertions for
        recently inserted rows. This in-memory cache structure holds
        insertions until the cache is full and then batch flushes them
        to disk (to the auxiliary index tables). You can query the
        INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE
        table to view tokenized data for recently inserted rows.
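For example, a sketch of such a query, assuming the opening_lines table used elsewhere in this section resides in the test database; the innodb_ft_aux_table variable must first point at the indexed table:

SET GLOBAL innodb_ft_aux_table = 'test/opening_lines';
SELECT WORD, DOC_ID, POSITION
  FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE LIMIT 5;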
      
The caching and batch flushing behavior avoids frequent updates to auxiliary index tables, which could result in concurrent access issues during busy insert and update times. The batching technique also avoids multiple insertions for the same word, and minimizes duplicate entries. Instead of flushing each word individually, insertions for the same word are merged and flushed to disk as a single entry, improving insertion efficiency while keeping auxiliary index tables as small as possible.
        The innodb_ft_cache_size
        variable is used to configure the full-text index cache size (on
        a per-table basis), which affects how often the full-text index
        cache is flushed. You can also define a global full-text index
        cache size limit for all tables in a given instance using the
        innodb_ft_total_cache_size
        option.
      
The full-text index cache stores the same information as auxiliary index tables. However, the full-text index cache only caches tokenized data for recently inserted rows. The data that is already flushed to disk (to the full-text auxiliary tables) is not brought back into the full-text index cache when queried. The data in auxiliary index tables is queried directly, and results from the auxiliary index tables are merged with results from the full-text index cache before being returned.
        InnoDB uses a unique document identifier
        referred to as a Document ID (DOC_ID) to map
        words in the full-text index to document records where the word
        appears. The mapping requires an FTS_DOC_ID
        column on the indexed table. If an FTS_DOC_ID
        column is not defined, InnoDB automatically
        adds a hidden FTS_DOC_ID column when the
        full-text index is created. The following example demonstrates
        this behavior.
      
        The following table definition does not include an
        FTS_DOC_ID column:
      
CREATE TABLE opening_lines (
       id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
       opening_line TEXT(500),
       author VARCHAR(200),
       title VARCHAR(200)
       ) ENGINE=InnoDB;
        When you create a full-text index on the table using
        CREATE FULLTEXT INDEX syntax, a warning is
        returned which reports that InnoDB is
        rebuilding the table to add the FTS_DOC_ID
        column.
      
mysql> CREATE FULLTEXT INDEX idx ON opening_lines(opening_line);
Query OK, 0 rows affected, 1 warning (0.19 sec)
Records: 0  Duplicates: 0  Warnings: 1

mysql> SHOW WARNINGS;
+---------+------+--------------------------------------------------+
| Level   | Code | Message                                          |
+---------+------+--------------------------------------------------+
| Warning | 124  | InnoDB rebuilding table to add column FTS_DOC_ID |
+---------+------+--------------------------------------------------+
1 row in set (0.00 sec)
        The same warning is returned when using
        ALTER TABLE to add a full-text
        index to a table that does not have an
        FTS_DOC_ID column. If you create a full-text
        index at CREATE TABLE time and do
        not specify an FTS_DOC_ID column,
        InnoDB adds a hidden
        FTS_DOC_ID column, without warning.
      
        Defining an FTS_DOC_ID column at
        CREATE TABLE time reduces the
        time required to create a full-text index on a table that is
        already loaded with data. If an FTS_DOC_ID
        column is defined on a table prior to loading data, the table
        and its indexes do not have to be rebuilt to add the new column.
        If you are not concerned with CREATE FULLTEXT
        INDEX performance, leave out the
        FTS_DOC_ID column to have
        InnoDB create it for you.
        InnoDB creates a hidden
        FTS_DOC_ID column along with a unique index
        (FTS_DOC_ID_INDEX) on the
        FTS_DOC_ID column. If you want to create your
        own FTS_DOC_ID column, the column must be
        defined as BIGINT UNSIGNED NOT NULL and named
        FTS_DOC_ID (all upper case), as in the
        following example:
          The FTS_DOC_ID column does not need to be
          defined as an AUTO_INCREMENT column but
          AUTO_INCREMENT could make loading data
          easier.
CREATE TABLE opening_lines (
       FTS_DOC_ID BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
       opening_line TEXT(500),
       author VARCHAR(200),
       title VARCHAR(200)
       ) ENGINE=InnoDB;
        If you choose to define the FTS_DOC_ID column
        yourself, you are responsible for managing the column to avoid
        empty or duplicate values. FTS_DOC_ID values
        cannot be reused, which means FTS_DOC_ID
        values must be ever increasing.
      
        Optionally, you can create the required unique
        FTS_DOC_ID_INDEX (all upper case) on the
        FTS_DOC_ID column.
      
CREATE UNIQUE INDEX FTS_DOC_ID_INDEX on opening_lines(FTS_DOC_ID);
        If you do not create the FTS_DOC_ID_INDEX,
        InnoDB creates it automatically.
        Deleting a record that has a full-text index column could result
        in numerous small deletions in the auxiliary index tables,
        making concurrent access to these tables a point of contention.
        To avoid this problem, the Document ID
        (DOC_ID) of a deleted document is logged in a
        special FTS_*_DELETED table whenever a record
        is deleted from an indexed table, and the indexed record remains
        in the full-text index. Before returning query results,
        information in the FTS_*_DELETED table is
        used to filter out deleted Document IDs. The benefit of this
        design is that deletions are fast and inexpensive. The drawback
        is that the size of the index is not immediately reduced after
        deleting records. To remove full-text index entries for deleted
        records, you must run OPTIMIZE TABLE on the
        indexed table with
        innodb_optimize_fulltext_only=ON
        to rebuild the full-text index. For more information, see
        Optimizing InnoDB Full-Text Indexes.
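For example, using the opening_lines table from earlier in this section:

SET GLOBAL innodb_optimize_fulltext_only = ON;
OPTIMIZE TABLE opening_lines;
SET GLOBAL innodb_optimize_fulltext_only = OFF;
Because OPTIMIZE TABLE processes only a limited number of words per run in this mode, you may need to run it several times to fully rebuild a large full-text index.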
        InnoDB FULLTEXT indexes
        have special transaction handling characteristics due to its
        caching and batch processing behavior. Specifically, updates and
        insertions on a FULLTEXT index are processed
        at transaction commit time, which means that a
        FULLTEXT search can only see committed data.
        The following example demonstrates this behavior. The
        FULLTEXT search only returns a result after
        the inserted lines are committed.
      
mysql> CREATE TABLE opening_lines (
id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
opening_line TEXT(500),
author VARCHAR(200),
title VARCHAR(200),
FULLTEXT idx (opening_line)
) ENGINE=InnoDB;
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO opening_lines(opening_line,author,title) VALUES
('Call me Ishmael.','Herman Melville','Moby-Dick'),
('A screaming comes across the sky.','Thomas Pynchon','Gravity\'s Rainbow'),
('I am an invisible man.','Ralph Ellison','Invisible Man'),
('Where now? Who now? When now?','Samuel Beckett','The Unnamable'),
('It was love at first sight.','Joseph Heller','Catch-22'),
('All this happened, more or less.','Kurt Vonnegut','Slaughterhouse-Five'),
('Mrs. Dalloway said she would buy the flowers herself.','Virginia Woolf','Mrs. Dalloway'),
('It was a pleasure to burn.','Ray Bradbury','Fahrenheit 451');
Query OK, 8 rows affected (0.00 sec)
Records: 8  Duplicates: 0  Warnings: 0
mysql> SELECT COUNT(*) FROM opening_lines WHERE MATCH(opening_line) AGAINST('Ishmael');
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT COUNT(*) FROM opening_lines WHERE MATCH(opening_line) AGAINST('Ishmael');
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)    
        You can monitor and examine the special text-processing aspects
        of InnoDB FULLTEXT indexes
        by querying the following INFORMATION_SCHEMA
        tables: INNODB_FT_CONFIG,
        INNODB_FT_INDEX_TABLE,
        INNODB_FT_INDEX_CACHE,
        INNODB_FT_DEFAULT_STOPWORD,
        INNODB_FT_DELETED, and
        INNODB_FT_BEING_DELETED.
        You can also view basic information for
        FULLTEXT indexes and tables by querying
        INNODB_SYS_INDEXES and
        INNODB_SYS_TABLES.
      
See Section 14.12.4, “InnoDB INFORMATION_SCHEMA FULLTEXT Index Tables” for more information.
        All InnoDB indexes are
        B-trees where the index
        records are stored in the leaf pages of the tree. The default
        size of an index page is 16KB.
      
        When new records are inserted into an InnoDB
        clustered index,
        InnoDB tries to leave 1/16 of the page free
        for future insertions and updates of the index records. If index
        records are inserted in a sequential order (ascending or
        descending), the resulting index pages are about 15/16 full. If
        records are inserted in a random order, the pages are from 1/2
        to 15/16 full. If the fill
        factor of an index page drops below 1/2,
        InnoDB tries to contract the index tree to
        free the page.
      
        You can configure the page
        size for all InnoDB tablespaces in a
        MySQL instance by setting the
        innodb_page_size configuration
        option before creating the instance. Once the page size for an
        instance is set, you cannot change it. Supported sizes are 16KB,
        8KB, and 4KB, corresponding to the option values
        16k, 8k, and
        4k.
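Because the page size cannot be changed after the data files are created, it is specified in the configuration file before the instance is initialized; for example (the value shown is only illustrative):

[mysqld]
innodb_page_size = 8k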
      
        A MySQL instance using a particular InnoDB
        page size cannot use data files or log files from an instance
        that uses a different page size.
        The change buffer is a special data structure that caches
        changes to secondary
        index pages when affected pages are not in the
        buffer pool. The
        buffered changes, which may result from
        INSERT,
        UPDATE, or
        DELETE operations (DML), are
        merged later when the pages are loaded into the buffer pool by
        other read operations.
      
Unlike clustered indexes, secondary indexes are usually non-unique, and inserts into secondary indexes happen in a relatively random order. Similarly, deletes and updates may affect secondary index pages that are not adjacently located in an index tree. Merging cached changes at a later time, when affected pages are read into the buffer pool by other operations, avoids substantial random access I/O that would be required to read-in secondary index pages from disk.
Periodically, the purge operation that runs when the system is mostly idle, or during a slow shutdown, writes the updated index pages to disk. The purge operation can write disk blocks for a series of index values more efficiently than if each value were written to disk immediately.
Change buffer merging may take several hours when there are numerous secondary indexes to update and many affected rows. During this time, disk I/O is increased, which can cause a significant slowdown for disk-bound queries. Change buffer merging may also continue to occur after a transaction is committed. In fact, change buffer merging may continue to occur after a server shutdown and restart (see Section 14.18.2, “Forcing InnoDB Recovery” for more information).
        In memory, the change buffer occupies part of the
        InnoDB buffer pool. On disk, the change
        buffer is part of the system tablespace, so that index changes
        remain buffered across database restarts.
      
        The type of data cached in the change buffer is governed by the
        innodb_change_buffering
        configuration option. For more information see,
        Section 14.3.5, “Configuring InnoDB Change Buffering”. You can
        also configure the maximum change buffer size. For more
        information, see
        Section 14.3.5.1, “Configuring the Change Buffer Maximum Size”.
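For example, change buffering can be restricted or disabled at runtime; the value shown here is only illustrative:

SET GLOBAL innodb_change_buffering = 'inserts';
Valid values include none, inserts, deletes, changes, purges, and all (the default).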
The following options are available for change buffer monitoring:
            InnoDB Standard Monitor output includes
            status information for the change buffer. To view monitor
            data, issue the SHOW ENGINE INNODB STATUS
            command.
          
mysql> SHOW ENGINE INNODB STATUS\G
            Change buffer status information is located under the
            INSERT BUFFER AND ADAPTIVE HASH INDEX
            heading and appears similar to the following:
          
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 4425293, used cells 32, node heap has 1 buffer(s)
13577.57 hash searches/s, 202.47 non-hash searches/s
For a description of each data point, see Section 14.14.3, “InnoDB Standard Monitor and Lock Monitor Output”.
            The
            INFORMATION_SCHEMA.INNODB_METRICS
            table provides most of the data points found in
            InnoDB Standard Monitor output, plus
            other data points. To view change buffer metrics and a
            description of each, issue the following query:
          
mysql> SELECT NAME, COMMENT FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME LIKE '%ibuf%'\G
            For INNODB_METRICS table usage
            information, see
            Section 14.12.6, “InnoDB INFORMATION_SCHEMA Metrics Table”.
          
            The
            INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
            table provides metadata about each page in the buffer pool,
            including change buffer index and change buffer bitmap
            pages. Change buffer pages are identified by
            PAGE_TYPE. IBUF_INDEX
            is the page type for change buffer index pages, and
            IBUF_BITMAP is the page type for change
            buffer bitmap pages.
              Querying the
              INNODB_BUFFER_PAGE table can
              introduce significant performance overhead. To avoid
              impacting performance, reproduce the issue you want to
              investigate on a test instance and run your queries on the
              test instance.
            For example, you can query the
            INNODB_BUFFER_PAGE table to
            determine the approximate number of
            IBUF_INDEX and
            IBUF_BITMAP pages as a percentage of
            total buffer pool pages.
          
SELECT
  (SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
   WHERE PAGE_TYPE LIKE 'IBUF%') AS change_buffer_pages,
  (SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE) AS total_pages,
  (SELECT ((change_buffer_pages/total_pages)*100)) AS change_buffer_page_percentage;
+---------------------+-------------+-------------------------------+
| change_buffer_pages | total_pages | change_buffer_page_percentage |
+---------------------+-------------+-------------------------------+
|                  25 |        8192 |                        0.3052 |
+---------------------+-------------+-------------------------------+
1 row in set (0.14 sec)
            For information about other data provided by the
            INNODB_BUFFER_PAGE table, see
            Section 21.29.16, “The INFORMATION_SCHEMA INNODB_BUFFER_PAGE Table”. For related
            usage information, see
            Section 14.12.5, “InnoDB INFORMATION_SCHEMA Buffer Pool Tables”.
          
Performance Schema provides change buffer mutex wait instrumentation for advanced performance monitoring. To view change buffer instrumentation, issue the following query:
mysql> SELECT * FROM performance_schema.setup_instruments
       WHERE NAME LIKE '%wait/synch/mutex/innodb/ibuf%';
+-------------------------------------------------------+---------+-------+
| NAME                                                  | ENABLED | TIMED |
+-------------------------------------------------------+---------+-------+
| wait/synch/mutex/innodb/ibuf_bitmap_mutex             | YES     | YES   |
| wait/synch/mutex/innodb/ibuf_mutex                    | YES     | YES   |
| wait/synch/mutex/innodb/ibuf_pessimistic_insert_mutex | YES     | YES   |
+-------------------------------------------------------+---------+-------+
3 rows in set (0.01 sec)
            For information about monitoring InnoDB
            mutex waits, see
            Section 14.13.1, “Monitoring InnoDB Mutex Waits Using Performance Schema”.
        The feature known as the
        adaptive hash
        index (AHI) lets InnoDB perform more
        like an in-memory database on systems with appropriate
        combinations of workload and ample memory for the
        buffer pool, without
        sacrificing any transactional features or reliability. This
        feature is enabled by the
        innodb_adaptive_hash_index
        option, or turned off by the
        --skip-innodb_adaptive_hash_index at server
        startup.
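For example, when benchmarking a workload with and without the feature, it can also be toggled dynamically:

SET GLOBAL innodb_adaptive_hash_index = OFF;
... run the workload and compare results ...
SET GLOBAL innodb_adaptive_hash_index = ON;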
      
Based on the observed pattern of searches, MySQL builds a hash index using a prefix of the index key. The prefix of the key can be any length, and it may be that only some of the values in the B-tree appear in the hash index. Hash indexes are built on demand for those pages of the index that are often accessed.
        If a table fits almost entirely in main memory, a hash index can
        speed up queries by enabling direct lookup of any element,
        turning the index value into a sort of pointer.
        InnoDB has a mechanism that monitors index
        searches. If InnoDB notices that queries
        could benefit from building a hash index, it does so
        automatically.
      
        With some workloads, the
        speedup from hash index lookups greatly outweighs the extra work
        to monitor index lookups and maintain the hash index structure.
        Sometimes, the read/write lock that guards access to the
        adaptive hash index can become a source of contention under
        heavy workloads, such as multiple concurrent joins. Queries with
        LIKE operators and %
        wildcards also tend not to benefit from the AHI. For workloads
        where the adaptive hash index is not needed, turning it off
        reduces unnecessary performance overhead. Because it is
        difficult to predict in advance whether this feature is
        appropriate for a particular system, consider running benchmarks
        with it both enabled and disabled, using a realistic workload.
        The architectural changes in MySQL 5.6 and higher make more
        workloads suitable for disabling the adaptive hash index than in
        earlier releases, although it is still enabled by default.
      
        The hash index is always built based on an existing
        B-tree index on the table.
        InnoDB can build a hash index on a prefix of
        any length of the key defined for the B-tree, depending on the
        pattern of searches that InnoDB observes for
        the B-tree index. A hash index can be partial, covering only
        those pages of the index that are often accessed.
      
        You can monitor the use of the adaptive hash index and the
        contention for its use in the SEMAPHORES
        section of the output of the
        SHOW ENGINE INNODB
        STATUS command. If you see many threads waiting on an
        RW-latch created in btr0sea.c, then it
        might be useful to disable adaptive hash indexing.
      
For more information about the performance characteristics of hash indexes, see Section 8.3.8, “Comparison of B-Tree and Hash Indexes”.
        The physical row structure for an InnoDB
        table depends on the row format specified when the table was
        created. By default, InnoDB uses the
        Antelope file format and
        its COMPACT row format. The
        REDUNDANT format is available to retain
        compatibility with older versions of MySQL. When you enable the
        innodb_file_per_table setting,
        you can also make use of the newer Barracuda file format, with
        its DYNAMIC and COMPRESSED
        row formats, as explained in Section 14.8, “InnoDB Row Storage and Row Formats”
        and Section 14.6, “InnoDB Table Compression”.
      
        To check the row format of an InnoDB table,
        you can use SHOW TABLE STATUS.
        For example:
      
mysql> SHOW TABLE STATUS IN test1\G
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 0
 Avg_row_length: 0
    Data_length: 16384
Max_data_length: 0
   Index_length: 16384
      Data_free: 0
 Auto_increment: 1
    Create_time: 2014-10-31 16:02:01
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
1 row in set (0.00 sec)
        You can also check the row format of an
        InnoDB table by querying
        INFORMATION_SCHEMA.INNODB_SYS_TABLES.
      
mysql> SELECT NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
       WHERE NAME='test1/t1';
+----------+------------+
| NAME     | ROW_FORMAT |
+----------+------------+
| test1/t1 | Compact    |
+----------+------------+
        The COMPACT row format decreases row storage
        space by about 20% at the cost of increasing CPU use for some
        operations. If your workload is a typical one that is limited by
        cache hit rates and disk speed, COMPACT
        format is likely to be faster. If the workload is a rare case
        that is limited by CPU speed, COMPACT format
        might be slower.
      
        Rows in InnoDB tables that use
        REDUNDANT row format have the following
        characteristics:
Each index record contains a 6-byte header. The header is used to link together consecutive records, and also in row-level locking.
Records in the clustered index contain fields for all user-defined columns. In addition, there is a 6-byte transaction ID field and a 7-byte roll pointer field.
If no primary key was defined for a table, each clustered index record also contains a 6-byte row ID field.
Each secondary index record also contains all the primary key fields defined for the clustered index key that are not in the secondary index.
A record contains a pointer to each field of the record. If the total length of the fields in a record is less than 128 bytes, the pointer is one byte; otherwise, two bytes. The array of these pointers is called the record directory. The area where these pointers point is called the data part of the record.
            Internally, InnoDB stores fixed-length
            character columns such as
            CHAR(10) in a fixed-length
            format. InnoDB does not truncate trailing
            spaces from VARCHAR columns.
          
            An SQL NULL value reserves one or two
            bytes in the record directory. Besides that, an SQL
            NULL value reserves zero bytes in the
            data part of the record if stored in a variable length
            column. In a fixed-length column, it reserves the fixed
            length of the column in the data part of the record.
            Reserving the fixed space for NULL values
            enables an update of the column from NULL
            to a non-NULL value to be done in place
            without causing fragmentation of the index page.
        Rows in InnoDB tables that use
        COMPACT row format have the following
        characteristics:
Each index record contains a 5-byte header that may be preceded by a variable-length header. The header is used to link together consecutive records, and also in row-level locking.
            The variable-length part of the record header contains a bit
            vector for indicating NULL columns. If
            the number of columns in the index that can be
            NULL is N, the
            bit vector occupies
            CEILING(N/8) bytes. (For example,
            if there are anywhere from 9 to 15 columns that can be
            NULL, the bit vector
            uses two bytes.) Columns that are NULL do
            not occupy space other than the bit in this vector. The
            variable-length part of the header also contains the lengths
            of variable-length columns. Each length takes one or two
            bytes, depending on the maximum length of the column. If all
            columns in the index are NOT NULL and
            have a fixed length, the record header has no
            variable-length part.
          
            For each non-NULL variable-length field,
            the record header contains the length of the column in one
            or two bytes. Two bytes will only be needed if part of the
            column is stored externally in overflow pages or the maximum
            length exceeds 255 bytes and the actual length exceeds 127
            bytes. For an externally stored column, the 2-byte length
            indicates the length of the internally stored part plus the
            20-byte pointer to the externally stored part. The internal
            part is 768 bytes, so the length is 768+20. The 20-byte
            pointer stores the true length of the column.
          
            The record header is followed by the data contents of the
            non-NULL columns.
          
Records in the clustered index contain fields for all user-defined columns. In addition, there is a 6-byte transaction ID field and a 7-byte roll pointer field.
If no primary key was defined for a table, each clustered index record also contains a 6-byte row ID field.
Each secondary index record also contains all the primary key fields defined for the clustered index key that are not in the secondary index. If any of these primary key fields are variable length, the record header for each secondary index will have a variable-length part to record their lengths, even if the secondary index is defined on fixed-length columns.
            Internally, InnoDB stores fixed-length,
            fixed-width character columns such as
            CHAR(10) in a fixed-length
            format. InnoDB does not truncate trailing
            spaces from VARCHAR columns.
          
            Internally, InnoDB attempts to store
            UTF-8
            CHAR(N)
            columns in N bytes by trimming
            trailing spaces. (With REDUNDANT row
            format, such columns occupy 3 ×
            N bytes.) Reserving the minimum
            space N in many cases enables
            column updates to be done in place without causing
            fragmentation of the index page.
      In MySQL and InnoDB, multiple threads of
      execution access shared data structures. InnoDB
      synchronizes these accesses with its own implementation of
      mutexes and
      read/write locks.
      Historically, InnoDB protected the internal
      state of a read/write lock with an InnoDB
      mutex, and the internal state of an InnoDB
      mutex was protected by a
      Pthreads mutex, as in
      IEEE Std 1003.1c (POSIX.1c).
    
On many platforms, Atomic operations can often be used to synchronize the actions of multiple threads more efficiently than Pthreads. Each operation to acquire or release a lock can be done in fewer CPU instructions, wasting less time when threads contend for access to shared data structures. This in turn means greater scalability on multi-core platforms.
      On platforms that support
      Atomic operations,
      InnoDB now implements mutexes and read/write
      locks with the
      built-in
      functions provided by the GNU Compiler Collection (GCC) for atomic
      memory access instead of using the Pthreads approach. More
      specifically, InnoDB compiled with GCC version
      4.1.2 or later uses the atomic builtins instead of a
      pthread_mutex_t to implement
      InnoDB mutexes and read/write locks.
    
      On 32-bit Microsoft Windows, InnoDB implements
      mutexes (but not read/write locks) with hand-written assembler
      instructions. Beginning with Microsoft Windows 2000, functions for
      Interlocked
      Variable Access are available that are similar to the
      built-in functions provided by GCC. On Windows 2000 and higher,
      InnoDB makes use of the Interlocked functions,
      which support read/write locks and 64-bit platforms.
    
      Solaris 10 introduced library functions for atomic operations, and
      InnoDB uses these functions by default. When MySQL is compiled on
      Solaris 10 or later with a compiler that does not support the
      built-in
      functions provided by the GNU Compiler Collection (GCC) for atomic
      memory access, InnoDB uses the library
      functions.
    
      On platforms where the GCC, Windows, or Solaris functions for
      atomic memory access are not available, InnoDB
      uses the traditional Pthreads method of implementing mutexes and
      read/write locks.
    
      When MySQL starts, InnoDB writes a message to
      the log file indicating whether atomic memory access is used for
      mutexes, for mutexes and read/write locks, or neither. If suitable
      tools are used to build InnoDB and the target
      CPU supports the atomic operations required,
      InnoDB uses the built-in functions for
      mutexing. If, in addition, the compare-and-swap operation can be
      used on thread identifiers (pthread_t), then
      InnoDB uses the instructions for read-write
      locks as well.
    
If you are building from source, ensure that the build process properly takes advantage of your platform capabilities.
For more information about the performance implications of locking, see Section 8.11, “Optimizing Locking Operations”.
    This section provides configuration information and procedures for
    InnoDB initialization, startup, and various
    components and features of the InnoDB storage
    engine. For information about optimizing database operations for
    InnoDB tables, see
    Section 8.5, “Optimizing for InnoDB Tables”.
The first decisions to make about InnoDB configuration involve how to lay out InnoDB data files, and how much memory to allocate for the InnoDB storage engine. You record these choices either in a configuration file that MySQL reads at startup, or as command-line options in a startup script. The full list of options, descriptions, and allowed parameter values is at Section 14.11, “InnoDB Startup Options and System Variables”.
      Two important disk-based resources managed by the
      InnoDB storage engine are its tablespace data
      files and its log files. If you specify no
      InnoDB configuration options, MySQL creates an
      auto-extending data file, slightly larger than 12MB, named
      ibdata1 and two log files named
      ib_logfile0 and
      ib_logfile1 in the MySQL data directory.
      The size of the log files is given by the
      innodb_log_file_size system
      variable. To get good performance, explicitly provide
      InnoDB parameters as discussed in the following
      examples. Naturally, edit the settings to suit your hardware and
      requirements.
    
      The examples shown here are representative. See
      Section 14.11, “InnoDB Startup Options and System Variables” for additional information
      about InnoDB-related configuration parameters.
      In some cases, database performance improves if the data is not
      all placed on the same physical disk. Putting log files on a
      different disk from data is very often beneficial for performance.
      The example illustrates how to do this. It places the two data
      files on different disks and places the log files on the third
      disk. InnoDB fills the tablespace beginning
      with the first data file. You can also use raw disk partitions
      (raw devices) as InnoDB data files, which may
      speed up I/O. See Section 14.4.3, “Using Raw Disk Partitions for the System Tablespace”.
        InnoDB is a transaction-safe (ACID compliant)
        storage engine for MySQL that has commit, rollback, and
        crash-recovery capabilities to protect user data.
        However, it cannot do so if the
        underlying operating system or hardware does not work as
        advertised. Many operating systems or disk subsystems may delay
        or reorder write operations to improve performance. On some
        operating systems, the very fsync() system
        call that should wait until all unwritten data for a file has
        been flushed might actually return before the data has been
        flushed to stable storage. Because of this, an operating system
        crash or a power outage may destroy recently committed data, or
        in the worst case, even corrupt the database because of write
        operations having been reordered. If data integrity is important
        to you, perform some “pull-the-plug” tests before
        using anything in production. On OS X 10.3 and up,
        InnoDB uses a special
        fcntl() file flush method. Under Linux, it is
        advisable to disable the write-back
        cache.
      
        On ATA/SATA disk drives, a command such as hdparm -W0
        /dev/hda may work to disable the write-back cache.
        Beware that some drives or disk
        controllers may be unable to disable the write-back
        cache.
      
        With regard to InnoDB recovery capabilities
        that protect user data, InnoDB uses a file
        flush technique involving a structure called the
        doublewrite
        buffer, which is enabled by default
        (innodb_doublewrite=ON). The
        doublewrite buffer adds safety to recovery following a crash or
        power outage, and improves performance on most varieties of Unix
        by reducing the need for fsync() operations.
        It is recommended that the
        innodb_doublewrite option
        remains enabled if you are concerned with data integrity or
        possible failures. For additional information about the
        doublewrite buffer, see
        Section 14.9, “InnoDB Disk I/O and File Space Management”.
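        For example, a quick way to confirm that the doublewrite buffer is
        enabled on a running server is to check the variable value:

SHOW VARIABLES LIKE 'innodb_doublewrite';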
        If reliability is a consideration for your data, do not
        configure InnoDB to use data files or log
        files on NFS volumes. Potential problems vary according to OS
        and version of NFS, and include such issues as lack of
        protection from conflicting writes, and limitations on maximum
        file sizes.
      To set up the InnoDB tablespace files, use the
      innodb_data_file_path option in
      the [mysqld] section of the
      my.cnf option file. On Windows, you can use
      my.ini instead. The value of
      innodb_data_file_path should be a
      list of one or more data file specifications. If you name more
      than one data file, separate them by semicolon
      (“;”) characters:
    
innodb_data_file_path=datafile_spec1[;datafile_spec2]...
For example, the following setting explicitly creates a minimally sized system tablespace:
[mysqld]
innodb_data_file_path=ibdata1:12M:autoextend
      This setting configures a single 12MB data file named
      ibdata1 that is auto-extending. No location
      for the file is given, so by default, InnoDB
      creates it in the MySQL data directory.
    
      Sizes are specified using K,
      M, or G suffix letters to
      indicate units of KB, MB, or GB.
    
      A tablespace containing a fixed-size 50MB data file named
      ibdata1 and a 50MB auto-extending file named
      ibdata2 in the data directory can be
      configured like this:
    
[mysqld]
innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend
The full syntax for a data file specification includes the file name, its size, and several optional attributes:
file_name:file_size[:autoextend[:max:max_file_size]]
      The autoextend and max
      attributes can be used only for the last data file in the
      innodb_data_file_path line.
    
      If you specify the autoextend option for the
      last data file, InnoDB extends the data file if
      it runs out of free space in the tablespace. The increment is 8MB
      at a time by default. To modify the increment, change the
      innodb_autoextend_increment
      system variable.
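      For example, the increment can be raised at runtime; the value of
      64 (megabytes) shown here is only an illustration:

SET GLOBAL innodb_autoextend_increment=64;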
    
If the disk becomes full, you might want to add another data file on another disk. For tablespace reconfiguration instructions, see Section 14.4.1, “Resizing the InnoDB System Tablespace”.
      InnoDB is not aware of the file system maximum
      file size, so be cautious on file systems where the maximum file
      size is a small value such as 2GB. To specify a maximum size for
      an auto-extending data file, use the max
      attribute following the autoextend attribute.
      Use the max attribute only in cases where
      constraining disk usage is of critical importance, because
      exceeding the maximum size causes a fatal error, possibly
      including a crash. The following configuration permits
      ibdata1 to grow up to a limit of 500MB:
    
[mysqld]
innodb_data_file_path=ibdata1:12M:autoextend:max:500M
      InnoDB creates tablespace files in the MySQL
      data directory by default. To specify a location explicitly, use
      the innodb_data_home_dir option.
      For example, to use two files named ibdata1
      and ibdata2 but create them in the
      /ibdata directory, configure
      InnoDB like this:
    
[mysqld]
innodb_data_home_dir = /ibdata
innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend
        InnoDB does not create directories, so make
        sure that the /ibdata directory exists
        before you start the server. This is also true of any log file
        directories that you configure. Use the Unix or DOS
        mkdir command to create any necessary
        directories.
      
Make sure that the MySQL server has the proper access rights to create files in the data directory. More generally, the server must have access rights in any directory where it needs to create data files or log files.
      InnoDB forms the directory path for each data
      file by textually concatenating the value of
      innodb_data_home_dir to the data
      file name, adding a path name separator (slash or backslash)
      between values if necessary. If the
      innodb_data_home_dir option is
      not specified in my.cnf at all, the default
      value is the “dot” directory ./,
      which means the MySQL data directory. (The MySQL server changes
      its current working directory to its data directory when it begins
      executing.)
    
      If you specify
      innodb_data_home_dir as an empty
      string, you can specify absolute paths for the data files listed
      in the innodb_data_file_path
      value. The following example is equivalent to the preceding one:
    
[mysqld]
innodb_data_home_dir =
innodb_data_file_path=/ibdata/ibdata1:50M;/ibdata/ibdata2:50M:autoextend
      Sample my.cnf file for
      small systems. Suppose that you have a computer with
      512MB RAM and one hard disk. The following example shows possible
      configuration parameters in my.cnf or
      my.ini for InnoDB,
      including the autoextend attribute. The example
      suits most users, both on Unix and Windows, who do not want to
      distribute InnoDB data files and log files onto
      several disks. It creates an auto-extending data file
      ibdata1 and two InnoDB log
      files ib_logfile0 and
      ib_logfile1 in the MySQL data directory.
    
[mysqld]
# You can write your other MySQL server options here
# ...
# Data files must be able to hold your data and indexes.
# Make sure that you have enough free disk space.
innodb_data_file_path = ibdata1:12M:autoextend
#
# Set buffer pool size to 50-80% of your computer's memory
innodb_buffer_pool_size=256M
innodb_additional_mem_pool_size=20M
#
# Set the log file size to about 25% of the buffer pool size
innodb_log_file_size=64M
innodb_log_buffer_size=8M
#
innodb_flush_log_at_trx_commit=1
Data files must be less than 2GB in some file systems. The combined size of the log files can be up to 512GB. The combined size of data files must be slightly larger than 10MB.
      When you create an InnoDB system tablespace for
      the first time, it is best that you start the MySQL server from
      the command prompt. InnoDB then prints the
      information about the database creation to the screen, so you can
      see what is happening. For example, on Windows, if
      mysqld is located in C:\Program
      Files\MySQL\MySQL Server 5.6\bin, you can
      start it like this:
    
C:\> "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqld" --console
      If you do not send server output to the screen, check the server's
      error log to see what InnoDB prints during the
      startup process.
      You can place InnoDB options in the
      [mysqld] group of any option file that your
      server reads when it starts. The locations for option files are
      described in Section 4.2.6, “Using Option Files”.
    
      If you installed MySQL on Windows using the installation and
      configuration wizards, the option file will be the
      my.ini file located in your MySQL
      installation directory. See Section 2.3.3, “Installing MySQL on Microsoft Windows Using MySQL Installer”.
    
      If your PC uses a boot loader where the C:
      drive is not the boot drive, your only option is to use the
      my.ini file in your Windows directory
      (typically C:\WINDOWS). You can use the
      SET command at the command prompt in a console
      window to print the value of WINDIR:
    
C:\> SET WINDIR
windir=C:\WINDOWS
      To make sure that mysqld reads options only
      from a specific file, use the
      --defaults-file option as the
      first option on the command line when starting the server:
    
mysqld --defaults-file=your_path_to_my_cnf
      Sample my.cnf file for
      large systems. Suppose that you have a Linux computer
      with 2GB RAM and three 60GB hard disks at directory paths
      /, /dr2 and
      /dr3. The following example shows possible
      configuration parameters in my.cnf for
      InnoDB.
    
[mysqld]
# You can write your other MySQL server options here
# ...
innodb_data_home_dir =
#
# Data files must be able to hold your data and indexes
innodb_data_file_path = /db/ibdata1:2000M;/dr2/db/ibdata2:2000M:autoextend
#
# Set buffer pool size to 50-80% of your computer's memory,
# but make sure on Linux x86 total memory usage is < 2GB
innodb_buffer_pool_size=1G
innodb_additional_mem_pool_size=20M
innodb_log_group_home_dir = /dr3/iblogs
#
# Set the log file size to about 25% of the buffer pool size
innodb_log_file_size=250M
innodb_log_buffer_size=8M
#
innodb_flush_log_at_trx_commit=1
innodb_lock_wait_timeout=50
#
# Uncomment the next line if you want to use it
#innodb_thread_concurrency=5
        On 32-bit GNU/Linux x86, be careful not to set memory usage too
        high. glibc may permit the process heap to
        grow over thread stacks, which crashes your server. It is a risk
        if the value of the following expression is close to or exceeds
        2GB:
      
innodb_buffer_pool_size + key_buffer_size + max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + max_connections*2MB
        Each thread uses a stack (often 2MB, but only 256KB in MySQL
        binaries provided by Oracle Corporation) and in the worst case
        also uses sort_buffer_size + read_buffer_size
        additional memory.
Tuning other mysqld server parameters. The following values are typical and suit most users:
[mysqld]
skip-external-locking
max_connections=200
read_buffer_size=1M
sort_buffer_size=1M
#
# Set key_buffer to 5 - 50% of your RAM depending on how much
# you use MyISAM tables, but keep key_buffer_size + InnoDB
# buffer pool size < 80% of your RAM
key_buffer_size=value
      On Linux, if the kernel is enabled for large page support,
      InnoDB can use large pages to allocate memory
      for its buffer pool and additional memory pool. See
      Section 8.12.5.2, “Enabling Large Page Support”.
      You can now query InnoDB tables where the MySQL
      data directory is on read-only media, by enabling the
      --innodb-read-only configuration
      option at server startup.
      To prepare an instance for read-only operation, make sure all the
      necessary information is flushed
      to the data files before storing it on the read-only medium. Run
      the server with change buffering disabled
      (innodb_change_buffering=0) and
      do a slow shutdown.
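      The preparation steps might be sketched as follows, assuming the
      SUPER privilege for the SET statements and that setting
      innodb_fast_shutdown=0 causes the next shutdown to be a slow
      shutdown:

SET GLOBAL innodb_change_buffering=0;
SET GLOBAL innodb_fast_shutdown=0;
shell> mysqladmin -u root -p shutdown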
    
To enable read-only mode for an entire MySQL instance, specify the following configuration options at server startup:
          --innodb-read-only=1
        
          If the instance is on read-only media such as a DVD or CD, or
          the /var directory is not writeable by all:
          --pid-file=path_on_writeable_media and
          --event-scheduler=disabled
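      A minimal command-line sketch combining these options might look
      like the following; the PID file path is only a placeholder:

mysqld --innodb-read-only=1 --pid-file=/path_on_writeable_media/mysqld.pid --event-scheduler=disabled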
This mode of operation is appropriate in situations such as:
Distributing a MySQL application, or a set of MySQL data, on a read-only storage medium such as a DVD or CD.
Multiple MySQL instances querying the same data directory simultaneously, typically in a data warehousing configuration. You might use this technique to avoid bottlenecks that can occur with a heavily loaded MySQL instance, or you might use different configuration options for the various instances to tune each one for particular kinds of queries.
Querying data that has been put into a read-only state for security or data integrity reasons, such as archived backup data.
This feature is mainly intended for flexibility in distribution and deployment, rather than raw performance based on the read-only aspect. See Section 8.5.3, “Optimizing InnoDB Read-Only Transactions” for ways to tune the performance of read-only queries, which do not require making the entire server read-only.
      When the server is run in read-only mode through the
      --innodb-read-only option,
      certain InnoDB features and components are
      reduced or turned off entirely:
          No change
          buffering is done, in particular no merges from the
          change buffer. To make sure the change buffer is empty when
          you prepare the instance for read-only operation, disable
          change buffering
          (innodb_change_buffering=0)
          and do a slow
          shutdown first.
        
There is no crash recovery phase at startup. The instance must have performed a slow shutdown before being put into the read-only state.
          Because the redo log is
          not used in read-only operation, you can set
          innodb_log_file_size to the
          smallest size possible (1 MB) before making the instance
          read-only.
        
All background threads other than I/O read threads are turned off. As a consequence, a read-only instance cannot encounter any deadlocks.
          Information about deadlocks, monitor output, and so on is not
          written to temporary files. As a consequence,
          SHOW ENGINE
          INNODB STATUS does not produce any output.
        
          If the MySQL server is started with
          --innodb-read-only but the
          data directory is still on writeable media, the root user can
          still perform DCL operations
          such as GRANT and
          REVOKE.
        
Changes to configuration option settings that would normally change the behavior of write operations have no effect when the server is in read-only mode.
The MVCC processing to enforce isolation levels is turned off. All queries read the latest version of a record, because updates and deletes are not possible.
          The undo log is not used.
          Disable any settings for the
          innodb_undo_tablespaces and
          innodb_undo_directory
          configuration options.
      This section provides performance related configuration
      information for the InnoDB buffer pool. For
      additional information, see Section 8.10.1, “The InnoDB Buffer Pool”.
A read-ahead request is an I/O request to prefetch multiple pages in the buffer pool asynchronously, in anticipation that these pages will be needed soon. The requests bring in all the pages in one extent. InnoDB uses two read-ahead algorithms to improve I/O performance:
        Linear read-ahead is a
        technique that predicts what pages might be needed soon based on
        pages in the buffer pool being accessed sequentially. You
        control when InnoDB performs a read-ahead operation by adjusting
        the number of sequential page accesses required to trigger an
        asynchronous read request, using the configuration parameter
        innodb_read_ahead_threshold.
        Before this parameter was added, InnoDB would
        only calculate whether to issue an asynchronous prefetch request
        for the entire next extent when it read in the last page of the
        current extent.
      
        The configuration parameter
        innodb_read_ahead_threshold
        controls how sensitive InnoDB is in detecting
        patterns of sequential page access. If the number of pages read
        sequentially from an extent is greater than or equal to
        innodb_read_ahead_threshold,
        InnoDB initiates an asynchronous read-ahead
        operation of the entire following extent. It can be set to any
        value from 0-64. The default value is 56. The higher the value,
        the more strict the access pattern check. For example, if you
        set the value to 48, InnoDB triggers a linear
        read-ahead request only when 48 pages in the current extent have
        been accessed sequentially. If the value is 8,
        InnoDB would trigger an asynchronous
        read-ahead even if as few as 8 pages in the extent were accessed
        sequentially. You can set the value of this parameter in the
        MySQL configuration
        file, or change it dynamically with the SET
        GLOBAL command, which requires the
        SUPER privilege.
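        For example, to require the stricter pattern of 48 sequential
        page accesses mentioned above before read-ahead is triggered:

SET GLOBAL innodb_read_ahead_threshold=48;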
      
        Random read-ahead is a
        technique that predicts when pages might be needed soon based on
        pages already in the buffer pool, regardless of the order in
        which those pages were read. If 13 consecutive pages from the
        same extent are found in the buffer pool, InnoDB asynchronously
        issues a request to prefetch the remaining pages of the extent.
        To enable this feature, set the configuration variable
        innodb_random_read_ahead to
        ON.
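        For example, to turn on random read-ahead at runtime:

SET GLOBAL innodb_random_read_ahead=ON;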
      
        The SHOW ENGINE INNODB STATUS command
        displays statistics to help you evaluate the effectiveness of
        the read-ahead algorithm. Statistics include counter information
        for the
        Innodb_buffer_pool_read_ahead,
        Innodb_buffer_pool_read_ahead_evicted,
        and Innodb_buffer_pool_read_ahead_rnd global
        status variables. This information can be useful when
        fine-tuning the
        innodb_random_read_ahead
        setting.
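        The same counters can also be read directly from the status
        variables, for example:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';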
      
For more information about I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O” and Section 8.12.3, “Optimizing Disk I/O”.
        InnoDB performs certain tasks in the background, including
        flushing of
        dirty pages
        (those pages that have been changed but are not yet written to
        the database files) from the
        buffer pool.
        InnoDB flushes buffer pool pages if the percentage of dirty
        pages in the buffer pool exceeds
        innodb_max_dirty_pages_pct. As
        of MySQL 5.7.5, InnoDB flushes buffer pool pages if the
        percentage of dirty pages in the buffer pool is greater than or
        equal to
        innodb_max_dirty_pages_pct
        (Bug#13029450).
      
InnoDB uses an algorithm to estimate the required rate of flushing, based on the speed of redo log generation and the current rate of flushing. The intent is to smooth overall performance by ensuring that buffer flush activity keeps up with the need to keep the buffer pool “clean”. Automatically adjusting the rate of flushing can help to avoid sudden dips in throughput, when excessive buffer pool flushing limits the I/O capacity available for ordinary read and write activity.
        InnoDB uses its log files in a circular fashion. Before reusing
        a portion of a log file, InnoDB flushes to disk all dirty buffer
        pool pages whose redo entries are contained in that portion of
        the log file, a process known as a
        sharp checkpoint.
        If a workload is write-intensive, it generates a lot of redo
        information, all written to the log file. If all available space
        in the log files is used up, a sharp checkpoint occurs, causing
        a temporary reduction in throughput. This situation can happen
        even though
        innodb_max_dirty_pages_pct is
        not reached.
      
InnoDB uses a heuristic-based algorithm to avoid such a scenario, by measuring the number of dirty pages in the buffer pool and the rate at which redo is being generated. Based on these numbers, InnoDB decides how many dirty pages to flush from the buffer pool each second. This self-adapting algorithm is able to deal with sudden changes in the workload.
Internal benchmarking has also shown that this algorithm not only maintains throughput over time, but can also improve overall throughput significantly.
        Because adaptive flushing can significantly affect the I/O
        pattern of a workload, the
        innodb_adaptive_flushing
        configuration parameter lets you turn off this feature. The
        default value for
        innodb_adaptive_flushing
        is TRUE, enabling the adaptive flushing
        algorithm. You can set the value of this parameter in the MySQL
        option file (my.cnf or
        my.ini) or change it dynamically with the
        SET GLOBAL command, which requires the
        SUPER privilege.
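        For example, to turn off adaptive flushing at runtime:

SET GLOBAL innodb_adaptive_flushing=OFF;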
      
For more information about InnoDB I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
Rather than using a strict LRU algorithm, InnoDB uses a technique to minimize the amount of data that is brought into the buffer pool and never accessed again. The goal is to make sure that frequently accessed (“hot”) pages remain in the buffer pool, even as read-ahead and full table scans bring in new blocks that might or might not be accessed afterward.
        Newly read blocks are inserted into the middle of the LRU list.
        All newly read pages are inserted at a location that by default
        is 3/8 from the tail of the LRU list. The
        pages are moved to the front of the list (the most-recently used
        end) when they are accessed in the buffer pool for the first
        time. Thus pages that are never accessed never make it to the
        front portion of the LRU list, and “age out” sooner
        than with a strict LRU approach. This arrangement divides the
        LRU list into two segments, where the pages downstream of the
        insertion point are considered “old” and are
        desirable victims for LRU eviction.
      
For an explanation of the inner workings of the InnoDB buffer pool and the specifics of its LRU replacement algorithm, see Section 8.10.1, “The InnoDB Buffer Pool”.
        You can control the insertion point in the LRU list, and choose
        whether InnoDB applies the same optimization to blocks brought
        into the buffer pool by table or index scans. The configuration
        parameter
        innodb_old_blocks_pct
        controls the percentage of “old” blocks in the LRU
        list. The default value of
        innodb_old_blocks_pct
        is 37, corresponding to the original fixed
        ratio of 3/8. The value range is 5 (new pages
        in the buffer pool age out very quickly) to
        95 (only 5% of the buffer pool is reserved
        for hot pages, making the algorithm close to the familiar LRU
        strategy).
      
        The optimization that keeps the buffer pool from being churned
        by read-ahead can avoid similar problems due to table or index
        scans. In these scans, a data page is typically accessed a few
        times in quick succession and is never touched again. The
        configuration parameter
        innodb_old_blocks_time
        specifies the time window (in milliseconds) after the first
        access to a page during which it can be accessed without being
        moved to the front (most-recently used end) of the LRU list. The
        default value of
        innodb_old_blocks_time is
        1000. Increasing this value makes it more likely
        that blocks age out of the buffer pool quickly.
      
        Both innodb_old_blocks_pct and
        innodb_old_blocks_time are
        dynamic, global and can be specified in the MySQL option file
        (my.cnf or my.ini) or
        changed at runtime with the SET GLOBAL
        command. Changing the setting requires the
        SUPER privilege.
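        For example, both parameters could be set in the option file; the
        values shown are the defaults described in this section:

[mysqld]
innodb_old_blocks_pct=37
innodb_old_blocks_time=1000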
      
        To help you gauge the effect of setting these parameters, the
        SHOW ENGINE INNODB STATUS command reports
        additional statistics. The BUFFER POOL AND
        MEMORY section looks like:
      
Total memory allocated 1107296256; in additional pool allocated 0
Dictionary memory allocated 80360
Buffer pool size   65535
Free buffers       0
Database pages     63920
Old database pages 23600
Modified db pages  34969
Pending reads 32
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 414946, not young 2930673
1274.75 youngs/s, 16521.90 non-youngs/s
Pages read 486005, created 3178, written 160585
2132.37 reads/s, 3.40 creates/s, 323.74 writes/s
Buffer pool hit rate 950 / 1000, young-making rate 30 / 1000 not 392 / 1000
Pages read ahead 1510.10/s, evicted without access 0.00/s
LRU len: 63920, unzip_LRU len: 0
I/O sum[43690]:cur[221], unzip sum[0]:cur[0]
            Old database pages is the number of pages
            in the “old” segment of the LRU list.
          
            Pages made young and not
            young are the total numbers of “old”
            pages that have been made young or not, respectively.
          
            youngs/s and
            non-youngs/s are the rates at which page
            accesses to the “old” pages have resulted in
            making such pages young or not, respectively, since the
            last invocation of the command.
          
            young-making rate and
            not provides the same rate but in terms
            of overall buffer pool accesses instead of accesses just to
            the “old” pages.
          Per second averages provided in InnoDB
          Monitor output are based on the elapsed time between the
          current time and the last time InnoDB
          Monitor output was printed.
Because the effects of these parameters can vary widely based on your hardware configuration, your data, and the details of your workload, always benchmark to verify the effectiveness before changing these settings in any performance-critical or production environment.
        In mixed workloads where most of the activity is OLTP type with
        periodic batch reporting queries which result in large scans,
        setting the value of
        innodb_old_blocks_time
        during the batch runs can help keep the working set of the
        normal workload in the buffer pool.
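        One possible pattern, sketched here with an arbitrary value of
        5000 milliseconds, is to raise
        innodb_old_blocks_time just for the batch reporting window and
        then restore the default afterward:

SET GLOBAL innodb_old_blocks_time=5000;
-- run the batch reporting queries here
SET GLOBAL innodb_old_blocks_time=1000;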
      
        When scanning large tables that cannot fit entirely in the
        buffer pool, setting
        innodb_old_blocks_pct
        to a small value keeps the data that is only read once from
        consuming a significant portion of the buffer pool. For example,
        setting innodb_old_blocks_pct=5 restricts
        this data that is only read once to 5% of the buffer pool.
      
        When scanning small tables that do fit into memory, there is
        less overhead for moving pages around within the buffer pool, so
        you can leave
        innodb_old_blocks_pct at its
        default value, or even higher, such as
        innodb_old_blocks_pct=50.
      
        The effect of the
        innodb_old_blocks_time
        parameter is harder to predict than the
        innodb_old_blocks_pct
        parameter, is relatively small, and varies more with the
        workload. To arrive at an optimal value, conduct your own
        benchmarks if the performance improvement from adjusting
        innodb_old_blocks_pct
        is not sufficient.
      
For more information about the InnoDB buffer pool, see Section 8.10.1, “The InnoDB Buffer Pool”.
        For systems with buffer pools in the multi-gigabyte range,
        dividing the buffer pool into separate instances can improve
        concurrency, by reducing contention as different threads read
        and write to cached pages. Multiple buffer
        pool instances are configured using the
        innodb_buffer_pool_instances
        configuration option, and you might also adjust the
        innodb_buffer_pool_size value.
      
When the InnoDB buffer pool is large, many data requests can be satisfied by retrieving from memory. You might encounter bottlenecks from multiple threads trying to access the buffer pool at once. You can enable multiple buffer pools to minimize this contention. Each page that is stored in or read from the buffer pool is assigned to one of the buffer pools randomly, using a hashing function. Each buffer pool manages its own free lists, flush lists, LRUs, and all other data structures connected to a buffer pool, and is protected by its own buffer pool mutex.
        To enable multiple buffer pool instances, set the
        innodb_buffer_pool_instances configuration
        option to a value greater than 1 (the default) up to 64 (the
        maximum). This option takes effect only when you set the
        innodb_buffer_pool_size to a size of 1
        gigabyte or more. The total size you specify is divided among
        all the buffer pools. For best efficiency, specify a combination
        of innodb_buffer_pool_instances
        and innodb_buffer_pool_size so
        that each buffer pool instance is at least 1 gigabyte.
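        For example, a configuration such as the following divides an 8GB
        buffer pool into 8 instances of 1GB each; the sizes are chosen
        only for illustration:

[mysqld]
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8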
      
For more information about the InnoDB buffer pool, see Section 8.10.1, “The InnoDB Buffer Pool”.
        To avoid a lengthy warmup
        period after restarting the server, particularly for instances
        with large InnoDB
        buffer pools, you can
        save the InnoDB buffer pool state at server
        shutdown and restore the buffer pool to the same state at server
        startup.
          The
          innodb_buffer_pool_dump_at_shutdown
          and
          innodb_buffer_pool_load_at_startup
          configuration options are enabled by default as of MySQL
          5.7.7, and the default value for
          innodb_buffer_pool_dump_pct
          is reduced from 100 to 25.
        After you restart a busy server, there is typically a warmup
        period with steadily increasing throughput, as disk pages that
        were in the InnoDB buffer pool are brought
        back into memory (as the same data is queried, updated, and so
        on). The ability to restore the buffer pool to the pre-shutdown
        state shortens the warmup period as it allows you to immediately
        reload disk pages that were in the buffer pool before the
        restart, rather than waiting for DML operations to access the
        corresponding rows. The I/O requests can be performed in large
        batches, making the overall I/O faster. The page loading happens
        in the background, and does not delay the database startup.
      
In addition to saving the buffer pool state at shutdown and restoring it at startup, you can also save and restore the buffer pool state at any time, while the server is running. For example, you might save the state of the buffer pool after reaching a stable throughput under a steady workload. You might restore the previous buffer pool state after running reports or maintenance jobs that bring data pages into the buffer pool that are only needed during the time period for those operations, or after some other period with a non-typical workload.
        Although the buffer pool itself could be many gigabytes in size,
        the data that InnoDB saves on disk is tiny by
        comparison. Only tablespace IDs and page IDs necessary to locate
        the appropriate pages are saved to disk. This information is
        derived from the
        INNODB_BUFFER_PAGE_LRU
        INFORMATION_SCHEMA table. By default,
        tablespace ID and page ID data is saved in a file named
        ib_buffer_pool, which is saved to the
        InnoDB data directory. The file name can be
        modified using the
        innodb_buffer_pool_filename
        configuration parameter.
      
Because the data is cached in and aged out of the buffer pool as it is with regular database operations, there is no problem if the disk pages are recently updated, or if a DML operation involves data that has not yet been loaded. The loading mechanism skips requested pages that no longer exist.
The underlying mechanism involves a background thread that is dispatched to perform the dump and load operations.
Disk pages from compressed tables are loaded into the buffer pool in their compressed form. Uncompression happens as usual when the page contents are accessed during the course of DML operations. Because decompression is a CPU-intensive process, it is more efficient for concurrency to perform the operation in one of the connection threads rather than in the single thread that performs the buffer pool restore operation.
        Before you dump pages from the buffer pool, configure the
        percentage of most-recently-used buffer pool pages that you want
        to dump by setting the
        innodb_buffer_pool_dump_pct
        option. If you plan to dump buffer pool pages while the server
        is running, you can configure the option dynamically:
      
SET GLOBAL innodb_buffer_pool_dump_pct=40;
        If you plan to dump buffer pool pages at server shutdown, set
        innodb_buffer_pool_dump_pct in
        your configuration file.
      
[mysqld]
innodb_buffer_pool_dump_pct=40
        The innodb_buffer_pool_dump_pct
        default value was changed from 100 (dump all pages) to 25 (dump
        25% of most-recently-used pages) in MySQL 5.7.7 when
        innodb_buffer_pool_dump_at_shutdown
        and
        innodb_buffer_pool_load_at_startup
        were enabled by default.
        To save the state of the InnoDB buffer pool
        at server shutdown, issue the statement:
      
SET GLOBAL innodb_buffer_pool_dump_at_shutdown=ON;
        innodb_buffer_pool_dump_at_shutdown
        is enabled by default in MySQL 5.7.7.
      
        To save the state of the InnoDB buffer pool
        while MySQL server is running, issue the statement:
SET GLOBAL innodb_buffer_pool_dump_now=ON;
        To restore the InnoDB buffer pool state at
        server startup, specify the
        --innodb_buffer_pool_load_at_startup option
        when starting the server:
      
mysqld --innodb_buffer_pool_load_at_startup=ON
        innodb_buffer_pool_load_at_startup
        is enabled by default in MySQL 5.7.7.
      
        To restore the InnoDB buffer pool state while
        MySQL is running, issue the statement:
SET GLOBAL innodb_buffer_pool_load_now=ON;
        To display progress when saving the InnoDB
        buffer pool state to disk, use one of the following options:
      
SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';
or:
SELECT variable_value FROM information_schema.global_status WHERE variable_name = 'INNODB_BUFFER_POOL_DUMP_STATUS';
If the operation has not yet started, “not started” is returned. If the operation is complete, the completion time is printed (e.g. Finished at 110505 12:18:02). If the operation is in progress, status information is provided (e.g. Dumping buffer pool 5/7, page 237/2873).
        To display progress when loading the InnoDB
        buffer pool, use one of the following options:
      
SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';
or:
SELECT variable_value FROM information_schema.global_status WHERE variable_name = 'INNODB_BUFFER_POOL_LOAD_STATUS';
If the operation has not yet started, “not started” is returned. If the operation is complete, the completion time is printed (e.g. Finished at 110505 12:23:24). If the operation is in progress, status information is provided (e.g. Loaded 123/22301 pages).
To abort a buffer pool load operation, issue the statement:
SET GLOBAL innodb_buffer_pool_load_abort=ON;
As of MySQL 5.7.6, you can monitor buffer pool load progress using Performance Schema.
          The following example demonstrates how to enable the
          stage/innodb/buffer pool load stage event
          instrument and related consumer tables to monitor buffer pool
          load progress.
        
For information about buffer pool dump and load procedures used in this example, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”. For information about Performance Schema stage event instruments and related consumers, see Section 22.9.5, “Performance Schema Stage Event Tables”.
              Enable the stage/innodb/buffer pool
              load instrument:
            
mysql> UPDATE setup_instruments SET ENABLED = 'YES'
       WHERE NAME LIKE 'stage/innodb/buffer%';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
              Enable the stage event consumer tables, which include
              events_stages_current,
              events_stages_history, and
              events_stages_history_long.
            
mysql> UPDATE setup_consumers SET ENABLED = 'YES'
       WHERE NAME LIKE '%stages%';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3  Changed: 3  Warnings: 0
              Dump the current buffer pool state by enabling
              innodb_buffer_pool_dump_now.
            
mysql> SET GLOBAL innodb_buffer_pool_dump_now=ON;
Query OK, 0 rows affected (0.00 sec)
Check the buffer pool dump status to ensure that the operation has completed.
mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status'\G
*************************** 1. row ***************************
Variable_name: Innodb_buffer_pool_dump_status
        Value: Buffer pool(s) dump completed at 150202 16:38:58
1 row in set (0.01 sec)
              Load the buffer pool by enabling
              innodb_buffer_pool_load_now:
            
mysql> SET GLOBAL innodb_buffer_pool_load_now=ON;
Query OK, 0 rows affected (0.01 sec)
              Check the current status of the buffer pool load operation
              by querying the Performance Schema
              events_stages_current table.
              The WORK_COMPLETED column shows the
              number of buffer pool pages loaded. The
              WORK_ESTIMATED column provides an
              estimate of the remaining work, in pages.
            
mysql> SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
       FROM events_stages_current;
+-------------------------------+----------------+----------------+
| EVENT_NAME                    | WORK_COMPLETED | WORK_ESTIMATED |
+-------------------------------+----------------+----------------+
| stage/innodb/buffer pool load |           5353 |           7167 |
+-------------------------------+----------------+----------------+
1 row in set (0.00 sec)
              The events_stages_current
              table returns an empty set if the buffer pool load
              operation has completed. In this case, you can check the
              events_stages_history table
              to view data for the completed event. For example:
            
mysql> SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
       FROM events_stages_history;
+-------------------------------+----------------+----------------+
| EVENT_NAME                    | WORK_COMPLETED | WORK_ESTIMATED |
+-------------------------------+----------------+----------------+
| stage/innodb/buffer pool load |           7167 |           7167 |
+-------------------------------+----------------+----------------+
1 row in set (0.00 sec)
            You can also monitor buffer pool load progress using
            Performance Schema when loading the buffer pool at startup
            using
            innodb_buffer_pool_load_at_startup.
            In this case, the stage/innodb/buffer pool
            load instrument and related consumers must also be
            enabled at startup. For more information, see
            Section 22.2.2, “Performance Schema Startup Configuration”.
        The configuration options
        innodb_flush_neighbors and
        innodb_lru_scan_depth let you
        fine-tune certain aspects of the
        flushing process for the
        InnoDB
        buffer pool. These
        options primarily help write-intensive
        workloads. With heavy
        DML activity, flushing can fall
        behind if it is not aggressive enough, resulting in excessive
        memory use in the buffer pool; or, disk writes due to flushing
        can saturate your I/O capacity if that mechanism is too
        aggressive. The ideal settings depend on your workload, data
        access patterns, and storage configuration (for example, whether
        data is stored on HDD or
        SSD devices).
      
        For systems with constant heavy
        workloads, or workloads
        that fluctuate widely, several configuration options let you
        fine-tune the flushing
        behavior for InnoDB tables:
        innodb_adaptive_flushing_lwm,
        innodb_max_dirty_pages_pct_lwm,
        innodb_io_capacity_max, and
        innodb_flushing_avg_loops.
        These options feed into the formula used by the
        innodb_adaptive_flushing
        option.
      
        The innodb_adaptive_flushing,
        innodb_io_capacity and
        innodb_max_dirty_pages_pct
        options are limited or extended by the following options:
        innodb_adaptive_flushing_lwm,
        innodb_io_capacity_max and
        innodb_max_dirty_pages_pct_lwm:
            The InnoDB
            adaptive
            flushing mechanism is not appropriate in all cases.
            It gives the most benefit when the
            redo log is in danger
            of filling up. The
            innodb_adaptive_flushing_lwm
            option specifies a “low water mark” percentage
            of redo log capacity; when that threshold is crossed,
            InnoDB turns on adaptive flushing even if
            not specified by the
            innodb_adaptive_flushing
            option.
          
            If flushing activity falls far behind,
            InnoDB can flush more aggressively than
            specified by
            innodb_io_capacity.
            innodb_io_capacity_max
            represents an upper limit on the I/O capacity used in such
            emergency situations, so that the spike in I/O does not
            consume all the capacity of the server.
          
            InnoDB tries to flush data from the
            buffer pool so that the percentage of dirty pages does not
            exceed the value of
            innodb_max_dirty_pages_pct.
            The default value for
            innodb_max_dirty_pages_pct
            is 75.
              The
              innodb_max_dirty_pages_pct
              setting establishes a target for flushing activity. It
              does not affect the rate of flushing. For information
              about managing the rate of flushing, see
              Section 14.3.3.2, “Configuring the Rate of InnoDB Buffer Pool Flushing”.
            The
            innodb_max_dirty_pages_pct_lwm
            option specifies a “low water mark” value that
            represents the percentage of dirty pages where pre-flushing
            is enabled to control the dirty page ratio and ideally
            prevent the percentage of dirty pages from reaching
            innodb_max_dirty_pages_pct.
            A value of
            innodb_max_dirty_pages_pct_lwm=0
            disables the “pre-flushing” behavior.
Most of the options referenced above are most applicable to servers that run write-heavy workloads for long periods of time and have few periods of reduced load in which to catch up with changes waiting to be written to disk.
        innodb_flushing_avg_loops
        defines the number of iterations for which
        InnoDB keeps the previously calculated
        snapshot of the flushing state, which controls how quickly
        adaptive flushing responds to foreground load changes. Setting a
        high value for
        innodb_flushing_avg_loops means
        that InnoDB keeps the previously calculated
        snapshot longer, so adaptive flushing responds more slowly. A
        high value also reduces positive feedback between foreground and
        background work, but when setting a high value it is important
        to ensure that InnoDB redo log utilization
        does not reach 75% (the hardcoded limit at which async flushing
        starts) and that the
        innodb_max_dirty_pages_pct
        setting keeps the number of dirty pages to a level that is
        appropriate for the workload.
      
        Systems with consistent workloads, a large
        innodb_log_file_size, and small
        spikes that do not reach 75% redo log space utilization should
        use a high
        innodb_flushing_avg_loops value
        to keep flushing as smooth as possible. For systems with extreme
        load spikes or log files that do not provide a lot of space,
        consider a smaller
        innodb_flushing_avg_loops
        value. The smaller value will allow flushing to closely track
        the load and help avoid reaching 75% redo log space utilization.
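        For example, assuming the parameter can be adjusted at runtime, a
        lower value such as the following (chosen only for illustration)
        makes adaptive flushing track load changes more closely:

SET GLOBAL innodb_flushing_avg_loops=10;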
      When InnoDB was developed, the memory
      allocators supplied with operating systems and run-time libraries
      were often lacking in performance and scalability. At that time,
      there were no memory allocator libraries tuned for multi-core
      CPUs. Therefore, InnoDB implemented its own
      memory allocator in the mem subsystem. This
      allocator is guarded by a single mutex, which may become a
      bottleneck.
      InnoDB also implements a wrapper interface
      around the system allocator (malloc and
      free) that is likewise guarded by a single
      mutex.
    
      Today, as multi-core systems have become more widely available,
      and as operating systems have matured, significant improvements
      have been made in the memory allocators provided with operating
      systems. New memory allocators perform better and are more
      scalable than they were in the past. The leading high-performance
      memory allocators include Hoard,
      libumem, mtmalloc,
      ptmalloc, tbbmalloc, and
      TCMalloc. Most workloads, especially those
      where memory is frequently allocated and released (such as
      multi-table joins), benefit from using a more highly tuned memory
      allocator as opposed to the internal,
      InnoDB-specific memory allocator.
    
      You can control whether InnoDB uses its own
      memory allocator or an allocator of the operating system, by
      setting the value of the system configuration parameter
      innodb_use_sys_malloc in the
      MySQL option file (my.cnf or
      my.ini). If set to ON or
      1 (the default), InnoDB uses
      the malloc and free
      functions of the underlying system rather than manage memory pools
      itself. This parameter is not dynamic, and takes effect only when
      the system is started. To continue to use the
      InnoDB memory allocator, set
      innodb_use_sys_malloc to
      0.
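      Because the parameter is not dynamic, place it in the option file;
      for example, to keep using the InnoDB memory
      allocator:

[mysqld]
innodb_use_sys_malloc=0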
    
      When the InnoDB memory allocator is disabled,
      InnoDB ignores the value of the parameter
      innodb_additional_mem_pool_size.
      The InnoDB memory allocator uses an additional
      memory pool for satisfying allocation requests without having to
      fall back to the system memory allocator. When the
      InnoDB memory allocator is disabled, all such
      allocation requests are fulfilled by the system memory allocator.
    
      On Unix-like systems that use dynamic linking, replacing the
      memory allocator may be as easy as making the environment variable
      LD_PRELOAD or
      LD_LIBRARY_PATH point to the dynamic library
      that implements the allocator. On other systems, some relinking
      may be necessary. Please refer to the documentation of the memory
      allocator library of your choice.
    
      Since InnoDB cannot track all memory use when
      the system memory allocator is used
      (innodb_use_sys_malloc is
      ON), the section “BUFFER POOL AND
      MEMORY” in the output of the SHOW ENGINE INNODB
      STATUS command only includes the buffer pool statistics
      in the “Total memory allocated”. Any memory allocated
      using the mem subsystem or using
      ut_malloc is excluded.
        innodb_use_sys_malloc and
        innodb_additional_mem_pool_size
        were deprecated in MySQL 5.6.3 and are removed in MySQL 5.7.4.
      For more information about the performance implications of
      InnoDB memory usage, see
      Section 8.10, “Buffering and Caching”.
      When INSERT,
      UPDATE, and
      DELETE operations are performed on
      a table, the values of indexed columns (particularly the values of
      secondary keys) are often in an unsorted order, requiring
      substantial I/O to bring secondary indexes up to date.
      InnoDB has a
      change buffer that
      caches changes to secondary index entries when the relevant
      page is not in the
      buffer pool, thus avoiding
      expensive I/O operations by not immediately reading in the page
      from disk. The buffered changes are merged when the page is loaded
      to the buffer pool, and the updated page is later flushed to disk.
      The InnoDB main thread merges buffered changes
      when the server is nearly idle, and during a
      slow shutdown.
    
Because it can result in fewer disk reads and writes, the change buffer feature is most valuable for workloads that are I/O-bound, for example applications with a high volume of DML operations such as bulk inserts.
However, the change buffer occupies a part of the buffer pool, reducing the memory available to cache data pages. If the working set almost fits in the buffer pool, or if your tables have relatively few secondary indexes, it may be useful to disable change buffering. If the working set fits entirely within the buffer, change buffering does not impose extra overhead, because it only applies to pages that are not in the buffer pool.
      You can control the extent to which InnoDB
      performs change buffering using the
      innodb_change_buffering
      configuration parameter. You can enable or disable buffering for
      inserts, delete operations (when index records are initially
      marked for deletion) and purge operations (when index records are
      physically deleted). An update operation is a combination of an
      insert and a delete. The default
      innodb_change_buffering value is
      all.
    
      Permitted innodb_change_buffering
      values include:
          all
        
The default value: buffer inserts, delete-marking operations, and purges.
          none
        
Do not buffer any operations.
          inserts
        
Buffer insert operations.
          deletes
        
Buffer delete-marking operations.
          changes
        
Buffer both inserts and delete-marking operations.
          purges
        
Buffer the physical deletion operations that happen in the background.
      You can set the
      innodb_change_buffering parameter
      in the MySQL option file (my.cnf or
      my.ini) or change it dynamically with the
      SET
      GLOBAL command, which requires the
      SUPER privilege. Changing the setting affects
      the buffering of new operations; the merging of existing buffered
      entries is not affected.
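      For example, assuming a session that holds the SUPER privilege, the following
      illustrative statements disable change buffering at runtime and then confirm the
      new setting:

mysql> SET GLOBAL innodb_change_buffering='none';
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';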
    
For related information, see Section 14.2.6.5, “Change Buffer”. For information about configuring change buffer size, see Section 14.3.5.1, “Configuring the Change Buffer Maximum Size”.
        As of MySQL 5.6.2, the
        innodb_change_buffer_max_size
        configuration option allows you to configure the maximum size of
        the change buffer as a percentage of the total size of the
        buffer pool. By default,
        innodb_change_buffer_max_size
        is set to 25. The maximum setting is 50.
      
        You might consider increasing
        innodb_change_buffer_max_size
        on a MySQL server with heavy insert, update, and delete
        activity, where change buffer merging does not keep pace with
        new change buffer entries, causing the change buffer to reach
        its maximum size limit.
      
        You might consider decreasing
        innodb_change_buffer_max_size
        on a MySQL server with static data used for reporting, or if the
        change buffer consumes too much of the memory space that is
        shared with the buffer pool, causing pages to age out of the
        buffer pool sooner than desired.
      
        Test different settings with a representative workload to
        determine an optimal configuration. The
        innodb_change_buffer_max_size
        setting is dynamic, which allows you to modify the setting without
        restarting the server.
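        For example, the following statements raise the limit at runtime and read back
        the current value; the value 30 is purely illustrative:

mysql> SET GLOBAL innodb_change_buffer_max_size=30;
mysql> SELECT @@GLOBAL.innodb_change_buffer_max_size;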
      InnoDB uses operating system
      threads to process requests
      from user transactions. (Transactions may issue many requests to
      InnoDB before they commit or roll back.) On
      modern operating systems and servers with multi-core processors,
      where context switching is efficient, most workloads run well
      without any limit on the number of concurrent threads. Scalability
      improvements in MySQL 5.5 and up reduce the need to limit the
      number of concurrently executing threads inside
      InnoDB.
    
      In situations where it is helpful to minimize context switching
      between threads, InnoDB can use a number of
      techniques to limit the number of concurrently executing operating
      system threads (and thus the number of requests that are processed
      at any one time). When InnoDB receives a new
      request from a user session, if the number of threads concurrently
      executing is at a pre-defined limit, the new request sleeps for a
      short time before it tries again. A request that cannot be
      rescheduled after the sleep is put in a first-in/first-out queue
      and eventually is processed. Threads waiting for locks are not
      counted in the number of concurrently executing threads.
    
      You can limit the number of concurrent threads by setting the
      configuration parameter
      innodb_thread_concurrency.
      Once the number of executing threads reaches this limit,
      additional threads sleep for a number of microseconds, set by the
      configuration parameter
      innodb_thread_sleep_delay,
      before being placed into the queue.
    
      Previously, it required experimentation to find the optimal value
      for innodb_thread_sleep_delay,
      and the optimal value could change depending on the workload. In
      MySQL 5.6.3 and higher, you can set the configuration option
      innodb_adaptive_max_sleep_delay
      to the highest value you would allow for
      innodb_thread_sleep_delay, and
      InnoDB automatically adjusts
      innodb_thread_sleep_delay up or
      down depending on the current thread-scheduling activity. This
      dynamic adjustment helps the thread scheduling mechanism to work
      smoothly during times when the system is lightly loaded and when
      it is operating near full capacity.
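      As a sketch only, the following statements cap concurrency at 16 threads and set
      an upper bound of 150000 microseconds for the automatically adjusted sleep delay;
      both values are illustrative, and the appropriate settings depend on your
      workload:

mysql> SET GLOBAL innodb_thread_concurrency=16;
mysql> SET GLOBAL innodb_adaptive_max_sleep_delay=150000;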
    
      The default value for
      innodb_thread_concurrency
      and the implied default limit on the number of concurrent threads
      has been changed in various releases of MySQL and
      InnoDB. Currently, the default value of
      innodb_thread_concurrency is
      0, so that by default there is no limit on the
      number of concurrently executing threads.
    
      Note that InnoDB causes threads to sleep only
      when the number of concurrent threads is limited. When there is no
      limit on the number of threads, all contend equally to be
      scheduled. That is, if
      innodb_thread_concurrency is
      0, the value of
      innodb_thread_sleep_delay is
      ignored.
    
      When there is a limit on the number of threads (when
      innodb_thread_concurrency is >
      0), InnoDB reduces context switching overhead
      by permitting multiple requests made during the execution of a
      single SQL statement to enter
      InnoDB without observing the limit set by
      innodb_thread_concurrency.
      Since an SQL statement (such as a join) may comprise multiple row
      operations within InnoDB,
      InnoDB assigns a specified number of
      “tickets” that allow a thread to be scheduled
      repeatedly with minimal overhead.
    
      When a new SQL statement starts, a thread has no tickets, and it
      must observe
      innodb_thread_concurrency.
      Once the thread is entitled to enter InnoDB, it
      is assigned a number of tickets that it can use for subsequently
      entering InnoDB to perform row operations. If
      the tickets run out, the thread is evicted, and
      innodb_thread_concurrency
      is observed again, which may place the thread back into the
      first-in/first-out queue of waiting threads. When the thread is
      once again entitled to enter InnoDB, tickets
      are assigned again. The number of tickets assigned is specified by
      the global option
      innodb_concurrency_tickets,
      which is 5000 by default. A thread that is waiting for a lock is
      given one ticket once the lock becomes available.
    
      The correct values of these variables depend on your environment
      and workload. Try a range of different values to determine what
      value works for your applications. Before limiting the number of
      concurrently executing threads, review configuration options that
      may improve the performance of InnoDB on
      multi-core and multi-processor computers, such as
      innodb_adaptive_hash_index.
    
For general performance information about MySQL thread handling, see Section 8.12.6.1, “How MySQL Uses Threads for Client Connections”.
      InnoDB uses background threads
      to service various types of I/O requests. You can configure the
      number of background threads that service read and write I/O on
      data pages, using the configuration parameters
      innodb_read_io_threads and
      innodb_write_io_threads. These
      parameters signify the number of background threads used for read
      and write requests respectively. They are effective on all
      supported platforms. You can set the value of these parameters in
      the MySQL option file (my.cnf or
      my.ini); you cannot change them dynamically.
      The default value for these parameters is 4 and
      the permissible values range from 1-64.
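      An illustrative option-file entry might look like the following; the values shown
      are examples only and take effect at the next server startup:

[mysqld]
# Example values only; each parameter accepts a value from 1 to 64.
innodb_read_io_threads=8
innodb_write_io_threads=8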
    
      The purpose of these configurable background threads is to make InnoDB more
      scalable on high-end systems. Each background thread can handle up to 256 pending
      I/O requests. A major source of background I/O is the
      read-ahead requests. InnoDB
      tries to balance the load of incoming requests in such way that
      most of the background threads share work equally. InnoDB also
      attempts to allocate read requests from the same extent to the
      same thread to increase the chances of coalescing the requests
      together. If you have a high end I/O subsystem and you see more
      than 64 ×
      innodb_read_io_threads pending
      read requests in SHOW ENGINE INNODB STATUS, you
      might gain by increasing the value of
      innodb_read_io_threads.
    
For more information about InnoDB I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
The master thread in InnoDB is a thread that performs various tasks in the background. Most of these tasks are I/O related, such as flushing dirty pages from the buffer pool or writing changes from the insert buffer to the appropriate secondary indexes. The master thread attempts to perform these tasks in a way that does not adversely affect the normal working of the server. It tries to estimate the free I/O bandwidth available and tune its activities to take advantage of this free capacity. Historically, InnoDB has used a hard coded value of 100 IOPs (input/output operations per second) as the total I/O capacity of the server.
      The parameter innodb_io_capacity
      indicates the overall I/O capacity available to InnoDB. This
      parameter should be set to approximately the number of I/O
      operations that the system can perform per second. The value
      depends on your system configuration. When
      innodb_io_capacity is set,
      the master thread estimates the I/O bandwidth available for
      background tasks based on the set value. Setting the value to
      100 reverts to the old behavior.
    
      You can set the value of
      innodb_io_capacity to any
      number 100 or greater. The default value is
      200, reflecting that the performance of typical
      modern I/O devices is higher than in the early days of MySQL.
      Typically, values around the previous default of 100 are
      appropriate for consumer-level storage devices, such as hard
      drives up to 7200 RPMs. Faster hard drives, RAID configurations,
      and SSDs benefit from higher values.
    
      The innodb_io_capacity setting is
      a total limit for all buffer pool instances. When dirty pages are
      flushed, the innodb_io_capacity
      limit is divided equally among buffer pool instances. For more
      information, see the
      innodb_io_capacity system
      variable description.
    
      You can set the value of this parameter in the MySQL option file
      (my.cnf or my.ini) or change
      it dynamically with the SET GLOBAL command,
      which requires the SUPER privilege.
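      For example, on a server backed by faster storage you might raise the limit at
      runtime; the value 1000 below is purely illustrative:

mysql> SET GLOBAL innodb_io_capacity=1000;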
    
      The innodb_flush_sync
      configuration option, introduced in MySQL 5.7.8, causes the
      innodb_io_capacity setting to be
      ignored during bursts of I/O activity that occur at checkpoints.
      innodb_flush_sync is enabled by
      default.
    
      Formerly, the InnoDB master thread also
      performed any needed purge
      operations. In MySQL 5.6.5 and higher, those I/O operations are
      moved to other background threads, whose number is controlled by
      the innodb_purge_threads
      configuration option.
    
For more information about InnoDB I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
Many InnoDB mutexes and rw-locks are reserved for a short time. On a multi-core system, it can be more efficient for a thread to continuously check if it can acquire a mutex or rw-lock for a while before sleeping. If the mutex or rw-lock becomes available during this polling period, the thread can continue immediately, in the same time slice. However, too-frequent polling by multiple threads of a shared object can cause “cache ping pong”, different processors invalidating portions of each others' cache. InnoDB minimizes this issue by waiting a random time between subsequent polls. The delay is implemented as a busy loop.
      You can control the maximum delay between testing a mutex or
      rw-lock using the parameter
      innodb_spin_wait_delay. The
      duration of the delay loop depends on the C compiler and the
      target processor. (In the 100MHz Pentium era, the unit of delay
      was one microsecond.) On a system where all processor cores share
      a fast cache memory, you might reduce the maximum delay or disable
      the busy loop altogether by setting
      innodb_spin_wait_delay=0. On a system with
      multiple processor chips, the effect of cache invalidation can be
      more significant and you might increase the maximum delay.
    
      The default value of
      innodb_spin_wait_delay is
      6. The spin wait delay is a dynamic global
      parameter that you can specify in the MySQL option file
      (my.cnf or my.ini) or change
      at runtime with the command SET GLOBAL
      innodb_spin_wait_delay=delay,
      where delay is the desired maximum delay. Changing the setting
      requires the SUPER privilege.
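      For example, the following illustrative statement increases the maximum delay at
      runtime; use a value of 0 instead to disable the busy loop entirely:

mysql> SET GLOBAL innodb_spin_wait_delay=12;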
    
For performance considerations for InnoDB locking operations, see Section 8.11, “Optimizing Locking Operations”.
The purge operations (a type of garbage collection) that InnoDB performs automatically is now done in one or more separate threads, rather than as part of the master thread. This change improves scalability, because the main database operations run independently from maintenance work happening in the background.
      To control this feature, increase the value of the configuration
      option innodb_purge_threads. If
      DML action is concentrated on a single table or a few tables, keep
      the setting low so that the threads do not contend with each other
      for access to the busy tables. If DML operations are spread across
      many tables, increase the setting. Its maximum is 32.
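      An illustrative option-file entry is shown below; the value 4 is an example only,
      and the setting takes effect at server startup:

[mysqld]
# Example only: several purge threads suit DML spread across many tables.
innodb_purge_threads=4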
    
      There is another related configuration option,
      innodb_purge_batch_size with a
      default value of 300 and maximum value of 5000. This option is
      mainly intended for experimentation and tuning of purge
      operations, and should not be interesting to typical users.
    
For more information about InnoDB I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        Plan stability is a
        desirable goal for your biggest and most important queries.
        InnoDB has always computed statistics for
        each InnoDB table to help the optimizer find
        the most efficient
        query execution
        plan. Now you can make these statistics persistent, so
        that the index usage and join order for a particular query is
        less likely to change.
      
        This feature is on by default, enabled by the configuration
        option innodb_stats_persistent.
      
        You control how much sampling is done to collect the statistics
        by setting the
        innodb_stats_persistent_sample_pages
        configuration option.
      
        The configuration option
        innodb_stats_auto_recalc
        determines whether the statistics are calculated automatically
        whenever a table undergoes substantial changes (to more than 10%
        of the rows).
          Because of the asynchronous nature of automatic statistics
          recalculation (which occurs in the background), statistics may
          not be recalculated instantly after running a DML operation
          that affects more than 10% of a table, even when
          innodb_stats_auto_recalc is
          enabled. In some cases, statistics recalculation may be
          delayed by a few seconds. If up-to-date statistics are
          required immediately after changing significant portions of a
          table, run ANALYZE
          TABLE to initiate a synchronous (foreground)
          recalculation of statistics.
        If innodb_stats_auto_recalc is
        disabled, ensure the accuracy of optimizer statistics by issuing
        the ANALYZE TABLE statement for
        each applicable table after making substantial changes to
        indexed columns. You might run this statement in your setup
        scripts after representative data has been loaded into the
        table, and run it periodically after DML operations
        significantly change the contents of indexed columns, or on a
        schedule at times of low activity. When a new index is added to
        an existing table, index statistics are calculated and added to
        the innodb_index_stats table regardless of
        the value of
        innodb_stats_auto_recalc.
          To ensure statistics are gathered when a new index is created,
          either enable the
          innodb_stats_auto_recalc
          option, or run ANALYZE TABLE
          after creating each new index when the persistent statistics
          mode is enabled.
        You can set the
        innodb_stats_persistent and
        innodb_stats_auto_recalc
        options at the global level before creating a table, or use the
        STATS_PERSISTENT,
        STATS_AUTO_RECALC, and
        STATS_SAMPLE_PAGES clauses on the
        CREATE TABLE and
        ALTER TABLE statements, to
        override the system-wide setting and configure persistent
        statistics for individual tables.
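        The following sketch shows the per-table clauses; the table and column names
        are hypothetical, and the values shown are examples only:

CREATE TABLE orders (
  id INT NOT NULL PRIMARY KEY,
  customer_id INT,
  KEY idx_customer (customer_id)
) ENGINE=InnoDB,
  STATS_PERSISTENT=1,
  STATS_AUTO_RECALC=1,
  STATS_SAMPLE_PAGES=25;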
      
Formerly, these statistics were cleared on each server restart and after some other operations, and recomputed when the table was next accessed. The statistics are computed using a random sampling technique that could produce different estimates the next time, leading to different choices in the execution plan and thus variations in query performance.
        To revert to the previous method of collecting statistics that
        are periodically erased, run the command ALTER TABLE
        tbl_name STATS_PERSISTENT=0. For related information, see
        Section 14.3.11.2, “Configuring Non-Persistent Optimizer Statistics Parameters”.
          The MySQL query optimizer uses estimated
          statistics about key
          distributions to choose the indexes for an execution plan,
          based on the relative
          selectivity of the
          index. Operations such as ANALYZE
          TABLE cause InnoDB to sample
          random pages from each index on a table to estimate the
          cardinality of the
          index. (This technique is known as
          random dives.)
        
          To give you control over the quality of the statistics
          estimate (and thus better information for the query
          optimizer), you can change the number of sampled pages using
          the parameter
          innodb_stats_persistent_sample_pages,
          which can be set at runtime.
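          For example, the following statements increase the sample size and then
          rebuild the statistics for a table so that the new setting takes effect
          immediately; the value 64 and the table name are illustrative:

mysql> SET GLOBAL innodb_stats_persistent_sample_pages=64;
mysql> ANALYZE TABLE t1;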
        
          innodb_stats_persistent_sample_pages
          has a default value of 20. As a general guideline, consider
          modifying this parameter when encountering the following
          issues:
              Statistics are not accurate enough and the
              optimizer chooses suboptimal plans, as shown by
              EXPLAIN output. The
              accuracy of statistics can be checked by comparing the
              actual cardinality of an index (as returned by running
              SELECT
              DISTINCT on the index columns) with the
              estimates provided in the
              mysql.innodb_index_stats persistent
              statistics table.
            
              If it is determined that statistics are not accurate
              enough, the value of
              innodb_stats_persistent_sample_pages
              should be increased until the statistics estimates are
              sufficiently accurate. Increasing
              innodb_stats_persistent_sample_pages
              too much, however, could cause
              ANALYZE TABLE to run
              slowly.
            
              ANALYZE TABLE is
              too slow. In this case
              innodb_stats_persistent_sample_pages
              should be decreased until ANALYZE
              TABLE execution time is acceptable. Decreasing
              the value too much, however, could lead to the first
              problem of inaccurate statistics and suboptimal query
              execution plans.
            
              If a balance cannot be achieved between accurate
              statistics and ANALYZE
              TABLE execution time, consider decreasing the
              number of indexed columns in the table or limiting the
              number of partitions to reduce
              ANALYZE TABLE complexity.
              The number of columns in the table's primary key is also
              important to consider, as primary key columns are appended
              to each non-unique index.
            
For related information, see Section 14.3.11.3, “Estimating ANALYZE TABLE Complexity for InnoDB Tables”.
          The persistent statistics feature relies on the internally
          managed tables in the mysql database, named
          innodb_table_stats and
          innodb_index_stats. These tables are set up
          automatically in all install, upgrade, and build-from-source
          procedures.
Table 14.2 Columns of innodb_table_stats
| Column name | Description | 
|---|---|
| database_name | Database name | 
| table_name | Table name, partition name, or subpartition name | 
| last_update | A timestamp indicating the last time that InnoDBupdated this row | 
| n_rows | The number of rows in the table | 
| clustered_index_size | The size of the primary index, in pages | 
| sum_of_other_index_sizes | The total size of other (non-primary) indexes, in pages | 
Table 14.3 Columns of innodb_index_stats
| Column name | Description | 
|---|---|
| database_name | Database name | 
| table_name | Table name, partition name, or subpartition name | 
| index_name | Index name | 
| last_update | A timestamp indicating the last time that InnoDBupdated this row | 
| stat_name | The name of the statistic, whose value is reported in the stat_valuecolumn | 
| stat_value | The value of the statistic that is named in stat_namecolumn | 
| sample_size | The number of pages sampled for the estimate provided in the stat_valuecolumn | 
| stat_description | Description of the statistic that is named in the stat_namecolumn | 
          Both the innodb_table_stats and
          innodb_index_stats tables include a
          last_update column showing when
          InnoDB last updated index statistics, as
          shown in the following example:
        
mysql> select * from innodb_table_stats \G
*************************** 1. row ***************************
           database_name: sakila
              table_name: actor
             last_update: 2014-05-28 16:16:44
                  n_rows: 200
    clustered_index_size: 1
sum_of_other_index_sizes: 1
...
mysql> select * from innodb_index_stats \G
*************************** 1. row ***************************
   database_name: sakila
      table_name: actor
      index_name: PRIMARY
     last_update: 2014-05-28 16:16:44
       stat_name: n_diff_pfx01
      stat_value: 200
     sample_size: 1
     ...
          The innodb_table_stats and
          innodb_index_stats tables are ordinary
          tables and can be updated manually. The ability to update
          statistics manually makes it possible to force a specific
          query optimization plan or test alternative plans without
          modifying the database. If you manually update statistics,
          issue the FLUSH TABLE
          tbl_name command to make
          MySQL reload the updated statistics.
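          The following sketch is purely illustrative: it pretends that table
          test.t1 contains one million rows, writes that estimate into the
          statistics table, and then reloads the statistics:

UPDATE mysql.innodb_table_stats
   SET n_rows = 1000000, last_update = NOW()
 WHERE database_name = 'test' AND table_name = 't1';

FLUSH TABLE test.t1;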
          The innodb_table_stats table contains one
          row per table. The data collected is demonstrated in the
          following example.
        
          Table t1 contains a primary index (columns
          a, b), a secondary index
          (columns c, d), and a
          unique index (columns e,
          f):
        
CREATE TABLE t1 (
  a INT, b INT, c INT, d INT, e INT, f INT,
  PRIMARY KEY (a, b),
  KEY i1 (c, d),
  UNIQUE KEY i2uniq (e, f)
) ENGINE=INNODB;
After inserting five rows of sample data, the table appears as follows:
mysql> SELECT * FROM t1;
+---+---+------+------+------+------+
| a | b | c    | d    | e    | f    |
+---+---+------+------+------+------+
| 1 | 1 |   10 |   11 |  100 |  101 |
| 1 | 2 |   10 |   11 |  200 |  102 |
| 1 | 3 |   10 |   11 |  100 |  103 |
| 1 | 4 |   10 |   12 |  200 |  104 |
| 1 | 5 |   10 |   12 |  100 |  105 |
+---+---+------+------+------+------+
5 rows in set (0.00 sec)
          To immediately update statistics, run
          ANALYZE TABLE (if
          innodb_stats_auto_recalc is
          enabled, statistics are updated automatically within a few
          seconds assuming that the 10% threshold for changed table rows
          is reached):
        
mysql> ANALYZE TABLE t1;
+---------+---------+----------+----------+
| Table   | Op      | Msg_type | Msg_text |
+---------+---------+----------+----------+
| test.t1 | analyze | status   | OK       |
+---------+---------+----------+----------+
1 row in set (0.02 sec)
          Table statistics for table t1 show the last
          time InnoDB updated the table statistics
          (2014-03-14 14:36:34), the number of rows
          in the table (5), the clustered index size
          (1 page), and the combined size of the
          other indexes (2 pages).
        
mysql> SELECT * FROM mysql.innodb_table_stats WHERE table_name like 't1'\G
*************************** 1. row ***************************
           database_name: test
              table_name: t1
             last_update: 2014-03-14 14:36:34
                  n_rows: 5
    clustered_index_size: 1
sum_of_other_index_sizes: 2
1 row in set (0.00 sec)      
          The innodb_index_stats table contains
          multiple rows for each index. Each row in the
          innodb_index_stats table provides data
          related to a particular index statistic which is named in the
          stat_name column and described in the
          stat_description column. For example:
        
mysql> SELECT index_name, stat_name, stat_value, stat_description
    -> FROM mysql.innodb_index_stats WHERE table_name like 't1';
+------------+--------------+------------+-----------------------------------+
| index_name | stat_name    | stat_value | stat_description                  |
+------------+--------------+------------+-----------------------------------+
| PRIMARY    | n_diff_pfx01 |          1 | a                                 |
| PRIMARY    | n_diff_pfx02 |          5 | a,b                               |
| PRIMARY    | n_leaf_pages |          1 | Number of leaf pages in the index |
| PRIMARY    | size         |          1 | Number of pages in the index      |
| i1         | n_diff_pfx01 |          1 | c                                 |
| i1         | n_diff_pfx02 |          2 | c,d                               |
| i1         | n_diff_pfx03 |          2 | c,d,a                             |
| i1         | n_diff_pfx04 |          5 | c,d,a,b                           |
| i1         | n_leaf_pages |          1 | Number of leaf pages in the index |
| i1         | size         |          1 | Number of pages in the index      |
| i2uniq     | n_diff_pfx01 |          2 | e                                 |
| i2uniq     | n_diff_pfx02 |          5 | e,f                               |
| i2uniq     | n_leaf_pages |          1 | Number of leaf pages in the index |
| i2uniq     | size         |          1 | Number of pages in the index      |
+------------+--------------+------------+-----------------------------------+
14 rows in set (0.00 sec)    
          The stat_name column shows the following
          types of statistics:
              size: Where
              stat_name=size, the
              stat_value column displays the total
              number of pages in the index.
            
              n_leaf_pages: Where
              stat_name=n_leaf_pages,
              the stat_value column displays the
              number of leaf pages in the index.
            
              n_diff_pfxNN: Where
              stat_name=n_diff_pfx01,
              the stat_value column displays the
              number of distinct values in the first column of the
              index. Where
              stat_name=n_diff_pfx02,
              the stat_value column displays the
              number of distinct values in the first two columns of the
              index, and so on. Additionally, where
              stat_name=n_diff_pfxNN,
              the stat_description column shows a
              comma-separated list of the index columns that are
              counted.
          To further illustrate the
          n_diff_pfxNN
          statistic, which provides cardinality data, consider the
          t1 table example. As shown below, the
          t1 table is created with a primary index
          (columns a, b), a
          secondary index (columns c,
          d), and a unique index (columns
          e, f):
        
CREATE TABLE t1 (
  a INT, b INT, c INT, d INT, e INT, f INT,
  PRIMARY KEY (a, b),
  KEY i1 (c, d),
  UNIQUE KEY i2uniq (e, f)
) ENGINE=INNODB;
After inserting five rows of sample data, the table appears as follows:
mysql> SELECT * FROM t1;
+---+---+------+------+------+------+
| a | b | c    | d    | e    | f    |
+---+---+------+------+------+------+
| 1 | 1 |   10 |   11 |  100 |  101 |
| 1 | 2 |   10 |   11 |  200 |  102 |
| 1 | 3 |   10 |   11 |  100 |  103 |
| 1 | 4 |   10 |   12 |  200 |  104 |
| 1 | 5 |   10 |   12 |  100 |  105 |
+---+---+------+------+------+------+
5 rows in set (0.00 sec)
          When you query the index_name,
          stat_name, stat_value,
          and stat_description where
          stat_name LIKE 'n_diff%', the following
          result set is returned:
        
mysql> SELECT index_name, stat_name, stat_value, stat_description 
    -> FROM mysql.innodb_index_stats
    -> WHERE table_name like 't1' AND stat_name LIKE 'n_diff%';
+------------+--------------+------------+------------------+
| index_name | stat_name    | stat_value | stat_description |
+------------+--------------+------------+------------------+
| PRIMARY    | n_diff_pfx01 |          1 | a                |
| PRIMARY    | n_diff_pfx02 |          5 | a,b              |
| i1         | n_diff_pfx01 |          1 | c                |
| i1         | n_diff_pfx02 |          2 | c,d              |
| i1         | n_diff_pfx03 |          2 | c,d,a            |
| i1         | n_diff_pfx04 |          5 | c,d,a,b          |
| i2uniq     | n_diff_pfx01 |          2 | e                |
| i2uniq     | n_diff_pfx02 |          5 | e,f              |
+------------+--------------+------------+------------------+
8 rows in set (0.00 sec)
          For the PRIMARY index, there are two
          n_diff% rows. The number of rows is equal
          to the number of columns in the index.
            For non-unique indexes, InnoDB appends
            the columns of the primary key.
              Where
              index_name=PRIMARY
              and
              stat_name=n_diff_pfx01,
              the stat_value is 1,
              which indicates that there is a single distinct value in
              the first column of the index (column
              a). The number of distinct values in
              column a is confirmed by viewing the
              data in column a in table
              t1, in which there is a single distinct
              value (1). The counted column
              (a) is shown in the
              stat_description column of the result
              set.
            
              Where
              index_name=PRIMARY
              and
              stat_name=n_diff_pfx02,
              the stat_value is 5,
              which indicates that there are five distinct values in the
              two columns of the index (a,b). The
              number of distinct values in columns a
              and b is confirmed by viewing the data
              in columns a and b
              in table t1, in which there are five
              distinct values: (1,1),
              (1,2), (1,3),
              (1,4) and (1,5). The
              counted columns (a,b) are shown in the
              stat_description column of the result
              set.
          For the secondary index (i1), there are
          four n_diff% rows. Only two columns are
          defined for the secondary index (c,d) but
          there are four n_diff% rows for the
          secondary index because InnoDB suffixes all
          non-unique indexes with the primary key. As a result, there
          are four n_diff% rows instead of two to
          account for both the secondary index columns
          (c,d) and the primary key columns
          (a,b).
              Where index_name=i1
              and
              stat_name=n_diff_pfx01,
              the stat_value is 1,
              which indicates that there is a single distinct value in
              the first column of the index (column
              c). The number of distinct values in
              column c is confirmed by viewing the
              data in column c in table
              t1, in which there is a single distinct
              value: (10). The counted column
              (c) is shown in the
              stat_description column of the result
              set.
            
              Where index_name=i1
              and
              stat_name=n_diff_pfx02,
              the stat_value is 2,
              which indicates that there are two distinct values in the
              first two columns of the index (c,d).
              The number of distinct values in columns
              c and d is confirmed
              by viewing the data in columns c and
              d in table t1, in
              which there are two distinct values:
              (10,11) and (10,12).
              The counted columns (c,d) are shown in
              the stat_description column of the
              result set.
            
              Where index_name=i1
              and
              stat_name=n_diff_pfx03,
              the stat_value is 2,
              which indicates that there are two distinct values in the
              first three columns of the index
              (c,d,a). The number of distinct values
              in columns c, d, and
              a is confirmed by viewing the data in
              columns c, d, and
              a in table t1, in
              which there are two distinct values:
              (10,11,1) and
              (10,12,1). The counted columns
              (c,d,a) are shown in the
              stat_description column of the result
              set.
            
              Where index_name=i1
              and
              stat_name=n_diff_pfx04,
              the stat_value is 5,
              which indicates that there are five distinct values in the
              four columns of the index (c,d,a,b).
              The number of distinct values in columns
              c, d,
              a and b is confirmed
              by viewing the data in columns c,
              d, a, and
              b in table t1, in
              which there are five distinct values:
              (10,11,1,1),
              (10,11,1,2),
              (10,11,1,3),
              (10,12,1,4) and
              (10,12,1,5). The counted columns
              (c,d,a,b) are shown in the
              stat_description column of the result
              set.
          For the unique index (i2uniq), there are
          two n_diff% rows.
              Where
              index_name=i2uniq
              and
              stat_name=n_diff_pfx01,
              the stat_value is 2,
              which indicates that there are two distinct values in the
              first column of the index (column e).
              The number of distinct values in column
              e is confirmed by viewing the data in
              column e in table
              t1, in which there are two distinct
              values: (100) and
              (200). The counted column
              (e) is shown in the
              stat_description column of the result
              set.
            
              Where
              index_name=i2uniq
              and
              stat_name=n_diff_pfx02,
              the stat_value is 5,
              which indicates that there are five distinct values in the
              two columns of the index (e,f). The
              number of distinct values in columns e
              and f is confirmed by viewing the data
              in columns e and f
              in table t1, in which there are five
              distinct values: (100,101),
              (200,102),
              (100,103), (200,104)
              and (100,105). The counted columns
              (e,f) are shown in the
              stat_description column of the result
              set.
          The size of indexes for tables, partitions, or subpartitions
          can be retrieved using the
          innodb_index_stats table. In the following
          example, index sizes are retrieved for table
          t1. For a definition of table
          t1 and corresponding index statistics, see
          Section 14.3.11.1.3, “InnoDB Persistent Statistics Tables Example”.
        
mysql> SELECT SUM(stat_value) pages, index_name,
    -> SUM(stat_value)*@@innodb_page_size size
    -> FROM mysql.innodb_index_stats WHERE table_name='t1'
    -> AND stat_name = 'size' GROUP BY index_name;
+-------+------------+-------+
| pages | index_name | size  |
+-------+------------+-------+
|     1 | PRIMARY    | 16384 |
|     1 | i1         | 16384 |
|     1 | i2uniq     | 16384 |
+-------+------------+-------+
3 rows in set (0.00 sec)    
          For partitions or subpartitions, the same query with a
          modified WHERE clause can be used to
          retrieve index sizes. For example, the following query
          retrieves index sizes for partitions of table
          t1:
        
mysql> SELECT SUM(stat_value) pages, index_name,
    -> SUM(stat_value)*@@innodb_page_size size
    -> FROM mysql.innodb_index_stats WHERE table_name like 't1#P%'
-> AND stat_name = 'size' GROUP BY index_name;     
        As of MySQL 5.6.6, optimizer statistics are persistent by
        default, enabled by the
        innodb_stats_persistent
        configuration option. This section describes configuration of
        non-persistent optimizer statistics, which is applicable when
        innodb_stats_persistent=OFF or
        when individual tables are created or altered with
        STATS_PERSISTENT=0.
        For information about persistent optimizer statistics, see
        Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
      
        The MySQL query optimizer uses estimated
        statistics about key
        distributions to choose the indexes for an execution plan, based
        on the relative
        selectivity of the
        index. Certain operations cause InnoDB to
        sample random pages from each index on a table to estimate the
        cardinality of the
        index. (This technique is known as
        random dives.) These
        operations include the ANALYZE
        TABLE statement, the SHOW TABLE
        STATUS statement, and accessing the table for the
        first time after a restart.
      
        To give you control over the quality of the statistics estimate
        (and thus better information for the query optimizer), you can
        change the number of sampled pages using the parameter
        innodb_stats_transient_sample_pages.
        The default number of sampled pages is 8, which could be
        insufficient to produce an accurate estimate, leading to poor
        index choices by the query optimizer. This technique is
        especially important for large tables and tables used in
        joins. Unnecessary
        full table scans for
        such tables can be a substantial performance issue. See
        Section 8.2.1.20, “How to Avoid Full Table Scans” for tips on tuning
        such queries.
      
        You can set the global parameter
        innodb_stats_transient_sample_pages
        at runtime.
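        For example, the following statement increases the transient sample size at
        runtime; the value 16 is illustrative only:

mysql> SET GLOBAL innodb_stats_transient_sample_pages=16;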
          The value of
          innodb_stats_transient_sample_pages
          affects the index sampling for all InnoDB
          tables and indexes when
          innodb_stats_persistent=0.
          There are the following potentially significant impacts when
          you change the index sample size:
Small values like 1 or 2 can result in inaccurate estimates of cardinality.
                Increasing the
                innodb_stats_transient_sample_pages
                value might require more disk reads. Values much larger
                than 8 (say, 100), can cause a big slowdown in the time
                it takes to open a table or execute SHOW TABLE
                STATUS.
              
The optimizer might choose very different query plans based on different estimates of index selectivity.
        To have statistics updated when metadata statements such as
        SHOW TABLE STATUS or
        SHOW INDEX are run, or when
        accessing the
        INFORMATION_SCHEMA.TABLES or
        INFORMATION_SCHEMA.STATISTICS
        tables, execute the statement SET GLOBAL
        innodb_stats_on_metadata=ON (or 1).
          When persistent optimizer statistics were enabled by default
          in MySQL 5.6.6, the default setting for
          innodb_stats_on_metadata was
          changed to OFF. Enabling this variable may
          reduce access speed for schemas that have a large number of
          tables or indexes, and reduce stability of execution plans for
          queries that involve InnoDB tables.
        All InnoDB tables are opened, and the
        statistics are re-estimated for all associated indexes, when the
        mysql client starts with the
        --auto-rehash setting on (the
        default). To improve the start up time of the
        mysql client, you can turn auto-rehash off
        using the
        --disable-auto-rehash
        option. The auto-rehash feature enables
        automatic name completion of database, table, and column names
        for interactive users.
      
        Whatever value of
        innodb_stats_transient_sample_pages
        works best for a system, set the option and leave it at that
        value. Choose a value that results in reasonably accurate
        estimates for all tables in your database without requiring
        excessive I/O. Because the statistics are automatically
        recalculated at various times other than on execution of
        ANALYZE TABLE, it does not make
        sense to increase the index sample size, run
        ANALYZE TABLE, then decrease
        sample size again. The more accurate statistics calculated by
        ANALYZE running with a high value of
        innodb_stats_transient_sample_pages
        can be wiped away later.
      
        Smaller tables generally require fewer index samples than larger
        tables do. If your database has many large tables, consider
        using a higher value for
        innodb_stats_transient_sample_pages
        than if you have mostly smaller tables.
        ANALYZE TABLE complexity for
        InnoDB tables is dependent on:
            The number of pages sampled, as defined by
            innodb_stats_persistent_sample_pages.
          
The number of indexed columns in a table
The number of partitions. If a table has no partitions, the number of partitions is considered to be 1.
        Using these parameters, an approximate formula for estimating
        ANALYZE TABLE complexity would
        be:
      
        The value of
        innodb_stats_persistent_sample_pages
        * number of indexed columns in a table * the number of
        partitions
      
        Typically, the greater the resulting value, the greater the
        execution time for ANALYZE TABLE.
          innodb_stats_persistent_sample_pages
          defines the number of pages sampled at a global level. To set
          the number of pages sampled for an individual table, use the
          STATS_SAMPLE_PAGES option with
          CREATE TABLE or
          ALTER TABLE. For more
          information, see Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
        
          If
          innodb_stats_persistent=OFF,
          the number of pages sampled is defined by
          innodb_stats_transient_sample_pages.
          See Section 14.3.11.2, “Configuring Non-Persistent Optimizer Statistics Parameters” for
          additional information.
        For a more in-depth approach to estimating ANALYZE
        TABLE complexity, consider the following example.
      
        In Big
        O notation, ANALYZE TABLE
        complexity is described as:
      
 O(n_sample
  * (n_cols_in_uniq_i
     + n_cols_in_non_uniq_i
     + n_cols_in_pk * (1 + n_non_uniq_i))
  * n_part)

        where:
            n_sample is the number of pages sampled
            (defined by
            innodb_stats_persistent_sample_pages)
          
            n_cols_in_uniq_i is total number of all
            columns in all unique indexes (not counting the primary key
            columns)
          
            n_cols_in_non_uniq_i is the total number
            of all columns in all non-unique indexes
          
            n_cols_in_pk is the number of columns in
            the primary key (if a primary key is not defined,
            InnoDB creates a single column primary
            key internally)
          
            n_non_uniq_i is the number of non-unique
            indexes in the table
          
            n_part is the number of partitions. If no
            partitions are defined, the table is considered to be a
            single partition.
        Now, consider the following table (table t),
        which has a primary key (2 columns), a unique index (2 columns),
        and two non-unique indexes (two columns each):
      
CREATE TABLE t (
  a INT, b INT, c INT, d INT, e INT, f INT, g INT, h INT,
  PRIMARY KEY (a, b),
  UNIQUE KEY i1uniq (c, d),
  KEY i2nonuniq (e, f),
  KEY i3nonuniq (g, h)
);
        For the column and index data required by the algorithm
        described above, query the
        mysql.innodb_index_stats persistent index
        statistics table for table t. The
        n_diff_pfx% statistics show the columns that
        are counted for each index. For example, columns
        a and b are counted for
        the primary key index. For the non-unique indexes, the primary
        key columns (a,b) are counted in addition to the user defined
        columns.
          For additional information about the InnoDB
          persistent statistics tables, see
          Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
SELECT index_name, stat_name, stat_description
FROM mysql.innodb_index_stats
WHERE database_name='test' AND table_name='t'
AND stat_name like 'n_diff_pfx%';

+------------+--------------+------------------+
| index_name | stat_name    | stat_description |
+------------+--------------+------------------+
| PRIMARY    | n_diff_pfx01 | a                |
| PRIMARY    | n_diff_pfx02 | a,b              |
| i1uniq     | n_diff_pfx01 | c                |
| i1uniq     | n_diff_pfx02 | c,d              |
| i2nonuniq  | n_diff_pfx01 | e                |
| i2nonuniq  | n_diff_pfx02 | e,f              |
| i2nonuniq  | n_diff_pfx03 | e,f,a            |
| i2nonuniq  | n_diff_pfx04 | e,f,a,b          |
| i3nonuniq  | n_diff_pfx01 | g                |
| i3nonuniq  | n_diff_pfx02 | g,h              |
| i3nonuniq  | n_diff_pfx03 | g,h,a            |
| i3nonuniq  | n_diff_pfx04 | g,h,a,b          |
+------------+--------------+------------------+
Based on the index statistics data shown above and the table definition, the following values can be determined:
            n_cols_in_uniq_i, the total number of all
            columns in all unique indexes not counting the primary key
            columns, is 2 (c and
            d)
          
            n_cols_in_non_uniq_i, the total number of
            all columns in all non-unique indexes, is 4
            (e, f,
            g and h)
          
            n_cols_in_pk, the number of columns in
            the primary key, is 2 (a and
            b)
          
            n_non_uniq_i, the number of non-unique
            indexes in the table, is 2 (i2nonuniq and
            i3nonuniq)
          
            n_part, the number of partitions, is 1.
        You can now calculate
        innodb_stats_persistent_sample_pages * (2 + 4
        + 2 * (1 + 2)) * 1 to determine the number of leaf pages that
        are scanned. With
        innodb_stats_persistent_sample_pages set to
        the default value of 20, and with a default
        page size of 16 KiB
        (innodb_page_size=16384), you
        can then estimate that 20 * 12 * 16384 bytes
        are read for table t, or about 4
        MiB.
          All 4 MiB may not be read from disk, as
          some leaf pages may already be cached in the buffer pool.
      Starting in MySQL 5.7.6, you can configure the
      MERGE_THRESHOLD value for index pages. If the
      “page-full” percentage for an index page falls below
      the MERGE_THRESHOLD value when a row is deleted
      or when a row is shortened by an
      UPDATE operation,
      InnoDB attempts to merge the index page with a
      neighboring index page. The default
      MERGE_THRESHOLD value is 50, which is the
      previously hard-coded value. The minimum
      MERGE_THRESHOLD value is 1 and the maximum
      value is 50.
    
      When the “page-full” percentage for an index page
      falls below 50%, which is the default
      MERGE_THRESHOLD setting,
      InnoDB attempts to merge the index page with a
      neighboring page. If both pages are close to 50% full, a page
      split can occur soon after the pages are merged. If this
      merge-split behavior occurs frequently, it can have an adverse
      effect on performance. To avoid frequent merge-splits, you can
      lower the MERGE_THRESHOLD value so that
      InnoDB attempts page merges at a lower
      “page-full” percentage. Merging pages at a lower
      page-full percentage leaves more room in index pages and helps
      reduce merge-split behavior.
    
      The MERGE_THRESHOLD for index pages can be
      defined for a table or for individual indexes. A
      MERGE_THRESHOLD value defined for an individual
      index takes priority over a MERGE_THRESHOLD
      value defined for the table. If undefined, the
      MERGE_THRESHOLD value defaults to 50.
      You can set the MERGE_THRESHOLD value for a
      table using the table_option
      COMMENT clause of the
      CREATE TABLE statement. For
      example:
    
CREATE TABLE t1 (
   id INT,
   KEY id_index (id)
) COMMENT='MERGE_THRESHOLD=45';
      You can also set the MERGE_THRESHOLD value for
      an existing table using the
      table_option COMMENT
      clause with ALTER TABLE:
    
CREATE TABLE t1 (
   id INT,
   KEY id_index (id)
);

ALTER TABLE t1 COMMENT='MERGE_THRESHOLD=40';
      To set the MERGE_THRESHOLD value for an
      individual index, you can use the
      index_option COMMENT
      clause with CREATE TABLE,
      ALTER TABLE, or
      CREATE INDEX, as shown in the
      following examples:
          Setting MERGE_THRESHOLD for an individual
          index using CREATE TABLE:
        
CREATE TABLE t1 (
   id INT,
   KEY id_index (id) COMMENT 'MERGE_THRESHOLD=40'
);
          Setting MERGE_THRESHOLD for an individual
          index using ALTER TABLE:
        
CREATE TABLE t1 (
   id INT,
   KEY id_index (id)
);

ALTER TABLE t1 DROP KEY id_index;

ALTER TABLE t1 ADD KEY id_index (id) COMMENT 'MERGE_THRESHOLD=40';
          Setting MERGE_THRESHOLD for an individual
          index using CREATE INDEX:
        
CREATE TABLE t1 (id INT);

CREATE INDEX id_index ON t1 (id) COMMENT 'MERGE_THRESHOLD=40';
        You cannot modify the MERGE_THRESHOLD value
        at the index level for GEN_CLUST_INDEX, which
        is the clustered index created by InnoDB when
        an InnoDB table is created without a primary
        key or unique key index. You can only modify the
        MERGE_THRESHOLD value for
        GEN_CLUST_INDEX by setting
        MERGE_THRESHOLD for the table.
      The current MERGE_THRESHOLD value for an index
      can be obtained by querying the
      INNODB_SYS_INDEXES table. For
      example:
    
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES WHERE NAME='id_index' \G
*************************** 1. row ***************************
       INDEX_ID: 91
           NAME: id_index
       TABLE_ID: 68
           TYPE: 0
       N_FIELDS: 1
        PAGE_NO: 4
          SPACE: 57
MERGE_THRESHOLD: 40
1 row in set (0.00 sec)
      You can use SHOW CREATE TABLE to
      view the MERGE_THRESHOLD value for a table, if
      explicitly defined using the
      table_option COMMENT
      clause:
    
mysql> SHOW CREATE TABLE t2 \G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
  `id` int(11) DEFAULT NULL,
  KEY `id_index` (`id`) COMMENT 'MERGE_THRESHOLD=40'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
        A MERGE_THRESHOLD value defined at the index
        level takes priority over a MERGE_THRESHOLD
        value defined for the table. If undefined,
        MERGE_THRESHOLD defaults to 50%
        (MERGE_THRESHOLD=50), which is the previously
        hard-coded value.
      Likewise, you can use SHOW INDEX to
      view the MERGE_THRESHOLD value for an index, if
      explicitly defined using the
      index_option COMMENT
      clause:
    
mysql> SHOW INDEX FROM t2 \G
*************************** 1. row ***************************
        Table: t2
   Non_unique: 1
     Key_name: id_index
 Seq_in_index: 1
  Column_name: id
    Collation: A
  Cardinality: 0
     Sub_part: NULL
       Packed: NULL
         Null: YES
   Index_type: BTREE
      Comment: 
Index_comment: MERGE_THRESHOLD=40
1 row in set (0.00 sec)
      The INNODB_METRICS table provides two
      counters that can be used to measure the effect of a
      MERGE_THRESHOLD setting on index page merges.
    
mysql> SELECT NAME, COMMENT FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME like '%index_page_merge%';
+-----------------------------+----------------------------------------+
| NAME                        | COMMENT                                |
+-----------------------------+----------------------------------------+
| index_page_merge_attempts   | Number of index page merge attempts    |
| index_page_merge_successful | Number of successful index page merges |
+-----------------------------+----------------------------------------+
2 rows in set (0.00 sec)
      When lowering the MERGE_THRESHOLD value, the
      objectives are:
A smaller number of page merge attempts and successful page merges
A similar number of page merge attempts and successful page merges
      A MERGE_THRESHOLD setting that is too small
      could result in large data files due to an excessive amount of
      empty page space.
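      If these counters are not already enabled on your instance, a minimal
      sketch for turning them on before running your workload is shown below
      (this assumes the counters are disabled by default; the wildcard form is
      accepted by innodb_monitor_enable):

SET GLOBAL innodb_monitor_enable = 'index_page_merge%';
-- ... run the workload you want to measure ...
SELECT NAME, COUNT, STATUS
  FROM INFORMATION_SCHEMA.INNODB_METRICS
  WHERE NAME LIKE 'index_page_merge%';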
    
      For information about using
      INNODB_METRICS counters, see
      Section 14.12.6, “InnoDB INFORMATION_SCHEMA Metrics Table”.
      This section describes how to increase or decrease the size of the
      InnoDB system tablespace.
      The easiest way to increase the size of the
      InnoDB system tablespace is to configure it
      from the beginning to be auto-extending. Specify the
      autoextend attribute for the last data file in
      the tablespace definition. Then InnoDB
      increases the size of that file automatically in 8MB increments
      when it runs out of space. The increment size can be changed by
      setting the value of the
      innodb_autoextend_increment
      system variable, which is measured in megabytes.
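      For example, because the variable is dynamic, you could adjust the
      increment at runtime (the value shown is illustrative):

SET GLOBAL innodb_autoextend_increment = 32;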
    
You can expand the system tablespace by a defined amount by adding another data file:
Shut down the MySQL server.
          If the previous last data file is defined with the keyword
          autoextend, change its definition to use a
          fixed size, based on how large it has actually grown. Check
          the size of the data file, round it down to the closest
          multiple of 1024 × 1024 bytes (= 1MB), and specify this
          rounded size explicitly in
          innodb_data_file_path.
        
          Add a new data file to the end of
          innodb_data_file_path,
          optionally making that file auto-extending. Only the last data
          file in the
          innodb_data_file_path can be
          specified as auto-extending.
        
Start the MySQL server again.
      For example, this tablespace has just one auto-extending data file
      ibdata1:
    
innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:10M:autoextend
Suppose that this data file, over time, has grown to 988MB. Here is the configuration line after modifying the original data file to use a fixed size and adding a new auto-extending data file:
innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:988M;/disk2/ibdata2:50M:autoextend
      When you add a new data file to the system tablespace
      configuration, make sure that the filename does not refer to an
      existing file. InnoDB creates and initializes
      the file when you restart the server.
Currently, you cannot remove a data file from the system tablespace. To decrease the system tablespace size, use this procedure:
          Use mysqldump to dump all your
          InnoDB tables, including
          InnoDB tables located in the MySQL
          database. As of 5.6, there are five InnoDB
          tables included in the MySQL database:
        
mysql> select table_name from information_schema.tables where table_schema='mysql' and engine='InnoDB';
+----------------------+
| table_name           |
+----------------------+
| innodb_index_stats   |
| innodb_table_stats   |
| slave_master_info    |
| slave_relay_log_info |
| slave_worker_info    |
+----------------------+
5 rows in set (0.00 sec)
      Stop the server.
          Remove all the existing tablespace files
          (*.ibd), including the
          ibdata and ib_log
          files. Do not forget to remove *.ibd
          files for tables located in the MySQL database.
        
          Remove any .frm files for
          InnoDB tables.
        
Configure a new tablespace.
Restart the server.
Import the dump files.
        If your databases only use the InnoDB engine,
        it may be simpler to dump all
        databases, stop the server, remove all databases and
        InnoDB log files, restart the server, and
        import the dump files.
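        A sketch of that dump-and-reload approach follows (the dump file name is
        hypothetical; the intermediate steps of stopping the server, removing the
        old files, and reconfiguring the tablespace are as described above):

shell> mysqldump --all-databases --routines --events > all_databases_dump.sql
# ... stop the server, remove the old data files, configure a new tablespace,
#     restart the server ...
shell> mysql < all_databases_dump.sql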
      To change the number or size of InnoDB
      redo log files in MySQL 5.6.7
      or earlier, perform the following steps:
          If innodb_fast_shutdown is
          set to 2, set
          innodb_fast_shutdown to 1:
        
mysql> SET GLOBAL innodb_fast_shutdown = 1;
          After ensuring that
          innodb_fast_shutdown is not
          set to 2, stop the MySQL server and make sure that it shuts
          down without errors (to ensure that there is no information
          for outstanding transactions in the log).
        
Copy the old log files into a safe place in case something went wrong during the shutdown and you need them to recover the tablespace.
Delete the old log files from the log file directory.
          Edit my.cnf to change the log file
          configuration.
        
          Start the MySQL server again. mysqld sees
          that no InnoDB log files exist at
          startup and creates new ones.
      As of MySQL 5.6.8, the
      innodb_fast_shutdown setting is
      no longer relevant when changing the number or the size of
      InnoDB log files. Additionally, you are no
      longer required to remove old log files, although you may still want
      to copy the old log files to a safe place, as a backup. To change
      the number or size of InnoDB log files, perform
      the following steps:
Stop the MySQL server and make sure that it shuts down without errors.
          Edit my.cnf to change the log file
          configuration. To change the log file size, configure
          innodb_log_file_size. To
          increase the number of log files, configure
          innodb_log_files_in_group.
        
Start the MySQL server again.
      If InnoDB detects that the
      innodb_log_file_size differs from
      the redo log file size, it will write a log checkpoint, close and
      remove the old log files, create new log files at the requested
      size, and open the new log files.
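      For example, a my.cnf fragment requesting four 512MB redo log files might
      look like the following (the values are illustrative, not recommendations):

[mysqld]
innodb_log_file_size=512M
innodb_log_files_in_group=4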
      You can use raw disk partitions as data files in the
      InnoDB
      system tablespace.
      This technique enables nonbuffered I/O on Windows and on some
      Linux and Unix systems without file system overhead. Perform tests
      with and without raw partitions to verify whether this change
      actually improves performance on your system.
    
      When you use a raw disk partition, ensure that the user ID that
      runs the MySQL server has read and write privileges for that
      partition. For example, if you run the server as the
      mysql user, the partition must be readable and
      writeable by mysql. If you run the server with
      the --memlock option, the server
      must be run as root, so the partition must be
      readable and writeable by root.
    
The procedures described below involve option file modification. For additional information, see Section 4.2.6, “Using Option Files”.
          When you create a new data file, specify the keyword
          newraw immediately after the data file size
          for the innodb_data_file_path
          option. The partition must be at least as large as the size
          that you specify. Note that 1MB in InnoDB
          is 1024 × 1024 bytes, whereas 1MB in disk specifications
          usually means 1,000,000 bytes.
        
[mysqld]
innodb_data_home_dir=
innodb_data_file_path=/dev/hdd1:3Gnewraw;/dev/hdd2:2Gnewraw
          Restart the server. InnoDB notices the
          newraw keyword and initializes the new
          partition. However, do not create or change any
          InnoDB tables yet. Otherwise, when you next
          restart the server, InnoDB reinitializes
          the partition and your changes are lost. (As a safety measure
          InnoDB prevents users from modifying data
          when any partition with newraw is
          specified.)
        
          After InnoDB has initialized the new
          partition, stop the server, change newraw
          in the data file specification to raw:
        
[mysqld]
innodb_data_home_dir=
innodb_data_file_path=/dev/hdd1:3Graw;/dev/hdd2:2Graw
          Restart the server. InnoDB now permits
          changes to be made.
      On Windows systems, the same steps and accompanying guidelines
      described for Linux and Unix systems apply except that the
      innodb_data_file_path setting
      differs slightly on Windows.
          When you create a new data file, specify the keyword
          newraw immediately after the data file size
          for the innodb_data_file_path
          option:
        
[mysqld]
innodb_data_home_dir=
innodb_data_file_path=//./D::10Gnewraw
          The //./ corresponds to the Windows
          syntax of \\.\ for accessing physical
          drives. In the example above, D: is the
          drive letter of the partition.
        
          Restart the server. InnoDB notices the
          newraw keyword and initializes the new
          partition.
        
          After InnoDB has initialized the new
          partition, stop the server, change newraw
          in the data file specification to raw:
        
[mysqld]
innodb_data_home_dir=
innodb_data_file_path=//./D::10Graw
          Restart the server. InnoDB now permits
          changes to be made.
      Historically, all InnoDB tables and indexes
      were stored in the system
      tablespace. This monolithic approach was targeted at
      machines dedicated entirely to database processing, with carefully
      planned data growth, where any disk storage allocated to MySQL
      would never be needed for other purposes.
      InnoDB's
      file-per-table
      tablespace feature provides a more flexible alternative,
      where each InnoDB table and its indexes are
      stored in a separate
      .ibd data
      file. Each such
      .ibd data
      file represents an individual
      tablespace. This feature is
      controlled by the
      innodb_file_per_table
      configuration option, which is enabled by default in MySQL 5.6.6
      and higher.
          You can reclaim disk space when truncating or dropping a table
          stored in a file-per-table tablespace. Truncating or dropping
          tables stored in the system tablespace creates free space
          internally in the system tablespace data files
          (ibdata files) which
          can only be used for new InnoDB data.
        
          The TRUNCATE TABLE operation is
          faster when run on tables stored in file-per-table tablespaces.
        
          You can store specific tables on separate storage devices, for
          I/O optimization, space management, or backup purposes. In
          previous releases, you had to move entire database directories
          to other drives and create symbolic links in the MySQL data
          directory, as described in Section 8.12.4, “Using Symbolic Links”.
          In MySQL 5.6.6 and higher, you can specify the location of
          each table using the syntax CREATE TABLE ... DATA
          DIRECTORY = absolute_path_to_directory,
          as explained in Section 14.4.5, “Creating a File-Per-Table Tablespace Outside the Data Directory”.
          You can run OPTIMIZE TABLE to
          compact or recreate a file-per-table tablespace. When you run
          an OPTIMIZE TABLE,
          InnoDB creates a new
          .ibd file with a temporary name, using
          only the space required to store actual data. When the
          optimization is complete, InnoDB removes
          the old .ibd file and replaces it with
          the new one. If the previous .ibd file
          grew significantly but the actual data only accounted for a
          portion of its size, running OPTIMIZE
          TABLE can reclaim the unused space.
        
          You can move individual InnoDB tables
          rather than entire databases.
        
          You can copy individual InnoDB tables from
          one MySQL instance to another (known as the
          transportable
          tablespace feature).
        
          Tables created in file-per-table tablespaces use the
          Barracuda file format.
          The Barracuda file format enables features such as
          compressed
          and dynamic row
          formats. Tables created in the system tablespace cannot use
          these features. To take advantage of these features for an
          existing table, enable the
          innodb_file_per_table setting
          and run ALTER TABLE  to place the table in a file-per-table
          tablespace. Before converting tables, refer to
          Section 14.5.4, “Converting Tables from MyISAM to InnoDB”.
        t
          ENGINE=INNODB
          You can enable more efficient storage for tables with large
          BLOB or TEXT columns
          using the dynamic row
          format.
        
File-per-table tablespaces may improve chances for a successful recovery and save time when a corruption occurs, when a server cannot be restarted, or when backup and binary logs are unavailable.
          You can back up or restore individual tables quickly using the
          MySQL Enterprise Backup product, without interrupting the use
          of other InnoDB tables. This is beneficial
          if you have tables that require backup less frequently or on a
          different backup schedule. See
          Partial Backup and Restore Options for details.
        
File-per-table tablespaces are convenient for per-table status reporting when copying or backing up tables.
You can monitor table size at a file system level, without accessing MySQL.
          Common Linux file systems do not permit concurrent writes to a
          single file when
          innodb_flush_method is set to
          O_DIRECT. As a result, there are possible
          performance improvements when using file-per-table tablespaces
          in conjunction with
          innodb_flush_method.
        
The system tablespace stores the data dictionary and undo logs, and has a 64TB size limit. By comparison, each file-per-table tablespace has a 64TB size limit, which provides you with room for growth. See Section D.10.3, “Limits on Table Size” for related information.
With file-per-table tablespaces, each table may have unused space, which can only be utilized by rows of the same table. This could lead to wasted space if not properly managed.
          fsync operations must run on each open
          table rather than on a single file. Because there is a
          separate fsync operation for each file,
          write operations on multiple tables cannot be combined into a
          single I/O operation. This may require
          InnoDB to perform a higher total number of
          fsync operations.
        
mysqld must keep one open file handle per table, which may impact performance if you have numerous tables in file-per-table tablespaces.
More file descriptors are used.
          innodb_file_per_table is
          enabled by default in MySQL 5.6.6 and higher. You may consider
          disabling it if backward compatibility with MySQL 5.5 or 5.1
          is a concern. Disabling
          innodb_file_per_table
          prevents ALTER TABLE from
          moving an InnoDB table from the system
          tablespace to an individual .ibd file in
          cases where ALTER TABLE
          recreates the table (ALGORITHM=COPY).
        
          For example, when restructuring the clustered index for an
          InnoDB table, the table is re-created using
          the current setting for
          innodb_file_per_table. This
          behavior does not apply when adding or dropping
          InnoDB secondary indexes. When a secondary
          index is created without rebuilding the table, the index is
          stored in the same file as the table data, regardless of the
          current innodb_file_per_table
          setting.
        
          If many tables are growing there is potential for more
          fragmentation which can impede DROP
          TABLE and table scan performance. However, when
          fragmentation is managed, having files in their own tablespace
          can improve performance.
        
The buffer pool is scanned when dropping a file-per-table tablespace, which can take several seconds for buffer pools that are tens of gigabytes in size. The scan is performed with a broad internal lock, which may delay other operations. Tables in the system tablespace are not affected.
          The
          innodb_autoextend_increment
          variable, which defines increment size (in MB) for extending
          the size of an auto-extending shared tablespace file when it
          becomes full, does not apply to file-per-table tablespace
          files, which are auto-extending regardless of the
          innodb_autoextend_increment
          setting. The initial extensions are by small amounts, after
          which extensions occur in increments of 4MB.
        The innodb_file_per_table
        option is enabled by default as of MySQL 5.6.6.
      
        To set the
        innodb_file_per_table option at
        startup, start the server with the
        --innodb_file_per_table
        command-line option, or add this line to the
        [mysqld] section of
        my.cnf:
      
[mysqld]
innodb_file_per_table=1
        You can also set
        innodb_file_per_table
        dynamically, while the server is running:
      
SET GLOBAL innodb_file_per_table=1;
        With innodb_file_per_table
        enabled, you can store InnoDB tables in a
        tbl_name.ibd file. Unlike the
        MyISAM storage engine, with
        its separate tbl_name.MYD and
        tbl_name.MYI files for data and
        indexes, InnoDB stores the
        data and the indexes together in a single
        .ibd file. The
        tbl_name.frm file is still created as usual.
        If you disable
        innodb_file_per_table in your
        startup options and restart the server, or disable it with the
        SET GLOBAL command, InnoDB
        creates new tables inside the system tablespace.
      
        You can always read and write any InnoDB
        tables, regardless of the file-per-table setting.
      
        To move a table from the system tablespace to its own
        tablespace, change the
        innodb_file_per_table setting
        and rebuild the table:
      
SET GLOBAL innodb_file_per_table=1;
ALTER TABLE table_name ENGINE=InnoDB;
          InnoDB always needs the system tablespace
          because it puts its internal
          data dictionary
          and undo logs there. The
          .ibd files are not sufficient for
          InnoDB to operate.
        
          When a table is moved out of the system tablespace into its
          own .ibd file, the data files that make
          up the system tablespace remain the same size. The space
          formerly occupied by the table can be reused for new
          InnoDB data, but is not reclaimed for use
          by the operating system. When moving large
          InnoDB tables out of the system tablespace,
          where disk space is limited, you may prefer to enable
          innodb_file_per_table and
          recreate the entire instance using the
          mysqldump command.
      To create a new InnoDB
      file-per-table
      tablespace in a specific location outside the MySQL data
      directory, use the
      DATA DIRECTORY = absolute_path_to_directory
      clause of the CREATE TABLE
      statement.
    
      Plan the location in advance, because you cannot use the
      DATA DIRECTORY clause with the
      ALTER TABLE statement. The
      directory you specify could be on another storage device with
      particular performance or capacity characteristics, such as a fast
      SSD or a high-capacity
      HDD.
    
      Within the destination directory, MySQL creates a subdirectory
      corresponding to the database name, and within that a
      .ibd file for the new table.
      In the database directory beneath the MySQL
      DATADIR directory, MySQL creates
      a table_name.isl file containing the path name
      for the table. The .isl file is treated by MySQL
      like a symbolic link. (Using actual symbolic links
      has never been supported for InnoDB tables.)
    
      The following example demonstrates creating a file-per-table
      tablespace outside the MySQL data directory. It shows the
      .ibd created in the specified directory, and
      the .isl created in the database directory
      beneath the MySQL data directory.
    
mysql> USE test;
Database changed
mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
1 row in set (0.00 sec)

mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';
Query OK, 0 rows affected (0.03 sec)

# MySQL creates a .ibd file for the new table in a subdirectory corresponding
# to the database name
db_user@ubuntu:~/alternative/directory/test$ ls
t1.ibd

# MySQL creates a .isl file containing the path name for the table in a directory
# beneath the MySQL data directory
db_user@ubuntu:~/mysql/data/test$ ls
db.opt  t1.frm  t1.isl
          MySQL initially holds the .ibd file open,
          preventing you from dismounting the device, but might
          eventually close the table if the server is busy. Be careful
          not to accidentally dismount an external device while MySQL is
          running, or to start MySQL while the device is disconnected.
          Attempting to access a table when the associated
          .ibd file is missing causes a serious error
          that requires a server restart.
        
          A server restart might fail if the .ibd
          file is still not at the expected path. In this case, manually
          remove the
          table_name.isl file in the database directory,
          restart the server, and then issue
          DROP TABLE to delete the
          .frm file and remove the information about
          the table from the data
          dictionary.
        
Do not put MySQL tables on an NFS-mounted volume. NFS uses a message-passing protocol to write to files, which could cause data inconsistency if network messages are lost or received out of order.
          If you use an LVM snapshot, file copy, or other file-based
          mechanism to back up the .ibd file, always
          use the FLUSH TABLES
          ... FOR EXPORT statement first to make sure all
          changes that were buffered in memory are
          flushed to disk before the
          backup occurs.
        
          The DATA DIRECTORY clause is a supported
          alternative to using symbolic
          links, which has always been problematic and was never
          supported for individual InnoDB tables.
This section describes how to copy file-per-table tablespaces from one database server to another, otherwise known as the Transportable Tablespaces feature.
      For information about other InnoDB table
      copying methods, see Section 14.5.2, “Moving or Copying InnoDB Tables to Another Machine”.
    
      There are many reasons why you might copy an
      InnoDB
      file-per-table
      tablespace to a different database server:
To run reports without putting extra load on a production server.
To set up identical data for a table on a new slave server.
To restore a backed-up version of a table after a problem or mistake.
As a faster way of moving data around than importing the results of a mysqldump command. The data is available immediately, rather than having to be re-inserted and the indexes rebuilt.
To move a file-per-table tablespace to a server with storage medium that better suits system requirements. For example, you may want to have busy tables on an SSD device, or large tables on a high-capacity HDD device.
          The tablespace copy procedure is only possible when
          innodb_file_per_table is set
          to ON, which is the default setting as of
          MySQL 5.6.6. Tables residing in the shared system tablespace
          cannot be quiesced.
        
When a table is quiesced, only read-only transactions are allowed on the affected table.
When importing a tablespace, the page size must match the page size of the importing instance.
          DISCARD TABLESPACE is not supported for
          partitioned tables meaning that transportable tablespaces is
          also unsupported. If you run
          ALTER
          TABLE ... DISCARD TABLESPACE on a partitioned table,
          the following error is returned: ERROR 1031
          (HY000): Table storage engine for 'part' doesn't have this
          option.
        
          DISCARD TABLESPACE is not supported for
          tablespaces with a parent-child (primary key-foreign key)
          relationship when
          foreign_key_checks is set to
          1. Before discarding a tablespace for
          parent-child tables, set
          foreign_key_checks=0 (see the example following these notes).
        
          ALTER TABLE ...
          IMPORT TABLESPACE does not enforce foreign key
          constraints on imported data. If there are foreign key
          constraints between tables, all tables should be exported at
          the same (logical) point in time.
        
          ALTER TABLE ...
          IMPORT TABLESPACE does not require a
          .cfg metadata file to import a
          tablespace. However, metadata checks are not performed when
          importing without a .cfg file, and a
          warning similar to the following will be issued:
        
Message: InnoDB: IO Read error: (2, No such file or directory) Error opening '.\test\t.cfg',
will attempt to import without schema verification
1 row in set (0.00 sec)
          The ability to import without a .cfg file
          may be more convenient when no schema mismatches are expected.
          Additionally, the ability to import without a
          .cfg file could be useful in crash
          recovery scenarios in which metadata cannot be collected from
          an .ibd file.
        
In MySQL 5.6 or later, importing a tablespace file from another server works if both servers have GA (General Availability) status and their versions are within the same series. Otherwise, the file must have been created on the server into which it is imported.
          In replication scenarios,
          innodb_file_per_table must be
          set to ON on both the master and slave.
        
          On Windows, InnoDB stores database,
          tablespace, and table names internally in lowercase. To avoid
          import problems on case-sensitive operating systems such as
          Linux and UNIX, create all databases, tablespaces, and tables
          using lowercase names. A convenient way to accomplish this is
          to add the following line to the [mysqld]
          section of your my.cnf or
          my.ini file before creating databases,
          tablespaces, or tables:
        
[mysqld]
lower_case_table_names=1
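          Returning to the foreign key limitation noted earlier, a minimal sketch
          of discarding tablespaces for a pair of hypothetical parent/child
          tables linked by a foreign key might look like this:

SET foreign_key_checks=0;
ALTER TABLE child DISCARD TABLESPACE;
ALTER TABLE parent DISCARD TABLESPACE;
SET foreign_key_checks=1;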
This procedure demonstrates how to copy a table stored in a file-per-table tablespace from a running MySQL server instance to another running instance. The same procedure with minor adjustments can be used to perform a full table restore on the same instance.
On the source server, create a table if one does not already exist:
mysql> use test;
mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
On the destination server, create a table if one does not exist:
mysql> use test;
mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
          On the destination server, discard the existing tablespace.
          (Before a tablespace can be imported,
          InnoDB must discard the tablespace that is
          attached to the receiving table.)
        
mysql> ALTER TABLE t DISCARD TABLESPACE;
          On the source server, run
          FLUSH
          TABLES ... FOR EXPORT to quiesce the table and
          create the .cfg metadata file:
        
mysql> use test;
mysql> FLUSH TABLES t FOR EXPORT;
          The metadata (.cfg) file is created in
          the InnoDB data directory.
            FLUSH TABLES ... FOR
            EXPORT is available as of MySQL 5.6.6. The
            statement ensures that changes to the named tables have been
            flushed to disk so that binary table copies can be made
            while the server is running. When
            FLUSH TABLES ... FOR
            EXPORT is run, InnoDB produces
            a .cfg file in the same database
            directory as the table. The .cfg file
            contains metadata used for schema verification when
            importing the tablespace file.
          Copy the .ibd file and
          .cfg metadata file from the source server
          to the destination server. For example:
        
shell> scp /path/to/datadir/test/t.{ibd,cfg} destination-server:/path/to/datadir/test
            The .ibd file and
            .cfg file must be copied before
            releasing the shared locks, as described in the next step.
          On the source server, use
          UNLOCK
          TABLES to release the locks acquired by
          FLUSH
          TABLES ... FOR EXPORT:
        
mysql> use test;
mysql> UNLOCK TABLES;
On the destination server, import the tablespace:
mysql> use test;
mysql> ALTER TABLE t IMPORT TABLESPACE;
            The ALTER TABLE
            ... IMPORT TABLESPACE feature does not enforce
            foreign key constraints on imported data. If there are
            foreign key constraints between tables, all tables should be
            exported at the same (logical) point in time. In this case
            you would stop updating the tables, commit all transactions,
            acquire shared locks on the tables, and then perform the
            export operation.
The following information describes internals and error log messaging for the transportable tablespaces copy procedure.
      When ALTER TABLE ...
      DISCARD TABLESPACE is run on the destination instance:
The table is locked in X mode.
The tablespace is detached from the table.
      When FLUSH TABLES ... FOR
      EXPORT is run on the source instance:
The table being flushed for export is locked in shared mode.
The purge coordinator thread is stopped.
Dirty pages are synchronized to disk.
          Table metadata is written to the binary
          .cfg file.
Expected error log messages for this operation:
2013-07-18 14:47:31 34471 [Note] InnoDB: Sync to disk of '"test"."t"' started.
2013-07-18 14:47:31 34471 [Note] InnoDB: Stopping purge
2013-07-18 14:47:31 34471 [Note] InnoDB: Writing table metadata to './test/t.cfg'
2013-07-18 14:47:31 34471 [Note] InnoDB: Table '"test"."t"' flushed to disk
      When UNLOCK
      TABLES is run on the source instance:
The binary .cfg file is deleted.
The shared lock on the table or tables being imported is released and the purge coordinator thread is restarted.
Expected error log messages for this operation:
2013-07-18 15:01:40 34471 [Note] InnoDB: Deleting the meta-data file './test/t.cfg'
2013-07-18 15:01:40 34471 [Note] InnoDB: Resuming purge
      When ALTER TABLE ...
      IMPORT TABLESPACE is run on the destination instance,
      the import algorithm performs the following operations for each
      tablespace being imported:
Each tablespace page is checked for corruption.
The space ID and log sequence numbers (LSNs) on each page are updated.
Flags are validated and LSN updated for the header page.
Btree pages are updated.
The page state is set to dirty so that it will be written to disk.
Expected error log messages for this operation:
2013-07-18 15:15:01 34960 [Note] InnoDB: Importing tablespace for table 'test/t' that was exported from host 'ubuntu'
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase I - Update all pages
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk - done!
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase III - Flush changes to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase IV - Flush complete
        You may also receive a warning that a tablespace is discarded
        (if you discarded the tablespace for the destination table) and
        a message stating that statistics could not be calculated due to
        a missing .ibd file:
      
2013-07-18 15:14:38 34960 [Warning] InnoDB: Table "test"."t" tablespace is set as discarded.
2013-07-18 15:14:38 7f34d9a37700 InnoDB: cannot calculate statistics for table "test"."t" because
the .ibd file is missing. For help, please refer to
http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html
      As of MySQL 5.6.3, you can store InnoDB
      undo logs in one or more
      separate undo
      tablespaces outside of the
      system tablespace.
      This layout is different from the default configuration where the
      undo log is part of the
      system tablespace.
      The I/O patterns for the undo log make these tablespaces good
      candidates to move to SSD storage,
      while keeping the system tablespace on hard disk storage. Users
      cannot drop the separate tablespaces created to hold
      InnoDB undo logs, or the individual
      segments inside those
      tablespaces.
    
Because these files handle I/O operations formerly done inside the system tablespace, we broaden the definition of system tablespace to include these new files.
Undo logs are also referred to as rollback segments.
This feature involves the following new or renamed configuration options:
          innodb_rollback_segments
          becomes innodb_undo_logs. The
          old name is still available for compatibility.
      Because the InnoDB
      undo log feature involves
      setting two non-dynamic startup variables
      (innodb_undo_tablespaces and
      innodb_undo_directory), this
      feature can only be enabled when initializing a MySQL instance.
To use this feature, follow these steps:
          Decide on a path to hold the undo logs. You will specify that
          path as the argument to the
          innodb_undo_directory option
          in your MySQL configuration file or startup script. For
          embedded MySQL installations, an absolute path must be
          specified.
        
          Decide on a starting value for the
          innodb_undo_logs option. You
          can start with a relatively low value and increase it over
          time to examine the effect on performance.
        
          Decide on a non-zero value for the
          innodb_undo_tablespaces
          option. The multiple undo logs specified by the
          innodb_undo_logs value are
          divided between this number of separate tablespaces
          (represented by
          .ibd
          files). This value is fixed for the life of the MySQL
          instance, so if you are uncertain about the optimal value,
          estimate on the high side.
        
Create a new MySQL instance, using the values you chose in the configuration file or in your MySQL startup script. Use a realistic workload with data volume similar to your production servers. Alternatively, use the transportable tablespaces feature to copy existing database tables to your newly configured MySQL instance. See Section 14.4.6, “Copying File-Per-Table Tablespaces to Another Server” for more information.
Benchmark the performance of I/O intensive workloads.
          Periodically increase the value of
          innodb_undo_logs and rerun
          performance tests. Find the value where you stop experiencing
          gains in I/O performance.
        
Deploy a new production instance using the ideal settings for these options. Set it up as a slave server in a replication configuration, or transfer data from an earlier production instance.
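      Putting the preceding steps together, a minimal my.cnf sketch for a newly
      initialized instance might look like the following (the directory and
      values are illustrative, not recommendations):

[mysqld]
innodb_undo_directory=/ssd1/mysql-undo
innodb_undo_tablespaces=4
innodb_undo_logs=64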
      Keeping the undo logs in separate files allows the MySQL team to
      implement I/O and memory optimizations related to this
      transactional data. For example, because the undo data is written
      to disk and then rarely used (only in case of crash recovery), it
      does not need to be kept in the filesystem memory cache, in turn
      allowing a higher percentage of system memory to be devoted to the
      InnoDB buffer
      pool.
    
      The typical SSD best practice of keeping the
      InnoDB system tablespace on a hard drive and
      moving the per-table tablespaces to SSD, is assisted by moving the
      undo information into separate tablespace files.
      The physical tablespace files are named
      undoN, where
      N is the space ID, including leading
      zeros.
    
Currently, MySQL instances containing separate undo tablespaces cannot be downgraded to earlier releases such as MySQL 5.5 or 5.1.
      To create an InnoDB table, use the
      CREATE TABLE statement. You do not
      need to specify the ENGINE=InnoDB clause if
      InnoDB is defined as the default storage
      engine, which is the default as of MySQL 5.5. You might still use
      the ENGINE=InnoDB clause if you plan to use
      mysqldump or replication to replay the
      CREATE TABLE statement on a server
      where the default storage engine is not InnoDB.
    
-- Default storage engine = InnoDB.
CREATE TABLE t1 (a INT, b CHAR (20), PRIMARY KEY (a));

-- Backward-compatible with older MySQL.
CREATE TABLE t2 (a INT, b CHAR (20), PRIMARY KEY (a)) ENGINE=InnoDB;
      An InnoDB table and its indexes can be created
      in the system
      tablespace or in a
      file-per-table
      tablespace. When
      innodb_file_per_table is enabled,
      which is the default setting as of MySQL 5.6.6, an
      InnoDB table is implicitly created in an
      individual file-per-table tablespace. Conversely, when
      innodb_file_per_table is
      disabled, an InnoDB table is implicitly created
      in the system tablespace.
    
      When you create an InnoDB table, MySQL creates
      a .frm file in a database
      directory under the MySQL data directory. For a table created in a
      file-per-table tablespace, an .ibd
      file is also created. A table created in the system
      tablespace is created in the existing system tablespace
      ibdata files.
    
      Internally, InnoDB adds an entry for each table
      to the InnoDB data dictionary. The entry
      includes the database name. For example, if table
      t1 is created in the test
      database, the data dictionary entry is
      'test/t1'. This means you can create a table of
      the same name (t1) in a different database, and
      the table names do not collide inside InnoDB.
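      For example (the second database name is hypothetical), creating a table
      with the same name in another database produces a distinct data dictionary
      entry:

CREATE DATABASE other_db;
CREATE TABLE other_db.t1 (a INT, b CHAR (20), PRIMARY KEY (a)) ENGINE=InnoDB;
-- Data dictionary entries 'test/t1' and 'other_db/t1' do not collide.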
      To view the properties of InnoDB tables, issue
      a SHOW TABLE STATUS statement:
    
mysql> SHOW TABLE STATUS FROM test LIKE 't%' \G;
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 0
 Avg_row_length: 0
    Data_length: 16384
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2015-03-16 16:26:52
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
1 row in set (0.00 sec)
      In the status output, you see the
      Row format property of
      table t1 is Compact.
      Although that setting is fine for basic experimentation, consider
      using the
      Dynamic
      or
      Compressed
      row format to take advantage of InnoDB features
      such as table compression and off-page storage for long column
      values. Using these row formats requires that
      innodb_file_per_table is enabled
      (the default as of MySQL 5.6.6) and that
      innodb_file_format is set to
      Barracuda:
    
SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=barracuda;
CREATE TABLE t3 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=DYNAMIC;
CREATE TABLE t4 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=COMPRESSED;
      InnoDB table properties may also be queried
      using the InnoDB Information Schema system
      tables:
    
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME='test/t1' \G
*************************** 1. row ***************************
     TABLE_ID: 42
         NAME: test/t1
         FLAG: 1
       N_COLS: 5
        SPACE: 24
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact
ZIP_PAGE_SIZE: 0
1 row in set (0.02 sec)
      Always set up a primary
      key for each InnoDB table, specifying
      the column or columns that:
Are referenced by the most important queries.
Are never left blank.
Never have duplicate values.
Rarely if ever change value once inserted.
      For example, in a table containing information about people, you
      would not create a primary key on (firstname,
      lastname) because more than one person can have the same
      name, some people have blank last names, and sometimes people
      change their names. With so many constraints, often there is not
      an obvious set of columns to use as a primary key, so you create a
      new column with a numeric ID to serve as all or part of the
      primary key. You can declare an
      auto-increment column
      so that ascending values are filled in automatically as rows are
      inserted:
    
-- The value of ID can act like a pointer between related items in different tables.
CREATE TABLE t5 (id INT AUTO_INCREMENT, b CHAR (20), PRIMARY KEY (id));

-- The primary key can consist of more than one column. Any autoinc column must come first.
CREATE TABLE t6 (id INT AUTO_INCREMENT, a INT, b CHAR (20), PRIMARY KEY (id,a));
      Although the table works correctly without defining a primary key,
      the primary key is involved with many aspects of performance and
      is a crucial design aspect for any large or frequently used table.
      It is recommended that you always specify a primary key in the
      CREATE TABLE statement. If you
      create the table, load data, and then run
      ALTER TABLE to add a primary key
      later, that operation is much slower than defining the primary key
      when creating the table.
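      For example (the table names are hypothetical), compare defining the
      primary key up front with adding it after the data is loaded:

-- Faster: the primary key is defined when the table is created.
CREATE TABLE t7 (id INT NOT NULL, b CHAR (20), PRIMARY KEY (id));

-- Slower for a large table: the data is loaded first, then the table
-- must be rebuilt to add the primary key.
CREATE TABLE t8 (id INT NOT NULL, b CHAR (20));
-- ... load the data ...
ALTER TABLE t8 ADD PRIMARY KEY (id);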
    This section describes techniques for moving or copying some or all
    InnoDB tables to a different server. For example,
    you might move an entire MySQL instance to a larger, faster server;
    you might clone an entire MySQL instance to a new replication slave
    server; you might copy individual tables to another server to
    develop and test an application, or to a data warehouse server to
    produce reports.
  
    Techniques for moving or copying InnoDB tables
    include:
    On Windows, InnoDB always stores database and
    table names internally in lowercase. To move databases in a binary
    format from Unix to Windows or from Windows to Unix, create all
    databases and tables using lowercase names. A convenient way to
    accomplish this is to add the following line to the
    [mysqld] section of your
    my.cnf or my.ini file
    before creating any databases or tables:
  
[mysqld]
lower_case_table_names=1
    Introduced in MySQL 5.6.6, the transportable tablespaces feature
    uses FLUSH TABLES ... FOR
    EXPORT to ready InnoDB tables for
    copying from one server instance to another. To use this feature,
    InnoDB tables must be created with
    innodb_file_per_table set to
    ON so that each InnoDB table
    has its own tablespace. For usage information, see
    Section 14.4.6, “Copying File-Per-Table Tablespaces to Another Server”.
    The MySQL Enterprise Backup product lets you back up a running MySQL
    database, including InnoDB and
    MyISAM tables, with minimal disruption to
    operations while producing a consistent snapshot of the database.
    When MySQL Enterprise Backup is copying InnoDB
    tables, reads and writes to both InnoDB and
    MyISAM tables can continue. During the copying of
    MyISAM and other non-InnoDB tables, reads (but
    not writes) to those tables are permitted. In addition, MySQL
    Enterprise Backup can create compressed backup files, and back up
    subsets of InnoDB tables. In conjunction with the
    MySQL binary log, you can perform point-in-time recovery. MySQL
    Enterprise Backup is included as part of the MySQL Enterprise
    subscription.
  
For more details about MySQL Enterprise Backup, see Section 25.2, “MySQL Enterprise Backup Overview”.
    You can move an InnoDB database simply by copying
    all the relevant files listed under "Cold Backups" in
    Section 14.15, “InnoDB Backup and Recovery”.
  
    Like MyISAM data files, InnoDB
    data and log files are binary-compatible on all platforms having the
    same floating-point number format. If the floating-point formats
    differ but you have not used FLOAT or
    DOUBLE data types in your tables,
    then the procedure is the same: simply copy the relevant files.
    When you move or copy .ibd files, the database
    directory name must be the same on the source and destination
    systems. The table definition stored in the
    InnoDB shared tablespace includes the database
    name. The transaction IDs and log sequence numbers stored in the
    tablespace files also differ between databases.
  
    To move an .ibd file and the associated table
    from one database to another, use a RENAME
    TABLE statement:
  
RENAME TABLE db1.tbl_name TO db2.tbl_name;
    If you have a “clean” backup of an
    .ibd file, you can restore it to the MySQL
    installation from which it originated as follows:
        The table must not have been dropped or truncated since you
        copied the .ibd file, because doing so
        changes the table ID stored inside the tablespace.
      
        Issue this ALTER TABLE statement
        to delete the current .ibd file:
      
ALTER TABLE tbl_name DISCARD TABLESPACE;
        Copy the backup .ibd file to the proper
        database directory.
      
        Issue this ALTER TABLE statement
        to tell InnoDB to use the new
        .ibd file for the table:
      
ALTER TABLE tbl_name IMPORT TABLESPACE;
          The ALTER TABLE
          ... IMPORT TABLESPACE feature does not enforce
          foreign key constraints on imported data.
    In this context, a “clean” .ibd
    file backup is one for which the following requirements are
    satisfied:
        There are no uncommitted modifications by transactions in the
        .ibd file.
      
        There are no unmerged insert buffer entries in the
        .ibd file.
      
        Purge has removed all delete-marked index records from the
        .ibd file.
      
        mysqld has flushed all modified pages of the
        .ibd file from the buffer pool to the file.
    You can make a clean backup .ibd file using the
    following method:
Stop all activity from the mysqld server and commit all transactions.
        Wait until SHOW
        ENGINE INNODB STATUS shows that there are no active
        transactions in the database, and the main thread status of
        InnoDB is Waiting for server
        activity. Then you can make a copy of the
        .ibd file.
    Another method for making a clean copy of an
    .ibd file is to use the MySQL Enterprise Backup
    product:
        Use MySQL Enterprise Backup to back up the
        InnoDB installation.
      
        Start a second mysqld server on the backup
        and let it clean up the .ibd files in the
        backup.
You can use mysqldump to dump your tables on one machine and then import the dump files on the other machine. Using this method, it does not matter whether the formats differ or if your tables contain floating-point data.
One way to increase the performance of this method is to switch off autocommit mode when importing data, assuming that the tablespace has enough space for the big rollback segment that the import transactions generate. Do the commit only after importing a whole table or a segment of a table.
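      A sketch of this approach from the mysql command line, assuming a dump
      file produced by mysqldump (the file path is hypothetical):

mysql> SET autocommit=0;
mysql> SOURCE /path/to/dump_file.sql
mysql> COMMIT;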
By default, connection to the MySQL server begins with autocommit mode enabled, which automatically commits every SQL statement as you execute it. This mode of operation might be unfamiliar if you have experience with other database systems, where it is standard practice to issue a sequence of DML statements and commit them or roll them back all together.
      To use multiple-statement
      transactions, switch
      autocommit off with the SQL statement SET autocommit =
      0 and end each transaction with
      COMMIT or
      ROLLBACK as
      appropriate. To leave autocommit on, begin each transaction with
      START
      TRANSACTION and end it with
      COMMIT or
      ROLLBACK. The
      following example shows two transactions. The first is committed;
      the second is rolled back.
    
shell> mysql test
mysql> CREATE TABLE customer (a INT, b CHAR (20), INDEX (a));
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do a transaction with autocommit turned on.
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (10, 'Heikki');
Query OK, 1 row affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do another transaction with autocommit turned off.
mysql> SET autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (15, 'John');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO customer VALUES (20, 'Paul');
Query OK, 1 row affected (0.00 sec)
mysql> DELETE FROM customer WHERE b = 'Heikki';
Query OK, 1 row affected (0.00 sec)
mysql> -- Now we undo those last 2 inserts and the delete.
mysql> ROLLBACK;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM customer;
+------+--------+
| a    | b      |
+------+--------+
|   10 | Heikki |
+------+--------+
1 row in set (0.00 sec)
mysql>
      In APIs such as PHP, Perl DBI, JDBC, ODBC, or the standard C call
      interface of MySQL, you can send transaction control statements
      such as COMMIT to the MySQL server
      as strings just like any other SQL statements such as
      SELECT or
      INSERT. Some APIs also offer
      separate special transaction commit and rollback functions or
      methods.
    If you have existing tables, and applications that use them, that
    you want to convert to InnoDB for better
    reliability and scalability, use the following guidelines and tips.
    This section assumes most such tables were originally
    MyISAM, which was formerly the default.
    As you transition away from MyISAM tables, lower
    the value of the key_buffer_size
    configuration option to free memory no longer needed for caching
    results. Increase the value of the
    innodb_buffer_pool_size
    configuration option, which performs a similar role of allocating
    cache memory for InnoDB tables. The
    InnoDB buffer
    pool caches both table data and index data, so it does double
    duty in speeding up lookups for queries and keeping query results in
    memory for reuse.
Allocate as much memory to this option as you can afford, often up to 80% of physical memory on the server.
        If the operating system runs short of memory for other processes
        and begins to swap, reduce the
        innodb_buffer_pool_size value.
        Swapping is such an expensive operation that it drastically
        reduces the benefit of the cache memory.
      
        If the innodb_buffer_pool_size
        value is several gigabytes or higher, consider increasing the
        values of
        innodb_buffer_pool_instances.
        Doing so helps on busy servers where many connections are
        reading data into the cache at the same time.
      
        On a busy server, run benchmarks with the Query Cache turned
        off. The InnoDB buffer pool provides similar
        benefits, so the Query Cache might be tying up memory
        unnecessarily.
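    A my.cnf sketch reflecting these guidelines might look like the following
    (the sizes are illustrative and depend on the memory available on your
    server):

[mysqld]
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8
key_buffer_size=32M
query_cache_size=0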
    Because MyISAM tables do not support
    transactions, you might not
    have paid much attention to the
    autocommit configuration option and
    the COMMIT and
    ROLLBACK
    statements. These keywords are important to allow multiple sessions
    to read and write InnoDB tables concurrently,
    providing substantial scalability benefits in write-heavy workloads.
  
While a transaction is open, the system keeps a snapshot of the data as seen at the beginning of the transaction, which can cause substantial overhead if the system inserts, updates, and deletes millions of rows while a stray transaction keeps running. Thus, take care to avoid transactions that run for too long:
        If you are using a mysql session for
        interactive experiments, always
        COMMIT (to finalize the changes)
        or ROLLBACK
        (to undo the changes) when finished. Close down interactive
        sessions rather than leaving them open for long periods, to
        avoid keeping transactions open for long periods by accident.
      
        Make sure that any error handlers in your application also
        ROLLBACK
        incomplete changes or COMMIT
        completed changes.
      
        ROLLBACK is a
        relatively expensive operation, because
        INSERT,
        UPDATE, and
        DELETE operations are written to
        InnoDB tables prior to the
        COMMIT, with the expectation that
        most changes will be committed successfully and rollbacks will
        be rare. When experimenting with large volumes of data, avoid
        making changes to large numbers of rows and then rolling back
        those changes.
      
        When loading large volumes of data with a sequence of
        INSERT statements, periodically
        COMMIT the results to avoid
        having transactions that last for hours. In typical load
        operations for data warehousing, if something goes wrong, you
        TRUNCATE TABLE and start over
        from the beginning rather than doing a
        ROLLBACK.
    The preceding tips save memory and disk space that can be wasted
    during too-long transactions. When transactions are shorter than
    they should be, the problem is excessive I/O. With each
    COMMIT, MySQL makes sure each change
    is safely recorded to disk, which involves some I/O.
        For most operations on InnoDB tables, you
        should use the setting
        autocommit=0. From an
        efficiency perspective, this avoids unnecessary I/O when you
        issue large numbers of consecutive
        INSERT,
        UPDATE, or
        DELETE statements. From a safety
        perspective, this allows you to issue a
        ROLLBACK
        statement to recover lost or garbled data if you make a mistake
        on the mysql command line, or in an exception
        handler in your application.
      
        The time when autocommit=1 is
        suitable for InnoDB tables is when running a
        sequence of queries for generating reports or analyzing
        statistics. In this situation, there is no I/O penalty related
        to COMMIT or
        ROLLBACK, and
        InnoDB can
        automatically optimize
        the read-only workload.
      
        If you make a series of related changes, finalize all those
        changes at once with a single
        COMMIT at the end. For example,
        if you insert related pieces of information into several tables,
        do a single COMMIT after making
        all the changes. Or if you run many consecutive
        INSERT statements, do a single
        COMMIT after all the data is
        loaded; if you are doing millions of
        INSERT statements, perhaps split
        up the huge transaction by issuing a
        COMMIT every ten thousand or
        hundred thousand records, so the transaction does not grow too
        large.
      
        Remember that even a SELECT
        statement opens a transaction, so after running some report or
        debugging queries in an interactive mysql
        session, either issue a COMMIT or
        close the mysql session.
    You might see warning messages referring to “deadlocks”
    in the MySQL error log, or the output of
    SHOW ENGINE INNODB
    STATUS. Despite the scary-sounding name, a
    deadlock is not a serious issue
    for InnoDB tables, and often does not require any
    corrective action. When two transactions start modifying multiple
    tables, accessing the tables in a different order, they can reach a
    state where each transaction is waiting for the other and neither
    can proceed. MySQL immediately detects this condition and cancels
    (rolls back) the
    “smaller” transaction, allowing the other to proceed.
  
Your applications do need error-handling logic to restart a transaction that is forcibly cancelled like this. When you re-issue the same SQL statements as before, the original timing issue no longer applies: either the other transaction has already finished and yours can proceed, or the other transaction is still in progress and your transaction waits until it finishes.
    If deadlock warnings occur constantly, you might review the
    application code to reorder the SQL operations in a consistent way,
    or to shorten the transactions. You can test with the
    innodb_print_all_deadlocks option
    enabled to see all deadlock warnings in the MySQL error log, rather
    than only the last warning in the
    SHOW ENGINE INNODB
    STATUS output.
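    For example, because innodb_print_all_deadlocks is a dynamic
    global variable, you might turn it on only while investigating:

SET GLOBAL innodb_print_all_deadlocks=ON;
-- reproduce the workload, then examine the MySQL error log
SET GLOBAL innodb_print_all_deadlocks=OFF;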
    To get the best performance from InnoDB tables,
    you can adjust a number of parameters related to storage layout.
  
    When you convert MyISAM tables that are large,
    frequently accessed, and hold vital data, investigate and consider
    the innodb_file_per_table,
    innodb_file_format, and
    innodb_page_size configuration
    options, and the
    ROW_FORMAT and
    KEY_BLOCK_SIZE clauses of the
    CREATE TABLE statement.
  
    During your initial experiments, the most important setting is
    innodb_file_per_table. When this
    setting is enabled, which is the default as of MySQL 5.6.6, new
    InnoDB tables are implicitly created in
    file-per-table
    tablespaces. In contrast with the InnoDB system
    tablespace, file-per-table tablespaces allow disk space to be
    reclaimed by the operating system when a table is truncated or
    dropped. File-per-table tablespaces also support the
    Barracuda file format and
    associated features such as table compression and off-page storage
    for long variable-length columns. For more information, see
    Section 14.4.4, “InnoDB File-Per-Table Tablespaces”.
    To convert a non-InnoDB table to use
    InnoDB use ALTER
    TABLE:
  
ALTER TABLE table_name ENGINE=InnoDB;
      Do not convert MySQL system tables in the mysql
      database (such as user or
      host) to the InnoDB type.
      This is an unsupported operation. The system tables must always be
      of the MyISAM type.
    You might make an InnoDB table that is a clone of a MyISAM table,
    rather than doing the ALTER TABLE
    conversion, to test the old and new table side-by-side before
    switching.
  
    Create an empty InnoDB table with identical
    column and index definitions. Use SHOW CREATE TABLE
    table_name\G to see the full
    CREATE TABLE statement to use. Change
    the ENGINE clause to
    ENGINE=INNODB.
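    For example, the cloning step might look like the following
    sketch, where t_myisam and its columns are hypothetical:

SHOW CREATE TABLE t_myisam\G
-- Copy the statement that is printed, rename the table,
-- and change the ENGINE clause:
CREATE TABLE t_innodb (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100),
  KEY idx_name (name)
) ENGINE=INNODB;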
    To transfer a large volume of data into an empty
    InnoDB table created as shown in the previous
    section, insert the rows with INSERT INTO
    innodb_table SELECT * FROM
    myisam_table ORDER BY
    primary_key_columns.
    You can also create the indexes for the InnoDB
    table after inserting the data. Historically, creating new secondary
    indexes was a slow operation for InnoDB, but now you can create the
    indexes after the data is loaded with relatively little overhead
    from the index creation step.
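    For example, assuming a hypothetical column c2 in the
    innodb_table shown above, you might add a secondary index only
    after the data is loaded:

ALTER TABLE innodb_table ADD INDEX idx_c2 (c2);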
  
    If you have UNIQUE constraints on secondary keys,
    you can speed up a table import by turning off the uniqueness checks
    temporarily during the import operation:
  
SET unique_checks=0;
... import operation ...
SET unique_checks=1;
    For big tables, this saves disk I/O because
    InnoDB can use its
    change buffer to write
    secondary index records as a batch. Be certain that the data
    contains no duplicate keys.
    unique_checks permits but does not
    require storage engines to ignore duplicate keys.
  
To get better control over the insertion process, you might insert big tables in pieces:
INSERT INTO newtable SELECT * FROM oldtable WHERE yourkey > something AND yourkey <= somethingelse;
After all records have been inserted, you can rename the tables.
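For example, a final swap using the hypothetical table names above might be:

RENAME TABLE oldtable TO oldtable_backup, newtable TO oldtable;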
    During the conversion of big tables, increase the size of the
    InnoDB buffer pool to reduce disk I/O, to a
    maximum of 80% of physical memory. You can also increase the sizes
    of the InnoDB log files.
    If you intend to make several temporary copies of your data in
    InnoDB tables during the conversion process, it
    is recommended that you create the tables in file-per-table
    tablespaces so that you can reclaim the disk space when you drop the
    tables. As mentioned previously, when the
    innodb_file_per_table option is
    enabled, newly created InnoDB tables are
    implicitly created in file-per-table tablespaces.
  
    Whether you convert the MyISAM table directly or
    create a cloned InnoDB table, make sure that you
    have sufficient disk space to hold both the old and new tables
    during the process. InnoDB tables require more
    disk space than MyISAM tables. If an
    ALTER TABLE operation runs out of
    space, it starts a rollback, and that can take hours if it is
    disk-bound. For inserts, InnoDB uses the insert
    buffer to merge secondary index records to indexes in batches. That
    saves a lot of disk I/O. For rollback, no such mechanism is used,
    and the rollback can take 30 times longer than the insertion.
  
In the case of a runaway rollback, if you do not have valuable data in your database, it may be advisable to kill the database process rather than wait for millions of disk I/O operations to complete. For the complete procedure, see Section 14.18.2, “Forcing InnoDB Recovery”.
    The PRIMARY KEY clause is a critical factor
    affecting the performance of MySQL queries and the space usage for
    tables and indexes. Perhaps you have phoned a financial institution
    where you are asked for an account number. If you do not have the
    number, you are asked for a dozen different pieces of information to
    “uniquely identify” yourself. The primary key is like
    that unique account number that lets you get straight down to
    business when querying or modifying the information in a table.
    Every row in the table must have a primary key value, and no two
    rows can have the same primary key value.
  
Here are some guidelines for the primary key, followed by more detailed explanations.
        Declare a PRIMARY KEY for each table.
        Typically, it is the most important column that you refer to in
        WHERE clauses when looking up a single row.
      
        Declare the PRIMARY KEY clause in the
        original CREATE TABLE statement,
        rather than adding it later through an
        ALTER TABLE statement.
      
Choose the column and its data type carefully. Prefer numeric columns over character or string ones.
Consider using an auto-increment column if there is not another stable, unique, non-null, numeric column to use.
An auto-increment column is also a good choice if there is any doubt whether the value of the primary key column could ever change. Changing the value of a primary key column is an expensive operation, possibly involving rearranging data within the table and within each secondary index.
Consider adding a primary key to any table that does not already have one. Use the smallest practical numeric type based on the maximum projected size of the table. This can make each row slightly more compact, which can yield substantial space savings for large tables. The space savings are multiplied if the table has any secondary indexes, because the primary key value is repeated in each secondary index entry. In addition to reducing data size on disk, a small primary key also lets more data fit into the buffer pool, speeding up all kinds of operations and improving concurrency.
    If the table already has a primary key on some longer column, such
    as a VARCHAR, consider adding a new unsigned
    AUTO_INCREMENT column and switching the primary
    key to that, even if that column is not referenced in queries. This
    design change can produce substantial space savings in the secondary
    indexes. You can designate the former primary key columns as
    UNIQUE NOT NULL to enforce the same constraints
    as the PRIMARY KEY clause, that is, to prevent
    duplicate or null values across all those columns.
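    As an illustrative sketch only (the customer table and its
    customer_code column are hypothetical), such a change might be
    written as:

ALTER TABLE customer
  DROP PRIMARY KEY,
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT FIRST,
  ADD PRIMARY KEY (id),
  ADD UNIQUE KEY uk_customer_code (customer_code);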
  
If you spread related information across multiple tables, typically each table uses the same column for its primary key. For example, a personnel database might have several tables, each with a primary key of employee number. A sales database might have some tables with a primary key of customer number, and other tables with a primary key of order number. Because lookups using the primary key are very fast, you can construct efficient join queries for such tables.
    If you leave the PRIMARY KEY clause out entirely,
    MySQL creates an invisible one for you. It is a 6-byte value that
    might be longer than you need, thus wasting space. Because it is
    hidden, you cannot refer to it in queries.
    The extra reliability and scalability features of
    InnoDB do require more disk storage than
    equivalent MyISAM tables. You might change the
    column and index definitions slightly, for better space utilization,
    reduced I/O and memory consumption when processing result sets, and
    better query optimization plans making efficient use of index
    lookups.
  
    If you do set up a numeric ID column for the primary key, use that
    value to cross-reference with related values in any other tables,
    particularly for join queries. For
    example, rather than accepting a country name as input and doing
    queries searching for the same name, do one lookup to determine the
    country ID, then do other queries (or a single join query) to look
    up relevant information across several tables. Rather than storing a
    customer or catalog item number as a string of digits, potentially
    using up several bytes, convert it to a numeric ID for storing and
    querying. A 4-byte unsigned INT
    column can index over 4 billion items (with the US meaning of
    billion: 1000 million). For the ranges of the different integer
    types, see Section 11.2.1, “Integer Types (Exact Value) - INTEGER, INT, SMALLINT, TINYINT,
        MEDIUMINT, BIGINT”.
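    A hedged illustration, with made-up country and city tables and a
    made-up ID value:

-- One lookup converts the name to a compact numeric ID ...
SELECT country_id FROM country WHERE name = 'Finland';
-- ... and subsequent queries cross-reference rows by the 4-byte ID:
SELECT name, population FROM city WHERE country_id = 83;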
    InnoDB files require more care and planning than
    MyISAM files do:
        You must not delete the ibdata
        files that represent the InnoDB
        system tablespace.
      
        Copying InnoDB tables from one server to another requires
        issuing the FLUSH TABLES ... FOR EXPORT
        statement first, and copying the
        table_name.cfg file together with the
        table_name.ibd file.
      InnoDB provides an optimization that
      significantly improves scalability and performance of SQL
      statements that insert rows into tables with
      AUTO_INCREMENT columns. To use the
      AUTO_INCREMENT mechanism with an
      InnoDB table, an
      AUTO_INCREMENT column
      ai_col must be defined as part of an
      index such that it is possible to perform the equivalent of an
        indexed SELECT MAX(ai_col) lookup on the
        table to obtain the maximum column value. Typically, this is
        achieved by making the column the first column of some table
        index.
      This section provides background information on the original
      (“traditional”) implementation of auto-increment
      locking in InnoDB, explains the configurable
      locking mechanism, documents the parameter for configuring the
      mechanism, and describes its behavior and interaction with
      replication.
        The original implementation of auto-increment handling in
        InnoDB uses the following strategy to prevent
        problems when using the binary log for statement-based
        replication or for certain recovery scenarios.
      
        If you specify an AUTO_INCREMENT column for
        an InnoDB table, the table handle in the
        InnoDB data dictionary contains a special
        counter called the auto-increment counter that is used in
        assigning new values for the column. This counter is stored only
        in main memory, not on disk.
      
        InnoDB uses the following algorithm to
        initialize the auto-increment counter for a table
        t that contains an
        AUTO_INCREMENT column named
        ai_col: After server startup or after opening
        a table that was evicted from the table cache,
        InnoDB executes the equivalent of this
        statement for the first insert into the table:
      
SELECT MAX(ai_col) FROM table_name FOR UPDATE;
        InnoDB increments the value retrieved by the
        statement and assigns it to the column and to the auto-increment
        counter for the table. By default, the value is incremented by
        1. This default can be overridden by the
        auto_increment_increment
        configuration setting.
      
        If the table is empty, InnoDB uses the value
        1. This default can be overridden by the
        auto_increment_offset
        configuration setting.
      
        If a SHOW TABLE STATUS statement
        examines the table t before the
        auto-increment counter is initialized, InnoDB
        initializes but does not increment the value and stores it for
        use by later inserts. This initialization uses a normal
        exclusive-locking read on the table and the lock lasts to the
        end of the transaction.
      
        InnoDB follows the same procedure for
        initializing the auto-increment counter for a freshly created
        table.
      
        After the auto-increment counter has been initialized, if you do
        not explicitly specify a value for an
        AUTO_INCREMENT column,
        InnoDB increments the counter and assigns the
        new value to the column. If you insert a row that explicitly
        specifies the column value, and the value is bigger than the
        current counter value, the counter is set to the specified
        column value.
      
        If a user specifies NULL or
        0 for the AUTO_INCREMENT
        column in an INSERT,
        InnoDB treats the row as if the value was not
        specified and generates a new value for it.
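        For example, assuming a hypothetical table t1 whose
        c1 column is AUTO_INCREMENT, both of the following rows
        receive generated values:

INSERT INTO t1 (c1, c2) VALUES (NULL, 'a'), (0, 'b');
-- Both rows are treated as if no c1 value had been specified,
-- so InnoDB generates a new auto-increment value for each.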
      
The behavior of the auto-increment mechanism is not defined if you assign a negative value to the column, or if the value becomes bigger than the maximum integer that can be stored in the specified integer type.
        When accessing the auto-increment counter,
        InnoDB uses a special table-level
        AUTO-INC lock that it keeps to the end of the
        current SQL statement, not to the end of the transaction. The
        special lock release strategy was introduced to improve
        concurrency for inserts into a table containing an
        AUTO_INCREMENT column. Nevertheless, two
        transactions cannot have the AUTO-INC lock on
        the same table simultaneously, which can have a performance
        impact if the AUTO-INC lock is held for a
        long time. That might be the case for a statement such as
        INSERT INTO t1 ... SELECT ... FROM t2 that
        inserts all rows from one table into another.
      
        InnoDB uses the in-memory auto-increment
        counter as long as the server runs. When the server is stopped
        and restarted, InnoDB reinitializes the
        counter for each table for the first
        INSERT to the table, as described
        earlier.
      
        A server restart also cancels the effect of the
        AUTO_INCREMENT = N
        table option in CREATE TABLE and
        ALTER TABLE statements, which you
        can use with InnoDB tables to set the initial
        counter value or alter the current counter value.
      
        You may see gaps in the sequence of values assigned to the
        AUTO_INCREMENT column if you roll back
        transactions that have generated numbers using the counter.
        As described in the previous section, InnoDB
        uses a special lock called the table-level
        AUTO-INC lock for inserts into tables with
        AUTO_INCREMENT columns. This lock is normally
        held to the end of the statement (not to the end of the
        transaction), to ensure that auto-increment numbers are assigned
        in a predictable and repeatable order for a given sequence of
        INSERT statements.
      
        In the case of statement-based replication, this means that when
        an SQL statement is replicated on a slave server, the same
        values are used for the auto-increment column as on the master
        server. The result of execution of multiple
        INSERT statements is
        deterministic, and the slave reproduces the same data as on the
        master. If auto-increment values generated by multiple
        INSERT statements were
        interleaved, the result of two concurrent
        INSERT statements would be
        nondeterministic, and could not reliably be propagated to a
        slave server using statement-based replication.
      
To make this clear, consider an example that uses this table:
CREATE TABLE t1 (
  c1 INT(11) NOT NULL AUTO_INCREMENT,
  c2 VARCHAR(10) DEFAULT NULL,
  PRIMARY KEY (c1)
) ENGINE=InnoDB;
        Suppose that there are two transactions running, each inserting
        rows into a table with an AUTO_INCREMENT
        column. One transaction is using an
        INSERT ...
        SELECT statement that inserts 1000 rows, and another
        is using a simple INSERT
        statement that inserts one row:
      
Tx1: INSERT INTO t1 (c2) SELECT 1000 rows from another table ...
Tx2: INSERT INTO t1 (c2) VALUES ('xxx');
        InnoDB cannot tell in advance how many rows
        will be retrieved from the SELECT
        in the INSERT statement in Tx1,
        and it assigns the auto-increment values one at a time as the
        statement proceeds. With a table-level lock, held to the end of
        the statement, only one INSERT
        statement referring to table t1 can execute
        at a time, and the generation of auto-increment numbers by
        different statements is not interleaved. The auto-increment
        value generated by the Tx1
        INSERT ...
        SELECT statement will be consecutive, and the (single)
        auto-increment value used by the
        INSERT statement in Tx2 will
        either be smaller or larger than all those used for Tx1,
        depending on which statement executes first.
      
        As long as the SQL statements execute in the same order when
        replayed from the binary log (when using statement-based
        replication, or in recovery scenarios), the results will be the
        same as they were when Tx1 and Tx2 first ran. Thus, table-level
        locks held until the end of a statement make
        INSERT statements using
        auto-increment safe for use with statement-based replication.
        However, those locks limit concurrency and scalability when
        multiple transactions are executing insert statements at the
        same time.
      
        In the preceding example, if there were no table-level lock, the
        value of the auto-increment column used for the
        INSERT in Tx2 depends on
        precisely when the statement executes. If the
        INSERT of Tx2 executes while the
        INSERT of Tx1 is running (rather
        than before it starts or after it completes), the specific
        auto-increment values assigned by the two
        INSERT statements are
        nondeterministic, and may vary from run to run.
      
        InnoDB can avoid using the table-level
        AUTO-INC lock for a class of
        INSERT statements where the
        number of rows is known in advance, and still preserve
        deterministic execution and safety for statement-based
        replication. Further, if you are not using the binary log to
        replay SQL statements as part of recovery or replication, you
        can entirely eliminate use of the table-level
        AUTO-INC lock for even greater concurrency
        and performance, at the cost of permitting gaps in
        auto-increment numbers assigned by a statement and potentially
        having the numbers assigned by concurrently executing statements
        interleaved.
      
        For INSERT statements where the
        number of rows to be inserted is known at the beginning of
        processing the statement, InnoDB quickly
        allocates the required number of auto-increment values without
        taking any lock, but only if there is no concurrent session
        already holding the table-level AUTO-INC lock
        (because that other statement will be allocating auto-increment
        values one-by-one as it proceeds). More precisely, such an
        INSERT statement obtains
        auto-increment values under the control of a mutex (a
        light-weight lock) that is not held until
        the statement completes, but only for the duration of the
        allocation process.
      
        This new locking scheme enables much greater scalability, but it
        does introduce some subtle differences in how auto-increment
        values are assigned compared to the original mechanism. To
        describe the way auto-increment works in
        InnoDB, the following discussion defines some
        terms, and explains how InnoDB behaves using
        different settings of the
        innodb_autoinc_lock_mode
        configuration parameter, which you can set at server startup.
        Additional considerations are described following the
        explanation of auto-increment locking behavior.
      
First, some definitions:
            “INSERT-like”
            statements
          
            All statements that generate new rows in a table, including
            INSERT,
            INSERT ...
            SELECT, REPLACE,
            REPLACE ...
            SELECT, and LOAD
            DATA.
          
“Simple inserts”
            Statements for which the number of rows to be inserted can
            be determined in advance (when the statement is initially
            processed). This includes single-row and multiple-row
            INSERT and
            REPLACE statements that do
            not have a nested subquery, but not
            INSERT
            ... ON DUPLICATE KEY UPDATE.
          
“Bulk inserts”
            Statements for which the number of rows to be inserted (and
            the number of required auto-increment values) is not known
            in advance. This includes
            INSERT ...
            SELECT,
            REPLACE ...
            SELECT, and LOAD
            DATA statements, but not plain
            INSERT. InnoDB will
            assign new values for the AUTO_INCREMENT
            column one at a time as each row is processed.
          
“Mixed-mode inserts”
            These are “simple insert” statements that
            specify the auto-increment value for some (but not all) of
            the new rows. An example follows, where
            c1 is an
            AUTO_INCREMENT column of table
            t1:
          
INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
            Another type of “mixed-mode insert” is
            INSERT
            ... ON DUPLICATE KEY UPDATE, which in the worst
            case is in effect an INSERT
            followed by an UPDATE, where
            the allocated value for the
            AUTO_INCREMENT column may or may not be
            used during the update phase.
        There are three possible settings for the
        innodb_autoinc_lock_mode
        parameter:
            innodb_autoinc_lock_mode = 0
            (“traditional” lock mode)
          
            This lock mode provides the same behavior as before
            innodb_autoinc_lock_mode
            existed. For all
            “INSERT-like”
            statements, a special table-level
            AUTO-INC lock is obtained and held to the
            end of the statement. This assures that the auto-increment
            values assigned by any given statement are consecutive.
          
This lock mode is provided for:
Backward compatibility.
Performance testing.
Working around issues with “mixed-mode inserts”, due to the possible differences in semantics described later.
            innodb_autoinc_lock_mode = 1
            (“consecutive” lock mode)
          
            This is the default lock mode. In this mode, “bulk
            inserts” use the special AUTO-INC
            table-level lock and hold it until the end of the statement.
            This applies to all
            INSERT ...
            SELECT,
            REPLACE ...
            SELECT, and LOAD
            DATA statements. Only one statement holding the
            AUTO-INC lock can execute at a time.
          
            With this lock mode, “simple inserts” (only)
            use a new locking model where a light-weight mutex is used
            during the allocation of auto-increment values, and no
            table-level AUTO-INC lock is used, unless
            an AUTO-INC lock is held by another
            transaction. If another transaction does hold an
            AUTO-INC lock, a “simple
            insert” waits for the AUTO-INC
            lock, as if it too were a “bulk insert”.
          
            This lock mode ensures that, in the presence of
            INSERT statements where the
            number of rows is not known in advance (and where
            auto-increment numbers are assigned as the statement
            progresses), all auto-increment values assigned by any
            “INSERT-like”
            statement are consecutive, and operations are safe for
            statement-based replication.
          
Simply put, the important impact of this lock mode is significantly better scalability. This mode is safe for use with statement-based replication. Further, as with “traditional” lock mode, auto-increment numbers assigned by any given statement are consecutive. In this mode, there is no change in semantics compared to “traditional” mode for any statement that uses auto-increment, with one important exception.
            The exception is for “mixed-mode inserts”,
            where the user provides explicit values for an
            AUTO_INCREMENT column for some, but not
            all, rows in a multiple-row “simple insert”.
            For such inserts, InnoDB will allocate
            more auto-increment values than the number of rows to be
            inserted. However, all automatically assigned values are
            generated consecutively, starting from (and thus higher
            than) the auto-increment value generated by the most
            recently executed previous statement. “Excess” numbers are
            lost.
          
            innodb_autoinc_lock_mode = 2
            (“interleaved” lock mode)
          
            In this lock mode, no
            “INSERT-like”
            statements use the table-level AUTO-INC
            lock, and multiple statements can execute at the same time.
            This is the fastest and most scalable lock mode, but it is
            not safe when using statement-based
            replication or recovery scenarios when SQL statements are
            replayed from the binary log.
          
            In this lock mode, auto-increment values are guaranteed to
            be unique and monotonically increasing across all
            concurrently executing
            “INSERT-like”
            statements. However, because multiple statements can be
            generating numbers at the same time (that is, allocation of
            numbers is interleaved across
            statements), the values generated for the rows inserted by
            any given statement may not be consecutive.
          
If the only statements executing are “simple inserts” where the number of rows to be inserted is known ahead of time, there will be no gaps in the numbers generated for a single statement, except for “mixed-mode inserts”. However, when “bulk inserts” are executed, there may be gaps in the auto-increment values assigned by any given statement.
        The auto-increment locking modes provided by
        innodb_autoinc_lock_mode have
        several usage implications:
Using auto-increment with replication
            If you are using statement-based replication, set
            innodb_autoinc_lock_mode to
            0 or 1 and use the same value on the master and its slaves.
            Auto-increment values are not ensured to be the same on the
            slaves as on the master if you use
            innodb_autoinc_lock_mode =
            2 (“interleaved”) or configurations where the
            master and slaves do not use the same lock mode.
          
If you are using row-based or mixed-format replication, all of the auto-increment lock modes are safe, since row-based replication is not sensitive to the order of execution of the SQL statements (and the mixed format uses row-based replication for any statements that are unsafe for statement-based replication).
“Lost” auto-increment values and sequence gaps
            In all lock modes (0, 1, and 2), if a transaction that
            generated auto-increment values rolls back, those
            auto-increment values are “lost”. Once a value
            is generated for an auto-increment column, it cannot be
            rolled back, whether or not the
            “INSERT-like”
            statement is completed, and whether or not the containing
            transaction is rolled back. Such lost values are not reused.
            Thus, there may be gaps in the values stored in an
            AUTO_INCREMENT column of a table.
          
Gaps in auto-increment values for “bulk inserts”
            With
            innodb_autoinc_lock_mode
            set to 0 (“traditional”) or 1
            (“consecutive”), the auto-increment values
            generated by any given statement will be consecutive,
            without gaps, because the table-level
            AUTO-INC lock is held until the end of
            the statement, and only one such statement can execute at a
            time.
          
            With
            innodb_autoinc_lock_mode
            set to 2 (“interleaved”), there may be gaps in
            the auto-increment values generated by “bulk
            inserts,” but only if there are concurrently
            executing
            “INSERT-like”
            statements.
          
For lock modes 1 or 2, gaps may occur between successive statements because for bulk inserts the exact number of auto-increment values required by each statement may not be known and overestimation is possible.
Auto-increment values assigned by “mixed-mode inserts”
            Consider a “mixed-mode insert,” where a
            “simple insert” specifies the auto-increment
            value for some (but not all) resulting rows. Such a
            statement will behave differently in lock modes 0, 1, and 2.
            For example, assume c1 is an
            AUTO_INCREMENT column of table
            t1, and that the most recent
            automatically generated sequence number is 100. Consider the
            following “mixed-mode insert” statement:
          
INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
            With
            innodb_autoinc_lock_mode
            set to 0 (“traditional”), the four new rows
            will be:
          
+-----+------+
| c1  | c2   |
+-----+------+
|   1 | a    |
| 101 | b    |
|   5 | c    |
| 102 | d    |
+-----+------+
            The next available auto-increment value will be 103 because
            the auto-increment values are allocated one at a time, not
            all at once at the beginning of statement execution. This
            result is true whether or not there are concurrently
            executing
            “INSERT-like”
            statements (of any type).
          
            With
            innodb_autoinc_lock_mode
            set to 1 (“consecutive”), the four new rows
            will also be:
          
+-----+------+
| c1  | c2   |
+-----+------+
|   1 | a    |
| 101 | b    |
|   5 | c    |
| 102 | d    |
+-----+------+
            However, in this case, the next available auto-increment
            value will be 105, not 103 because four auto-increment
            values are allocated at the time the statement is processed,
            but only two are used. This result is true whether or not
            there are concurrently executing
            “INSERT-like”
            statements (of any type).
          
            With
            innodb_autoinc_lock_mode
            set to mode 2 (“interleaved”), the four new
            rows will be:
          
+-----+------+
| c1  | c2   |
+-----+------+
|   1 | a    |
|   x | b    |
|   5 | c    |
|   y | d    |
+-----+------+
            The values of x and
            y will be unique and larger than
            any previously generated rows. However, the specific values
            of x and
            y will depend on the number of
            auto-increment values generated by concurrently executing
            statements.
          
Finally, consider the following statement, issued when the most-recently generated sequence number was the value 4:
INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
            With any
            innodb_autoinc_lock_mode
            setting, this statement will generate a duplicate-key error
            23000 (Can't write; duplicate key in
            table) because 5 will be allocated for the row
            (NULL, 'b') and insertion of the row
            (5, 'c') will fail.
This section describes differences in the InnoDB storage engine's handling of foreign keys as compared with that of the MySQL Server.
      Foreign key definitions for InnoDB tables are
      subject to the following conditions:
          InnoDB permits a foreign key to reference
          any index column or group of columns. However, in the
          referenced table, there must be an index where the referenced
          columns are listed as the first columns
          in the same order.
        
          InnoDB does not currently support
          foreign keys for tables with user-defined partitioning. This
          means that no user-partitioned InnoDB table
          may contain foreign key references or columns referenced by
          foreign keys.
        
          InnoDB allows a foreign key constraint to
          reference a non-unique key. This is an
          InnoDB extension to standard
          SQL.
      Referential actions for foreign keys of InnoDB
      tables are subject to the following conditions:
          While SET DEFAULT is allowed by the MySQL
          Server, it is rejected as invalid by
          InnoDB. CREATE
          TABLE and ALTER TABLE
          statements using this clause are not allowed for InnoDB
          tables.
        
          If there are several rows in the parent table that have the
          same referenced key value, InnoDB acts in
          foreign key checks as if the other parent rows with the same
          key value do not exist. For example, if you have defined a
          RESTRICT type constraint, and there is a
          child row with several parent rows, InnoDB
          does not permit the deletion of any of those parent rows.
        
          InnoDB performs cascading operations
          through a depth-first algorithm, based on records in the
          indexes corresponding to the foreign key constraints.
        
          If ON UPDATE CASCADE or ON UPDATE
          SET NULL recurses to update the same
          table it has previously updated during the cascade,
          it acts like RESTRICT. This means that you
          cannot use self-referential ON UPDATE
          CASCADE or ON UPDATE SET NULL
          operations. This is to prevent infinite loops resulting from
          cascaded updates. A self-referential ON DELETE SET
          NULL, on the other hand, is possible, as is a
          self-referential ON DELETE CASCADE.
          Cascading operations may not be nested more than 15 levels
          deep.
        
          Like MySQL in general, in an SQL statement that inserts,
          deletes, or updates many rows, InnoDB
          checks UNIQUE and FOREIGN
          KEY constraints row-by-row. When performing foreign
          key checks, InnoDB sets shared row-level
          locks on child or parent records it has to look at.
          InnoDB checks foreign key constraints
          immediately; the check is not deferred to transaction commit.
          According to the SQL standard, the default behavior should be
          deferred checking. That is, constraints are only checked after
          the entire SQL statement has been
          processed. Until InnoDB implements deferred
          constraint checking, some things will be impossible, such as
          deleting a record that refers to itself using a foreign key.
      You can obtain general information about foreign keys and their
      usage by querying the
      INFORMATION_SCHEMA.KEY_COLUMN_USAGE
      table. Information more specific to
      InnoDB tables can be found in the
      INNODB_SYS_FOREIGN and
      INNODB_SYS_FOREIGN_COLS tables, also
      in the INFORMATION_SCHEMA database. See also
      Section 13.1.17.3, “Using FOREIGN KEY Constraints”.
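      For example, a query along the following lines (the schema name
      is a placeholder) lists each foreign key column and the column
      it references:

SELECT TABLE_NAME, COLUMN_NAME,
       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
  FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
 WHERE TABLE_SCHEMA = 'test'
   AND REFERENCED_TABLE_NAME IS NOT NULL;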
    
      In addition to SHOW ERRORS, in the
      event of a foreign key error involving InnoDB
      tables (usually Error 150 in the MySQL Server), you can obtain a
      detailed explanation of the most recent InnoDB
      foreign key error by checking the output of
      SHOW ENGINE INNODB
      STATUS.
        Do not convert MySQL system tables in the
        mysql database from MyISAM
        to InnoDB tables. This is an unsupported
        operation. If you do this, MySQL does not restart until you
        restore the old system tables from a backup or re-generate them
        with the mysql_install_db program.
        
        
        It is not a good idea to configure InnoDB to
        use data files or log files on NFS volumes. Otherwise, the files
        might be locked by other processes and become unavailable for
        use by MySQL.
A table can contain a maximum of 1017 columns (raised in MySQL 5.6.9 from the earlier limit of 1000).
A table can contain a maximum of 64 secondary indexes.
          By default, an index key for a single-column index can be up
          to 767 bytes. The same length limit applies to any index key
          prefix. See Section 13.1.13, “CREATE INDEX Syntax”. For example, you
          might hit this limit with a
          column prefix index
          of more than 255 characters on a TEXT or
          VARCHAR column, assuming a UTF-8 character
          set and the maximum of 3 bytes for each character. When the
          innodb_large_prefix
          configuration option is enabled, this length limit is raised
          to 3072 bytes, for InnoDB tables that use
          the
          DYNAMIC
          and
          COMPRESSED
          row formats.
        
          Attempting to use an index prefix length that is greater than
          the allowed maximum value produces an error. To avoid such
          errors for replication configurations, avoid setting the
          innodb_large_prefix option on
          the master if it cannot also be set on the slaves, and the
          slaves have unique indexes that could be affected by this
          limit.
        
          The InnoDB internal maximum key length is
          3500 bytes, but MySQL itself restricts this to 3072 bytes.
          This limit applies to the length of the combined index key in
          a multi-column index.
        
          If you reduce the InnoDB
          page size to 8KB or 4KB
          by specifying the
          innodb_page_size option when
          creating the MySQL instance, the maximum length of the index
          key is lowered proportionally, based on the limit of 3072
          bytes for a 16KB page size. That is, the maximum index key
          length is 1536 bytes when the page size is 8KB, and 768 bytes
          when the page size is 4KB.
        
          The maximum row length, except for variable-length columns
          (VARBINARY,
          VARCHAR,
          BLOB and
          TEXT), is slightly less than
          half of a database page. That is, the maximum row length is
          about 8000 bytes for the default page size of 16KB; if you
          reduce the page size by specifying the
          innodb_page_size option when
          creating the MySQL instance, the maximum row length is 4000
          bytes for 8KB pages and 2000 bytes for 4KB pages.
          LONGBLOB and
          LONGTEXT
          columns must be less than 4GB, and the total row length,
          including BLOB and
          TEXT columns, must be less than
          4GB.
        
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 14.9.2, “File Space Management”.
          Although InnoDB supports row sizes larger
          than 65,535 bytes internally, MySQL itself imposes a row-size
          limit of 65,535 for the combined size of all columns:
        
mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the used
table type, not counting BLOBs, is 65535. You have to change some columns
to TEXT or BLOBs
See Section D.10.4, “Limits on Table Column Count and Row Size”.
          On some older operating systems, files must be less than 2GB.
          This is not a limitation of InnoDB itself,
          but if you require a large tablespace, you will need to
          configure it using several smaller data files rather than one
          large data file.
        
          The combined size of the InnoDB log files
          can be up to 512GB.
        
The minimum tablespace size is slightly larger than 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
          
          
          The default database page size in InnoDB is
          16KB, or you can lower the page size to 8KB or 4KB by
          specifying the
          innodb_page_size option when
          creating the MySQL instance.
            Increasing the page size is not a supported operation: there
            is no guarantee that InnoDB will function
            normally with a page size greater than 16KB. Problems
            compiling or running InnoDB may occur. In particular,
            ROW_FORMAT=COMPRESSED in the Barracuda
            file format assumes that the page size is at most 16KB and
            uses 14-bit pointers.
          
            A MySQL instance using a particular
            InnoDB page size cannot use data files or
            log files from an instance that uses a different page size.
            This limitation could affect restore or downgrade operations
            using data from MySQL 5.6, which does support page sizes
            other than 16KB.
          InnoDB tables support
          FULLTEXT indexes, starting in MySQL 5.6.4.
          See Section 14.2.6.3, “InnoDB FULLTEXT Indexes” for details.
        
          InnoDB tables support spatial data types,
          but not indexes on them.
          ANALYZE TABLE determines index
          cardinality (as displayed in the
          Cardinality column of
          SHOW INDEX output) by doing
          random dives to each
          of the index trees and updating index cardinality estimates
          accordingly. Because these are only estimates, repeated runs
          of ANALYZE TABLE could produce
          different numbers. This makes ANALYZE
          TABLE fast on InnoDB tables but
          not 100% accurate because it does not take all rows into
          account.
        
          You can make the
          statistics collected by
          ANALYZE TABLE more precise and
          more stable by turning on the
          innodb_stats_persistent
          configuration option, as explained in
          Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”. When that setting
          is enabled, it is important to run
          ANALYZE TABLE after major
          changes to indexed column data, because the statistics are not
          recalculated periodically (such as after a server restart) as
          they traditionally have been.
        
          You can change the number of random dives by modifying the
          innodb_stats_persistent_sample_pages
          system variable (if the persistent statistics setting is
          turned on), or the
          innodb_stats_transient_sample_pages
          system variable (if the persistent statistics setting is
          turned off).
        
          MySQL uses index cardinality estimates only in join
          optimization. If some join is not optimized in the right way,
          you can try using ANALYZE
          TABLE. In the few cases that
          ANALYZE TABLE does not produce
          values good enough for your particular tables, you can use
          FORCE INDEX with your queries to force the
          use of a particular index, or set the
          max_seeks_for_key system
          variable to ensure that MySQL prefers index lookups over table
          scans. See Section 5.1.4, “Server System Variables”, and
          Section B.5.6, “Optimizer-Related Issues”.
        
          If statements or transactions are running on a table and
          ANALYZE TABLE is run on the
          same table followed by a second ANALYZE
          TABLE operation, the second
          ANALYZE TABLE operation is
          blocked until the statements or transactions are completed.
           This behavior occurs because ANALYZE
          TABLE marks the currently loaded table definition as
          obsolete when ANALYZE TABLE is
          finished running. New statements or transactions (including a
          second ANALYZE TABLE statement)
          must load the new table definition into the table cache, which
          cannot occur until currently running statements or
          transactions are completed and the old table definition is
          purged. Loading multiple concurrent table definitions is not
          supported.
        
          SHOW TABLE STATUS does not give
          accurate statistics on InnoDB tables,
          except for the physical size reserved by the table. The row
          count is only a rough estimate used in SQL optimization.
        
          InnoDB does not keep an internal count of
          rows in a table because concurrent transactions might
          “see” different numbers of rows at the same time.
          To process a SELECT COUNT(*) FROM t
          statement, InnoDB scans an index of the
          table, which takes some time if the index is not entirely in
          the buffer pool. If your table does not change often, using
          the MySQL query cache is a good solution. To get a fast count,
          you have to use a counter table you create yourself and let
          your application update it according to the inserts and
          deletes it does. If an approximate row count is sufficient,
          SHOW TABLE STATUS can be used.
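          A hedged sketch of such an application-maintained counter
          (the table names are hypothetical):

CREATE TABLE t_row_count (cnt BIGINT UNSIGNED NOT NULL) ENGINE=InnoDB;
INSERT INTO t_row_count VALUES (0);

-- Whenever the application inserts or deletes rows in t,
-- it adjusts the counter within the same transaction:
UPDATE t_row_count SET cnt = cnt + 1;   -- after an INSERT into t
UPDATE t_row_count SET cnt = cnt - 1;   -- after a DELETE from t

-- A fast count then reads the single counter row:
SELECT cnt FROM t_row_count;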
        
          On Windows, InnoDB always stores database
          and table names internally in lowercase. To move databases in
          a binary format from Unix to Windows or from Windows to Unix,
          create all databases and tables using lowercase names.
        
          An AUTO_INCREMENT column
          ai_col must be defined as part of
          an index such that it is possible to perform the equivalent of
          an indexed SELECT MAX(ai_col) lookup
          on the table to obtain the maximum column value. Typically,
          this is achieved by making the column the first column of
          some table index.
          InnoDB sets an exclusive lock on the end of
          the index associated with the
          AUTO_INCREMENT column while initializing a
          previously specified AUTO_INCREMENT column
          on a table.
        
          With
          innodb_autoinc_lock_mode=0,
          InnoDB uses a special
          AUTO-INC table lock mode where the lock is
          obtained and held to the end of the current SQL statement
          while accessing the auto-increment counter. Other clients
          cannot insert into the table while the
          AUTO-INC table lock is held. The same
          behavior occurs for “bulk inserts” with
          innodb_autoinc_lock_mode=1.
          Table-level AUTO-INC locks are not used
          with
          innodb_autoinc_lock_mode=2.
          For more information, see
          Section 14.5.5, “AUTO_INCREMENT Handling in InnoDB”.
        
          When you restart the MySQL server, InnoDB
          may reuse an old value that was generated for an
          AUTO_INCREMENT column but never stored
          (that is, a value that was generated during an old transaction
          that was rolled back).
        
          When an AUTO_INCREMENT integer column runs
          out of values, a subsequent INSERT
          operation returns a duplicate-key error. This is general MySQL
          behavior, similar to how MyISAM works.
        
          DELETE FROM tbl_name does not
          regenerate the table but instead deletes all rows, one by one.
Currently, cascaded foreign key actions do not activate triggers.
          You cannot create a table with a column name that matches the
          name of an internal InnoDB column (including
          DB_ROW_ID, DB_TRX_ID,
          DB_ROLL_PTR, and
          DB_MIX_ID). The server reports error 1005
          and refers to error −1 in the error message. This
          restriction applies only to use of the names in uppercase.
          LOCK TABLES acquires two locks
          on each table if innodb_table_locks=1 (the
          default). In addition to a table lock on the MySQL layer, it
          also acquires an InnoDB table lock.
          Versions of MySQL before 4.1.2 did not acquire
          InnoDB table locks; the old behavior can be
          selected by setting innodb_table_locks=0.
          If no InnoDB table lock is acquired,
          LOCK TABLES completes even if
          some records of the tables are being locked by other
          transactions.
        
          In MySQL 5.6,
          innodb_table_locks=0 has no
          effect for tables locked explicitly with
          LOCK TABLES ...
          WRITE. It does have an effect for tables locked for
          read or write by
          LOCK TABLES ...
          WRITE implicitly (for example, through triggers) or
          by LOCK TABLES
          ... READ.
        
          All InnoDB locks held by a transaction are
          released when the transaction is committed or aborted. Thus,
          it does not make much sense to invoke
          LOCK TABLES on
          InnoDB tables in
          autocommit=1 mode because the
          acquired InnoDB table locks would be
          released immediately.
        
          You cannot lock additional tables in the middle of a
          transaction because LOCK TABLES
          performs an implicit COMMIT and
          UNLOCK
          TABLES.
        
The limit of 1023 concurrent data-modifying transactions has been raised in MySQL 5.5 and above. The limit is now 128 * 1023 concurrent transactions that generate undo records. You can remove any workarounds that require changing the proper structure of your transactions, such as committing more frequently.
By using the SQL syntax and MySQL configuration options for compression, you can create tables where the data is stored in compressed form. Compression can help to improve both raw performance and scalability. The compression means less data is transferred between disk and memory, and takes up less space on disk and in memory. The benefits are amplified for tables with secondary indexes, because index data is compressed also. Compression can be especially important for SSD storage devices, because they tend to have lower capacity than HDD devices.
Because processors and cache memories have increased in speed more than disk storage devices, many workloads are disk-bound. Data compression enables smaller database size, reduced I/O, and improved throughput, at the small cost of increased CPU utilization. Compression is especially valuable for read-intensive applications, on systems with enough RAM to keep frequently used data in memory.
      An InnoDB table created with
      ROW_FORMAT=COMPRESSED can use a smaller
      page size on disk than the
      usual 16KB default. Smaller pages require less I/O to read from
      and write to disk, which is especially valuable for
      SSD devices.
    
      The page size is specified through the
      KEY_BLOCK_SIZE parameter. The different page
      size means the table must be in its own .ibd
      file rather than in the
      system tablespace,
      which requires enabling the
      innodb_file_per_table option. The level of
      compression is the same regardless of the
      KEY_BLOCK_SIZE value. As you specify smaller
      values for KEY_BLOCK_SIZE, you get the I/O
      benefits of increasingly smaller pages. But if you specify a value
      that is too small, there is additional overhead to reorganize the
      pages when data values cannot be compressed enough to fit multiple
      rows in each page. There is a hard limit on how small
      KEY_BLOCK_SIZE can be for a table, based on the
      lengths of the key columns for each of its indexes. Specify a
      value that is too small, and the CREATE
      TABLE or ALTER TABLE
      statement fails.
    
      In the buffer pool, the compressed data is held in small pages,
      with a page size based on the KEY_BLOCK_SIZE
      value. For extracting or updating the column values, MySQL also
      creates a 16KB page in the buffer pool with the uncompressed data.
      Within the buffer pool, any updates to the uncompressed page are
      also re-written back to the equivalent compressed page. You might
      need to size your buffer pool to accommodate the additional data
      of both compressed and uncompressed pages, although the
      uncompressed pages are
      evicted from the buffer pool
      when space is needed, and then uncompressed again on the next
      access.
      Before creating a compressed table, make sure the
      innodb_file_per_table
      configuration option is enabled, and
      innodb_file_format is set to
      Barracuda. You can set these parameters in the
      MySQL configuration
      file my.cnf or
      my.ini, or with the SET
      statement without shutting down the MySQL server.
    
      To enable compression for a table, you use the clauses
      ROW_FORMAT=COMPRESSED,
      KEY_BLOCK_SIZE, or both in a
      CREATE TABLE or
      ALTER TABLE statement.
    
To create a compressed table, you might use statements like these:
SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=Barracuda;
CREATE TABLE t1
 (c1 INT PRIMARY KEY)
 ROW_FORMAT=COMPRESSED
 KEY_BLOCK_SIZE=8;
          If you specify ROW_FORMAT=COMPRESSED, you
          can omit KEY_BLOCK_SIZE; the default page
          size value is used, which is half the
          innodb_page_size value.
        
          If you specify KEY_BLOCK_SIZE, you can omit
          ROW_FORMAT=COMPRESSED; compression is
          enabled automatically.
        
          To determine the best value for
          KEY_BLOCK_SIZE, typically you create
          several copies of the same table with different values for
          this clause, then measure the size of the resulting
          .ibd files and see how well each performs
          with a realistic
          workload.
        
          The KEY_BLOCK_SIZE value is treated as a
          hint; a different size could be used by
          InnoDB if necessary. A value of 0
          represents the default compressed page size, which is half of
          the innodb_page_size value.
          The KEY_BLOCK_SIZE can only be less than or
          equal to the innodb_page_size
          value. If you specify a value greater than the
          innodb_page_size value, the
          specified value is ignored, a warning is issued, and
          KEY_BLOCK_SIZE is set to half of the
          innodb_page_size value. If
          innodb_strict_mode=ON,
          specifying an invalid KEY_BLOCK_SIZE value
          returns an error.
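For example, the following statements (the table names are arbitrary) show the two shorthand forms described in the first two items above; the first relies on the default compressed page size of half the innodb_page_size value, and the second enables compression implicitly by specifying only KEY_BLOCK_SIZE:

CREATE TABLE t2 (c1 INT PRIMARY KEY) ROW_FORMAT=COMPRESSED;
CREATE TABLE t3 (c1 INT PRIMARY KEY) KEY_BLOCK_SIZE=4;

Both forms require the innodb_file_per_table and innodb_file_format=Barracuda settings shown earlier.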
        
For additional performance-related configuration options, see Section 14.6.3, “Tuning Compression for InnoDB Tables”.
      The default uncompressed size of InnoDB data
      pages is 16KB. Depending on the
      combination of option values, MySQL uses a page size of 1KB, 2KB,
      4KB, 8KB, or 16KB for the .ibd file of the
      table. The actual compression algorithm is not affected by the
      KEY_BLOCK_SIZE value; the value determines how
      large each compressed chunk is, which in turn affects how many
      rows can be packed into each compressed page.
    
      Setting KEY_BLOCK_SIZE equal to the
      InnoDB page
      size does not typically result in much compression. For
      example, setting KEY_BLOCK_SIZE=16 typically
      would not result in much compression, since the normal
      InnoDB page
      size is 16KB. This setting may still be useful for tables
      with many long BLOB,
      VARCHAR or
      TEXT columns, because such values
      often do compress well, and might therefore require fewer
      overflow pages as
      described in Section 14.6.5, “How Compression Works for InnoDB Tables”.
    
      All indexes of a table (including the
      clustered index) are
      compressed using the same page size, as specified in the
      CREATE TABLE or
      ALTER TABLE statement. Table
      attributes such as ROW_FORMAT and
      KEY_BLOCK_SIZE are not part of the
      CREATE INDEX syntax for
      InnoDB tables, and are ignored if they are
      specified (although you see them in the output of the
      SHOW CREATE TABLE statement).
      Because MySQL versions prior to 5.1 cannot process compressed
      tables, using compression requires specifying the configuration
      parameter
      innodb_file_format=Barracuda, to
      avoid accidentally introducing compatibility issues.
    
      Table compression is also not available for the InnoDB
      system tablespace.
      The system tablespace (space 0, the ibdata*
      files) can contain user data, but it also contains internal system
      information, and therefore is never compressed. Thus, compression
      applies only to tables (and indexes) stored in their own
      tablespaces, that is, created with the
      innodb_file_per_table option
      enabled.
    
      Compression applies to an entire table and all its associated
      indexes, not to individual rows, despite the clause name
      ROW_FORMAT.
Most often, the internal optimizations described in InnoDB Data Storage and Compression ensure that the system runs well with compressed data. However, because the efficiency of compression depends on the nature of your data, you can make decisions that affect the performance of compressed tables:
Which tables to compress.
What compressed page size to use.
Whether to adjust the size of the buffer pool based on run-time performance characteristics, such as the amount of time the system spends compressing and uncompressing data. Whether the workload is more like a data warehouse (primarily queries) or an OLTP system (mix of queries and DML).
If the system performs DML operations on compressed tables, and the way the data is distributed leads to expensive compression failures at runtime, you might adjust additional advanced configuration options.
Use the guidelines in this section to help make those architectural and configuration choices. When you are ready to conduct long-term testing and put compressed tables into production, see Section 14.6.4, “Monitoring Compression at Runtime” for ways to verify the effectiveness of those choices under real-world conditions.
In general, compression works best on tables that include a reasonable number of character string columns and where the data is read far more often than it is written. Because there are no guaranteed ways to predict whether or not compression benefits a particular situation, always test with a specific workload and data set running on a representative configuration. Consider the following factors when deciding which tables to compress.
      A key determinant of the efficiency of compression in reducing the
      size of data files is the nature of the data itself. Recall that
      compression works by identifying repeated strings of bytes in a
      block of data. Completely randomized data is the worst case.
      Typical data often has repeated values, and so compresses
      effectively. Character strings often compress well, whether
      defined in CHAR, VARCHAR,
      TEXT or BLOB columns. On the
      other hand, tables containing mostly binary data (integers or
      floating point numbers) or data that is already compressed (for
      example JPEG or PNG images)
      generally do not compress well, if at all.
    
You choose whether to turn on compression for each InnoDB table. A table and all of its indexes use the same (compressed) page size. It might be that the primary key (clustered) index, which contains the data for all columns of a table, compresses more effectively than the secondary indexes. For those cases where there are long rows, the use of compression might result in long column values being stored “off-page”, as discussed in Section 14.8.3, “DYNAMIC and COMPRESSED Row Formats”. Those overflow pages may compress well. Given these considerations, for many applications, some tables compress more effectively than others, and you might find that your workload performs best only with a subset of tables compressed.
      To determine whether or not to compress a particular table,
      conduct experiments. You can get a rough estimate of how
      efficiently your data can be compressed by using a utility that
      implements LZ77 compression (such as gzip or
      WinZip) on a copy of the .ibd
      file for an uncompressed table. You can expect less
      compression from a MySQL compressed table than from file-based
      compression tools, because MySQL compresses data in chunks based
      on the page size, 16KB by
      default. In addition to user data, the page format includes some
      internal system data that is not compressed. File-based
      compression utilities can examine much larger chunks of data, and
      so might find more repeated strings in a huge file than MySQL can
      find in an individual page.
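For example, assuming the data directory layout used in the listing later in this section, you might estimate the best-case compression ratio with shell commands such as these (issued here through the mysql client's \! escape; the file names and paths are only illustrative):

\! cp data/test/big_table.ibd /tmp/big_table_copy.ibd
\! gzip /tmp/big_table_copy.ibd
\! ls -l /tmp/big_table_copy.ibd.gz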
    
      Another way to test compression on a specific table is to copy
      some data from your uncompressed table to a similar, compressed
      table (having all the same indexes) and look at the size of the
      resulting .ibd file. For example:
    
use test;
set global innodb_file_per_table=1;
set global innodb_file_format=Barracuda;
set global autocommit=0;
-- Create an uncompressed table with a million or two rows.
create table big_table as select * from information_schema.columns;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;
alter table big_table add id int unsigned not null primary key auto_increment;
show create table big_table\G
select count(id) from big_table;
-- Check how much space is needed for the uncompressed table.
\! ls -l data/test/big_table.ibd
create table key_block_size_4 like big_table;
alter table key_block_size_4 key_block_size=4 row_format=compressed;
insert into key_block_size_4 select * from big_table;
commit;
-- Check how much space is needed for a compressed table
-- with particular compression settings.
\! ls -l data/test/key_block_size_4.ibd
This experiment produced the following numbers, which of course could vary considerably depending on your table structure and data:
-rw-rw---- 1 cirrus staff 310378496 Jan  9 13:44 data/test/big_table.ibd
-rw-rw---- 1 cirrus staff  83886080 Jan  9 15:10 data/test/key_block_size_4.ibd
To see whether compression is efficient for your particular workload:
          For simple tests, use a MySQL instance with no other
          compressed tables and run queries against the
          INFORMATION_SCHEMA.INNODB_CMP
          table.
        
          For more elaborate tests involving workloads with multiple
          compressed tables, run queries against the
          INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
          table. Because the statistics in the
          INNODB_CMP_PER_INDEX table are expensive to
          collect, you must enable the configuration option
          innodb_cmp_per_index_enabled
          before querying that table, and you might restrict such
          testing to a development server or a non-critical
          slave server.
        
Run some typical SQL statements against the compressed table you are testing.
          Examine the ratio of successful compression operations to
          overall compression operations by querying the
          INFORMATION_SCHEMA.INNODB_CMP or
          INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
          table, and comparing COMPRESS_OPS to
          COMPRESS_OPS_OK.
        
If a high percentage of compression operations complete successfully, the table might be a good candidate for compression.
          If you get a high proportion of
          compression
          failures, you can adjust
          innodb_compression_level,
          innodb_compression_failure_threshold_pct,
          and
          innodb_compression_pad_pct_max
          options as described in
          Section 14.6.6, “Compression for OLTP Workloads”, and try
          further tests.
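For example, a query along these lines (an illustrative sketch using the columns of the INNODB_CMP table) computes the success percentage for each compressed page size in use:

SELECT page_size, compress_ops, compress_ops_ok,
       compress_ops_ok * 100.0 / compress_ops AS pct_ok
  FROM INFORMATION_SCHEMA.INNODB_CMP
 WHERE compress_ops > 0;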
Decide whether to compress data in your application or in the table; do not use both types of compression for the same data. When you compress the data in the application and store the results in a compressed table, extra space savings are extremely unlikely, and the double compression just wastes CPU cycles.
      When enabled, MySQL table compression is automatic and applies to
      all columns and index values. The columns can still be tested with
      operators such as LIKE, and sort operations can
      still use indexes even when the index values are compressed.
      Because indexes are often a significant fraction of the total size
      of a database, compression could result in significant savings in
      storage, I/O or processor time. The compression and decompression
      operations happen on the database server, which likely is a
      powerful system that is sized to handle the expected load.
If you compress data such as text in your application before it is inserted into the database, you might save overhead for data that does not compress well by compressing some columns and not others. This approach uses CPU cycles for compression and uncompression on the client machine rather than the database server, which might be appropriate for a distributed application with many clients, or where the client machine has spare CPU cycles.
Of course, it is possible to combine these approaches. For some applications, it may be appropriate to use some compressed tables and some uncompressed tables. It may be best to externally compress some data (and store it in uncompressed tables) and allow MySQL to compress (some of) the other tables in the application. As always, up-front design and real-life testing are valuable in reaching the right decision.
      In addition to choosing which tables to compress (and the page
      size), the workload is another key determinant of performance. If
      the application is dominated by reads, rather than updates, fewer
      pages need to be reorganized and recompressed after the index page
      runs out of room for the per-page “modification log”
      that MySQL maintains for compressed data. If the updates
      predominantly change non-indexed columns or those containing
      BLOBs or large strings that happen to be stored
      “off-page”, the overhead of compression may be
      acceptable. If the only changes to a table are
      INSERTs that use a monotonically increasing
      primary key, and there are few secondary indexes, there is little
      need to reorganize and recompress index pages. Since MySQL can
      “delete-mark” and delete rows on compressed pages
      “in place” by modifying uncompressed data,
      DELETE operations on a table are relatively
      efficient.
    
For some environments, the time it takes to load data can be as important as run-time retrieval. Especially in data warehouse environments, many tables may be read-only or read-mostly. In those cases, it might or might not be acceptable to pay the price of compression in terms of increased load time, unless the resulting savings in fewer disk reads or in storage cost is significant.
Fundamentally, compression works best when the CPU time is available for compressing and uncompressing data. Thus, if your workload is I/O bound, rather than CPU-bound, you might find that compression can improve overall performance. When you test your application performance with different compression configurations, test on a platform similar to the planned configuration of the production system.
Reading and writing database pages from and to disk is the slowest aspect of system performance. Compression attempts to reduce I/O by using CPU time to compress and uncompress data, and is most effective when I/O is a relatively scarce resource compared to processor cycles.
This is often especially the case when running in a multi-user environment with fast, multi-core CPUs. When a page of a compressed table is in memory, MySQL often uses additional memory, typically 16KB, in the buffer pool for an uncompressed copy of the page. The adaptive LRU algorithm attempts to balance the use of memory between compressed and uncompressed pages to take into account whether the workload is running in an I/O-bound or CPU-bound manner. Still, a configuration with more memory dedicated to the buffer pool tends to run better when using compressed tables than a configuration where memory is highly constrained.
The optimal setting of the compressed page size depends on the type and distribution of data that the table and its indexes contain. The compressed page size should always be bigger than the maximum record size, or operations may fail as noted in Compression of B-Tree Pages.
Setting the compressed page size too large wastes some space, but the pages do not have to be compressed as often. If the compressed page size is set too small, inserts or updates may require time-consuming recompression, and the B-tree nodes may have to be split more frequently, leading to bigger data files and less efficient indexing.
      Typically, you set the compressed page size to 8K or 4K bytes.
      Given that the maximum row size for an InnoDB table is around 8K,
      KEY_BLOCK_SIZE=8 is usually a safe choice.
Overall application performance, CPU and I/O utilization and the size of disk files are good indicators of how effective compression is for your application. This section builds on the performance tuning advice from Section 14.6.3, “Tuning Compression for InnoDB Tables”, and shows how to find problems that might not turn up during initial testing.
To dig deeper into performance considerations for compressed tables, you can monitor compression performance at runtime using the Information Schema tables described in Example 14.10, “Using the Compression Information Schema Tables”. These tables reflect the internal use of memory and the rates of compression used overall.
      The INNODB_CMP table reports
      information about compression activity for each compressed page
      size (KEY_BLOCK_SIZE) in use. The information
      in these tables is system-wide: it summarizes the compression
      statistics across all compressed tables in your database. You can
      use this data to help decide whether or not to compress a table by
      examining these tables when no other compressed tables are being
      accessed. It involves relatively low overhead on the server, so
      you might query it periodically on a production server to check
      the overall efficiency of the compression feature.
    
      The INNODB_CMP_PER_INDEX table
      reports information about compression activity for individual
      tables and indexes. This information is more targeted and more
      useful for evaluating compression efficiency and diagnosing
      performance issues one table or index at a time. (Because
      each InnoDB table is represented as a clustered
      index, MySQL does not make a big distinction between tables and
      indexes in this context.) The
      INNODB_CMP_PER_INDEX table does
      involve substantial overhead, so it is more suitable for
      development servers, where you can compare the effects of
      different workloads, data,
      and compression settings in isolation. To guard against imposing
      this monitoring overhead by accident, you must enable the
      innodb_cmp_per_index_enabled
      configuration option before you can query the
      INNODB_CMP_PER_INDEX table.
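For example, on a test server you might enable the option and then review per-index activity with statements like these (a sketch based on the columns of the INNODB_CMP_PER_INDEX table):

SET GLOBAL innodb_cmp_per_index_enabled=ON;
-- Run the workload you want to measure, then:
SELECT database_name, table_name, index_name,
       compress_ops, compress_ops_ok, uncompress_ops
  FROM INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
 ORDER BY compress_ops DESC;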
    
      The key statistics to consider are the number of, and amount of
      time spent performing, compression and uncompression operations.
      Since MySQL splits B-tree nodes when they are too full to contain
      the compressed data following a modification, compare the number
      of “successful” compression operations with the
      number of such operations overall. Based on the information in the
      INNODB_CMP and
      INNODB_CMP_PER_INDEX tables and
      overall application performance and hardware resource utilization,
      you might make changes in your hardware configuration, adjust the
      size of the buffer pool, choose a different page size, or select a
      different set of tables to compress.
    
If the amount of CPU time required for compressing and uncompressing is high, changing to faster or multi-core CPUs can help improve performance with the same data, application workload and set of compressed tables. Increasing the size of the buffer pool might also help performance, so that more uncompressed pages can stay in memory, reducing the need to uncompress pages that exist in memory only in compressed form.
      A large number of compression operations overall (compared to the
      number of INSERT, UPDATE and
      DELETE operations in your application and the
      size of the database) could indicate that some of your compressed
      tables are being updated too heavily for effective compression. If
      so, choose a larger page size, or be more selective about which
      tables you compress.
    
      If the number of “successful” compression operations
      (COMPRESS_OPS_OK) is a high percentage of the
      total number of compression operations
      (COMPRESS_OPS), then the system is likely
      performing well. If the ratio is low, then MySQL is reorganizing,
      recompressing, and splitting B-tree nodes more often than is
      desirable. In this case, avoid compressing some tables, or
      increase KEY_BLOCK_SIZE for some of the
      compressed tables. You might turn off compression for tables that
      cause the number of “compression failures” in your
      application to be more than 1% or 2% of the total. (Such a failure
      ratio might be acceptable during a temporary operation such as a
      data load).
This section describes some internal implementation details about compression for InnoDB tables. The information presented here may be helpful in tuning for performance, but is not necessary to know for basic use of compression.
Some operating systems implement compression at the file system level. Files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads into fragmentation. Every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make this compression technique unsuitable for use in an update-intensive database system.
MySQL implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm. This compression algorithm is mature, robust, and efficient in both CPU utilization and in reduction of data size. The algorithm is “lossless”, so that the original uncompressed data can always be reconstructed from the compressed form. LZ77 compression works by finding sequences of data that are repeated within the data to be compressed. The patterns of values in your data determine how well it compresses, but typical user data often compresses by 50% or more.
      Unlike compression performed by an application, or compression
      features of some other database management systems, InnoDB
      compression applies both to user data and to indexes. In many
      cases, indexes can constitute 40-50% or more of the total database
      size, so this difference is significant. When compression is
      working well for a data set, the size of the InnoDB data files
      (the .ibd files) is 25% to 50% of the
      uncompressed size or possibly smaller. Depending on the
      workload, this smaller
      database can in turn lead to a reduction in I/O, and an increase
      in throughput, at a modest cost in terms of increased CPU
      utilization. You can adjust the balance between compression level
      and CPU overhead by modifying the
      innodb_compression_level
      configuration option.
All user data in InnoDB tables is stored in pages comprising a B-tree index (the clustered index). In some other database systems, this type of index is called an “index-organized table”. Each row in the index node contains the values of the (user-specified or system-generated) primary key and all the other columns of the table.
Secondary indexes in InnoDB tables are also B-trees, containing pairs of values: the index key and a pointer to a row in the clustered index. The pointer is in fact the value of the primary key of the table, which is used to access the clustered index if columns other than the index key and primary key are required. Secondary index records must always fit on a single B-tree page.
      The compression of B-tree nodes (of both clustered and secondary
      indexes) is handled differently from compression of
      overflow pages used to
      store long VARCHAR, BLOB, or
      TEXT columns, as explained in the following
      sections.
Because they are frequently updated, B-tree pages require special treatment. It is important to minimize the number of times B-tree nodes are split, as well as to minimize the need to uncompress and recompress their content.
One technique MySQL uses is to maintain some system information in the B-tree node in uncompressed form, thus facilitating certain in-place updates. For example, this allows rows to be delete-marked and deleted without any compression operation.
In addition, MySQL attempts to avoid unnecessary uncompression and recompression of index pages when they are changed. Within each B-tree page, the system keeps an uncompressed “modification log” to record changes made to the page. Updates and inserts of small records may be written to this modification log without requiring the entire page to be completely reconstructed.
When the space for the modification log runs out, InnoDB uncompresses the page, applies the changes and recompresses the page. If recompression fails (a situation known as a compression failure), the B-tree nodes are split and the process is repeated until the update or insert succeeds.
      To avoid frequent compression failures in write-intensive
      workloads, such as for OLTP
      applications, MySQL sometimes reserves some empty space (padding)
      in the page, so that the modification log fills up sooner and the
      page is recompressed while there is still enough room to avoid
      splitting it. The amount of padding space left in each page varies
      as the system keeps track of the frequency of page splits. On a
      busy server doing frequent writes to compressed tables, you can
      adjust the
      innodb_compression_failure_threshold_pct
      and
      innodb_compression_pad_pct_max
      configuration options to fine-tune this mechanism.
    
      Generally, MySQL requires that each B-tree page in an InnoDB table
      can accommodate at least two records. For compressed tables, this
      requirement has been relaxed. Leaf pages of B-tree nodes (whether
      of the primary key or secondary indexes) only need to accommodate
      one record, but that record must fit, in uncompressed form, in the
      per-page modification log. If
      innodb_strict_mode is
      ON, MySQL checks the maximum row size during
      CREATE TABLE or
      CREATE INDEX. If the row does not
      fit, the following error message is issued: ERROR HY000:
      Too big row.
    
      If you create a table when
      innodb_strict_mode is OFF, and a
      subsequent INSERT or UPDATE
      statement attempts to create an index entry that does not fit in
      the size of the compressed page, the operation fails with
      ERROR 42000: Row size too large. (This error
      message does not name the index for which the record is too large,
      or mention the length of the index record or the maximum record
      size on that particular index page.) To solve this problem,
      rebuild the table with ALTER TABLE
      and select a larger compressed page size
      (KEY_BLOCK_SIZE), shorten any column prefix
      indexes, or disable compression entirely with
      ROW_FORMAT=DYNAMIC or
      ROW_FORMAT=COMPACT.
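For example, for a hypothetical table t1 originally created with KEY_BLOCK_SIZE=4, either of the following statements might resolve the error; if the table definition retains an explicit KEY_BLOCK_SIZE, you may also need to specify KEY_BLOCK_SIZE=0 when removing compression:

ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
-- or disable compression for the table entirely:
ALTER TABLE t1 ROW_FORMAT=DYNAMIC;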
      In an InnoDB table, BLOB,
      VARCHAR, and
      TEXT columns that are not part of
      the primary key may be stored on separately allocated
      overflow pages. We refer
      to these columns as off-page
      columns. Their values are stored on singly-linked lists of
      overflow pages.
    
      For tables created in ROW_FORMAT=DYNAMIC or
      ROW_FORMAT=COMPRESSED, the values of
      BLOB,
      TEXT, or
      VARCHAR columns may be stored fully
      off-page, depending on their length and the length of the entire
      row. For columns that are stored off-page, the clustered index
      record only contains 20-byte pointers to the overflow pages, one
      per column. Whether any columns are stored off-page depends on the
      page size and the total size of the row. When the row is too long
      to fit entirely within the page of the clustered index, MySQL
      chooses the longest columns for off-page storage until the row
      fits on the clustered index page. As noted above, if a row does
      not fit by itself on a compressed page, an error occurs.
        For tables created in ROW_FORMAT=DYNAMIC or
        ROW_FORMAT=COMPRESSED,
        TEXT and
        BLOB columns that are less than
        or equal to 40 bytes are always stored in-line.
      Tables created in older versions of MySQL use the
      Antelope file format, which
      supports only ROW_FORMAT=REDUNDANT and
      ROW_FORMAT=COMPACT. In these formats, MySQL
      stores the first 768 bytes of BLOB,
      VARCHAR, and
      TEXT columns in the clustered index
      record along with the primary key. The 768-byte prefix is followed
      by a 20-byte pointer to the overflow pages that contain the rest
      of the column value.
    
      When a table is in COMPRESSED format, all data
      written to overflow pages is compressed “as is”; that
      is, MySQL applies the zlib compression algorithm to the entire
      data item. Other than the data, compressed overflow pages contain
      an uncompressed header and trailer comprising a page checksum and
      a link to the next overflow page, among other things. Therefore,
      very significant storage savings can be obtained for longer
      BLOB, TEXT, or
      VARCHAR columns if the data is highly
      compressible, as is often the case with text data. Image data,
      such as JPEG, is typically already compressed
      and so does not benefit much from being stored in a compressed
      table; the double compression can waste CPU cycles for little or
      no space savings.
    
The overflow pages are of the same size as other pages. A row containing ten columns stored off-page occupies ten overflow pages, even if the total length of the columns is only 8K bytes. In an uncompressed table, ten uncompressed overflow pages occupy 160K bytes. In a compressed table with an 8K page size, they occupy only 80K bytes. Thus, it is often more efficient to use compressed table format for tables with long column values.
      Using a 16K compressed page size can reduce storage and I/O costs
      for BLOB,
      VARCHAR, or
      TEXT columns, because such data
      often compresses well, and might therefore require fewer overflow
      pages, even though the B-tree nodes themselves take as many pages
      as in the uncompressed form.
      In a compressed InnoDB table, every compressed page (whether 1K,
      2K, 4K or 8K) corresponds to an uncompressed page of 16K bytes (or
      a smaller size if
      innodb_page_size is set). To
      access the data in a page, MySQL reads the compressed page from
      disk if it is not already in the
      buffer pool, then
      uncompresses the page to its original form. This section describes
      how InnoDB manages the buffer pool with respect to pages of
      compressed tables.
    
To minimize I/O and to reduce the need to uncompress a page, at times the buffer pool contains both the compressed and uncompressed form of a database page. To make room for other required database pages, MySQL can evict from the buffer pool an uncompressed page, while leaving the compressed page in memory. Or, if a page has not been accessed in a while, the compressed form of the page might be written to disk, to free space for other data. Thus, at any given time, the buffer pool might contain both the compressed and uncompressed forms of the page, or only the compressed form of the page, or neither.
MySQL keeps track of which pages to keep in memory and which to evict using a least-recently-used (LRU) list, so that hot (frequently accessed) data tends to stay in memory. When compressed tables are accessed, MySQL uses an adaptive LRU algorithm to achieve an appropriate balance of compressed and uncompressed pages in memory. This adaptive algorithm is sensitive to whether the system is running in an I/O-bound or CPU-bound manner. The goal is to avoid spending too much processing time uncompressing pages when the CPU is busy, and to avoid doing excess I/O when the CPU has spare cycles that can be used for uncompressing compressed pages (that may already be in memory). When the system is I/O-bound, the algorithm prefers to evict the uncompressed copy of a page rather than both copies, to make more room for other disk pages to become memory resident. When the system is CPU-bound, MySQL prefers to evict both the compressed and uncompressed page, so that more memory can be used for “hot” pages, reducing the need to uncompress data that exists in memory only in compressed form.
      Before a compressed page is written to a
      data file, MySQL writes a
      copy of the page to the redo log (if it has been recompressed
      since the last time it was written to the database). This is done
      to ensure that redo logs are usable for
      crash recovery, even in
      the unlikely case that the zlib library is
      upgraded and that change introduces a compatibility problem with
      the compressed data. Therefore, some increase in the size of
      log files, or a need for more
      frequent checkpoints, can
      be expected when using compression. The amount of increase in the
      log file size or checkpoint frequency depends on the number of
      times compressed pages are modified in a way that requires
      reorganization and recompression.
    
Compressed tables use a different file format for the redo log and the per-table tablespaces than in MySQL 5.1 and earlier. The MySQL Enterprise Backup product supports this latest Barracuda file format for compressed InnoDB tables.
      Traditionally, the InnoDB
      compression feature was
      recommended primarily for read-only or read-mostly
      workloads, such as in a
      data warehouse
      configuration. The rise of SSD
      storage devices, which are fast but relatively small and
      expensive, makes compression attractive also for
      OLTP workloads: high-traffic, interactive web
      sites can reduce their storage requirements and their I/O
      operations per second (IOPS) by
      using compressed tables with applications that do frequent
      INSERT,
      UPDATE, and
      DELETE operations.
    
Configuration options introduced in MySQL 5.6 let you adjust the way compression works for a particular MySQL instance, with an emphasis on performance and scalability for write-intensive operations:
          innodb_compression_level lets
          you turn the degree of compression up or down. A higher value
          lets you fit more data onto a storage device, at the expense
          of more CPU overhead during compression. A lower value lets
          you reduce CPU overhead when storage space is not critical, or
          you expect the data is not especially compressible.
        
          innodb_compression_failure_threshold_pct
          specifies a cutoff point for
          compression
          failures during updates to a compressed table. When
          this threshold is passed, MySQL begins to leave additional
          free space within each new compressed page, dynamically
          adjusting the amount of free space up to the percentage of
          page size specified by
          innodb_compression_pad_pct_max.
        
          innodb_compression_pad_pct_max
          lets you adjust the maximum amount of space reserved within
          each page to record changes
          to compressed rows, without needing to compress the entire
          page again. The higher the value, the more changes can be
          recorded without recompressing the page. MySQL uses a variable
          amount of free space for the pages within each compressed
          table, only when a designated percentage of compression
          operations
          “fail”
          at runtime, requiring an expensive operation to split the
          compressed page.
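As a rough illustration (the values shown are arbitrary starting points, not recommendations), all three options can be changed at runtime:

SET GLOBAL innodb_compression_level=4;
SET GLOBAL innodb_compression_failure_threshold_pct=10;
SET GLOBAL innodb_compression_pad_pct_max=67;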
      Because working with compressed data sometimes involves keeping
      both compressed and uncompressed versions of a page in memory at
      the same time, when using compression with an OLTP-style workload,
      be prepared to increase the value of the
      innodb_buffer_pool_size
      configuration option.
      Specifying ROW_FORMAT=COMPRESSED or
      KEY_BLOCK_SIZE in CREATE
      TABLE or ALTER TABLE
      statements produces the following warnings if the Barracuda file
      format is not enabled. You can view them with the SHOW
      WARNINGS statement.
| Level | Code | Message |
|---|---|---|
| Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table. |
| Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_format=1 |
| Warning | 1478 | InnoDB: ignoring KEY_BLOCK_SIZE= |
| Warning | 1478 | InnoDB: ROW_FORMAT=COMPRESSED requires innodb_file_per_table. |
| Warning | 1478 | InnoDB: assuming ROW_FORMAT=COMPACT. |
Notes:
By default, these messages are only warnings, not errors, and the table is created without compression, as if the options were not specified.
          When innodb_strict_mode is
          enabled, MySQL generates an error, not a warning, for these
          cases. The table is not created if the current configuration
          does not permit using compressed tables.
      The “non-strict” behavior lets you import a
      mysqldump file into a database that does not
      support compressed tables, even if the source database contained
      compressed tables. In that case, MySQL creates the table in
      ROW_FORMAT=COMPACT instead of preventing the
      operation.
    
      To import the dump file into a new database, and have the tables
      re-created as they exist in the original database, ensure the
      server has the proper settings for the configuration parameters
      innodb_file_format and
      innodb_file_per_table.
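For example, before importing such a dump file you might issue statements like these (the dump file name is hypothetical):

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=Barracuda;
SOURCE dump_with_compressed_tables.sql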
    
      The attribute KEY_BLOCK_SIZE is permitted only
      when ROW_FORMAT is specified as
      COMPRESSED or is omitted. Specifying a
      KEY_BLOCK_SIZE with any other
      ROW_FORMAT generates a warning that you can
      view with SHOW WARNINGS. However, the table is
      uncompressed; the specified KEY_BLOCK_SIZE is
      ignored.
| Level | Code | Message | 
|---|---|---|
| Warning | 1478 |  InnoDB: ignoring KEY_BLOCK_SIZE= | 
      If you are running with
      innodb_strict_mode enabled, the
      combination of a KEY_BLOCK_SIZE with any
      ROW_FORMAT other than
      COMPRESSED generates an error, not a warning,
      and the table is not created.
    
      Table 14.4, “Meaning of CREATE TABLE and ALTER TABLE options”
      summarizes how the various options on CREATE
      TABLE and ALTER TABLE are
      handled.
Table 14.4 Meaning of CREATE TABLE and ALTER TABLE options
| Option | Usage | Description |
|---|---|---|
| ROW_FORMAT=REDUNDANT | Storage format used prior to MySQL 5.0.3 | Less efficient than ROW_FORMAT=COMPACT; for backward compatibility |
| ROW_FORMAT=COMPACT | Default storage format since MySQL 5.0.3 | Stores a prefix of 768 bytes of long column values in the clustered index page, with the remaining bytes stored in an overflow page |
| ROW_FORMAT=DYNAMIC | Available only with innodb_file_format=Barracuda | Stores values within the clustered index page if they fit; if not, stores only a 20-byte pointer to an overflow page (no prefix) |
| ROW_FORMAT=COMPRESSED | Available only with innodb_file_format=Barracuda | Compresses the table and indexes using zlib to a default compressed page size of 8K bytes; implies ROW_FORMAT=DYNAMIC |
| KEY_BLOCK_SIZE= | Available only with innodb_file_format=Barracuda | Specifies compressed page size of 1, 2, 4, 8 or 16 kilobytes; implies ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED |
      Table 14.5, “CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF”
      summarizes error conditions that occur with certain combinations
      of configuration parameters and options on the
      CREATE TABLE or
      ALTER TABLE statements, and how the
      options appear in the output of SHOW TABLE
      STATUS.
    
      When innodb_strict_mode is
      OFF, MySQL creates or alters the table, but
      ignores certain settings as shown below. You can see the warning
      messages in the MySQL error log. When
      innodb_strict_mode is
      ON, these specified combinations of options
      generate errors, and the table is not created or altered. To see
      the full description of the error condition, issue the
      SHOW ERRORS statement. For example:
mysql> CREATE TABLE x (id INT PRIMARY KEY, c INT)
    -> ENGINE=INNODB KEY_BLOCK_SIZE=33333;
ERROR 1005 (HY000): Can't create table 'test.x' (errno: 1478)

mysql> SHOW ERRORS;
+-------+------+-------------------------------------------+
| Level | Code | Message                                   |
+-------+------+-------------------------------------------+
| Error | 1478 | InnoDB: invalid KEY_BLOCK_SIZE=33333.     |
| Error | 1005 | Can't create table 'test.x' (errno: 1478) |
+-------+------+-------------------------------------------+
2 rows in set (0.00 sec)
Table 14.5 CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF
| Syntax | Warning or Error Condition | Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS |
|---|---|---|
| ROW_FORMAT=REDUNDANT | None | REDUNDANT |
| ROW_FORMAT=COMPACT | None | COMPACT |
| ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC or KEY_BLOCK_SIZE is specified | Ignored unless both innodb_file_format=Barracuda and innodb_file_per_table are enabled | COMPACT |
| Invalid KEY_BLOCK_SIZE is specified (not 1, 2, 4, 8 or 16) | KEY_BLOCK_SIZE is ignored | the requested one, or COMPACT by default |
| ROW_FORMAT=COMPRESSED and valid KEY_BLOCK_SIZE are specified | None; KEY_BLOCK_SIZE specified is used, not the 8K default | COMPRESSED |
| KEY_BLOCK_SIZE is specified with REDUNDANT, COMPACT or DYNAMIC row format | KEY_BLOCK_SIZE is ignored | REDUNDANT, COMPACT or DYNAMIC |
| ROW_FORMAT is not one of REDUNDANT, COMPACT, DYNAMIC or COMPRESSED | Ignored if recognized by the MySQL parser. Otherwise, an error is issued. | COMPACT or N/A |
      When innodb_strict_mode is
      ON, MySQL rejects invalid
      ROW_FORMAT or KEY_BLOCK_SIZE
      parameters. For compatibility with earlier versions of MySQL,
      strict mode is not enabled by default; instead, MySQL issues
      warnings (not errors) for ignored invalid parameters.
    
      It is not possible to see the chosen
      KEY_BLOCK_SIZE using SHOW TABLE
      STATUS. The statement SHOW CREATE
      TABLE displays the KEY_BLOCK_SIZE
      (even if it was ignored when creating the table). The real
      compressed page size of the table cannot be displayed by MySQL.
    As InnoDB evolves, data file formats that are not
    compatible with prior versions of InnoDB are
    sometimes required to support new features. To help manage
    compatibility in upgrade and downgrade situations, and systems that
    run different versions of MySQL, InnoDB uses
    named file formats. InnoDB currently supports two
    named file formats, Antelope
    and Barracuda.
        Antelope is the original
        InnoDB file format, which previously did not
        have a name. It supports
        COMPACT and
        REDUNDANT row
        formats for InnoDB tables and is the default
        file format in MySQL 5.6 to ensure maximum compatibility with
        earlier MySQL versions that do not support the Barracuda file
        format.
      
        Barracuda is the newest
        file format. In addition to supporting
        COMPACT and
        REDUNDANT row
        formats, Barracuda supports
        COMPRESSED and
        DYNAMIC row
        formats for InnoDB tables. The features
        associated with
        COMPRESSED and
        DYNAMIC row
        formats include compressed tables (see
        Section 14.8, “InnoDB Row Storage and Row Formats”), off-page storage for long
        column data (see Section 14.8, “InnoDB Row Storage and Row Formats”), and index
        key prefixes up to 3072 bytes
        (innodb_large_prefix).
    This section discusses enabling file formats for new
    InnoDB tables, verifying compatibility of
    different file formats between MySQL releases, identifying the file
    format in use, and downgrading the file format.
      The innodb_file_format
      configuration parameter defines the file format to use for new
      InnoDB tables that are created in
      file_per_table
      tablespaces. Therefore, to use the
      innodb_file_format parameter,
      innodb_file_per_table must be
      enabled.
    
      To create new tables that take advantage of features supported by
      the Barracuda file format,
      including table compression (see
      Section 14.8, “InnoDB Row Storage and Row Formats”), off-page storage for long
      column data (see Section 14.8, “InnoDB Row Storage and Row Formats”), and index
      key prefixes up to 3072 bytes
      (innodb_large_prefix), set
      innodb_file_format to
      Barracuda. You must also specify
      ROW_FORMAT=COMPRESSED or
      ROW_FORMAT=DYNAMIC when creating the table.
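For example, with innodb_file_per_table also enabled, statements like the following (using a hypothetical table name) create a table whose long column values can be stored fully off-page:

SET GLOBAL innodb_file_format=Barracuda;
CREATE TABLE long_rows
 (id INT PRIMARY KEY,
  notes TEXT)
 ROW_FORMAT=DYNAMIC;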
    
      To preclude the use of features supported by the Barracuda file format
      that make your database inaccessible to the built-in
      InnoDB in MySQL 5.1 and prior releases, set
      innodb_file_format to
      Antelope. Alternatively, you can disable
      innodb_file_per_table to have new
      tables created in the
      system tablespace.
      The system tablespace is stored in the original Antelope file
      format.
    
      You can set the value of
      innodb_file_format on the command
      line when you start mysqld, or in the option
      file (my.cnf on Unix,
      my.ini on Windows). You can also change it
      dynamically with the SET GLOBAL statement.
    
mysql> SET GLOBAL innodb_file_format=BARRACUDA;
Query OK, 0 rows affected (0.00 sec)
      Be aware that ALTER TABLE
      operations that recreate InnoDB tables use the
      current innodb_file_format
      setting.
    
Although Oracle recommends using the Barracuda format for new tables where practical, in MySQL 5.6 the default file format is Antelope, for maximum compatibility with replication configurations containing earlier MySQL releases.
InnoDB incorporates several checks to guard against the possible crashes and data corruptions that might occur if you run an older release of the MySQL server on InnoDB data files using the new Barracuda file format. These checks take place when the server is started, and when you first access a table. This section describes these checks, how you can control them, and error and warning conditions that might arise.
Considerations of backward compatibility only apply when using a recent version of InnoDB (the InnoDB Plugin, or MySQL 5.5 and higher with InnoDB) alongside an older one (MySQL 5.1 or earlier, with the built-in InnoDB rather than the InnoDB Plugin). To minimize the chance of compatibility issues, you can standardize on the InnoDB Plugin for all your MySQL 5.1 and earlier database servers.
In general, a newer version of InnoDB may create a table or index that cannot safely be read or written with a prior version of InnoDB without risk of crashes, hangs, wrong results or corruptions. MySQL 5.5 and higher with InnoDB includes a mechanism to guard against these conditions, and to help preserve compatibility among database files and versions of InnoDB. This mechanism lets you take advantage of some new features of an InnoDB release (such as performance improvements and bug fixes), and still preserve the option of using your database with a prior version of InnoDB, by preventing accidental use of new features that create downward-incompatible disk files.
If a version of InnoDB supports a particular file format (whether or not that format is the default), you can query and update any table that requires that format or an earlier format. Only the creation of new tables using new features is limited based on the particular file format enabled. Conversely, if a tablespace contains a table or index that uses a file format that is not supported by the currently running software, it cannot be accessed at all, even for read access.
The only way to “downgrade” an InnoDB tablespace to the earlier Antelope file format is to copy the data to a new table, in a tablespace that uses the earlier format.
      The easiest way to determine the file format of an existing InnoDB
      tablespace is to examine the properties of the table it contains,
      using the SHOW TABLE STATUS command or querying
      the table INFORMATION_SCHEMA.TABLES. If the
      Row_format of the table is reported as
      'Compressed' or 'Dynamic',
      the tablespace containing the table uses the Barracuda format.
      Otherwise, it uses the prior InnoDB file format, Antelope.
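For example, a query such as this (substitute your own schema and table names) reports the row format, from which the file format follows:

SELECT TABLE_SCHEMA, TABLE_NAME, ROW_FORMAT
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t1';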
      Every InnoDB file-per-table tablespace (represented by a
      *.ibd file) is labeled with a file format
      identifier. The system tablespace (represented by the
      ibdata files) is tagged with the
      “highest” file format in use in a group of InnoDB
      database files, and this tag is checked when the files are opened.
    
      Creating a compressed table, or a table with
      ROW_FORMAT=DYNAMIC, updates the file header for
      the corresponding .ibd file and the table type
      in the InnoDB data dictionary with the identifier for the
      Barracuda file format. From that point forward, the table cannot
      be used with a version of InnoDB that does not support this new
      file format. To protect against anomalous behavior, InnoDB version
      5.0.21 and later performs a compatibility check when the table is
      opened. (In many cases, the ALTER
      TABLE statement recreates a table and thus changes its
      properties. The special case of adding or dropping indexes without
      rebuilding the table is described in
      InnoDB Fast Index Creation.)
To avoid confusion, for the purposes of this discussion we define the term “ib-file set” to mean the set of operating system files that InnoDB manages as a unit. The ib-file set includes the following files:
          The system tablespace (one or more ibdata
          files) that contain internal system information (including
          internal catalogs and undo information) and may include user
          data and indexes.
        
          Zero or more single-table tablespaces (also called “file
          per table” files, named *.ibd
          files).
        
          InnoDB log files; usually two, ib_logfile0
          and ib_logfile1. Used for crash recovery
          and in backups.
      An “ib-file set” does not include the corresponding
      .frm files that contain metadata about InnoDB
      tables. The .frm files are created and managed
      by MySQL, and can sometimes get out of sync with the internal
      metadata in InnoDB.
    
Multiple tables, even from more than one database, can be stored in a single “ib-file set”. (In MySQL, a “database” is a logical collection of tables, what other systems refer to as a “schema” or “catalog”.)
        To prevent possible crashes or data corruptions when InnoDB
        opens an ib-file set, it checks that it can fully support the
        file formats in use within the ib-file set. If the system is
        restarted following a crash, or a “fast shutdown”
        (i.e., innodb_fast_shutdown is
        greater than zero), there may be on-disk data structures (such
        as redo or undo entries, or doublewrite pages) that are in a
        “too-new” format for the current software. During
        the recovery process, serious damage can be done to your data
        files if these data structures are accessed. The startup check
        of the file format occurs before any recovery process begins,
        thereby preventing consistency issues with the new tables or
        startup problems for the MySQL server.
      
        Beginning with InnoDB version 1.0.1, the system tablespace
        records an identifier or tag for the “highest” file
        format used by any table in any of the tablespaces that is part
        of the ib-file set. Checks against this file format tag are
        controlled by the configuration parameter
        innodb_file_format_check, which
        is ON by default.
      
        If the file format tag in the system tablespace is newer or
        higher than the highest version supported by the particular
        currently executing software and if
        innodb_file_format_check is
        ON, the following error is issued when the
        server is started:
      
InnoDB: Error: the system tablespace is in a file format that this version doesn't support
        You can also set
        innodb_file_format_check to a file
        format name. Doing so prevents InnoDB from starting if the
        current software does not support the file format specified. It
        also sets the “high water mark” to the value you
        specify. The ability to set
        innodb_file_format_check will
        be useful (with future releases of InnoDB) if you manually
        “downgrade” all of the tables in an ib-file set (as
        described in Downgrading the InnoDB Storage Engine). You can then
        rely on the file format check at startup if you subsequently use
        an older version of InnoDB to access the ib-file set.
      
        In some limited circumstances, you might want to start the
        server and use an ib-file set that is in a new file format that
        is not supported by the software you are using. If you set the
        configuration parameter
        innodb_file_format_check to
        OFF, InnoDB opens the database, but issues
        this warning message in the error log:
      
InnoDB: Warning: the system tablespace is in a file format that this version doesn't support
          This is a dangerous setting, as it permits the recovery
          process to run, possibly corrupting your database if the
          previous shutdown was a crash or “fast shutdown”.
          You should only set
          innodb_file_format_check to
          OFF if you are sure that the previous
          shutdown was done with
          innodb_fast_shutdown=0, so that essentially
          no recovery process occurs.
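For example, to ensure a slow shutdown before an intentional downgrade, you might issue the following statement and then shut the server down:

SET GLOBAL innodb_fast_shutdown=0;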
        The parameter
        innodb_file_format_check
        affects only what happens when a database is opened, not
        subsequently. Conversely, the parameter
        innodb_file_format (which
        enables a specific format) only determines whether or not a new
        table can be created in the enabled format and has no effect on
        whether or not a database can be opened.
      
        The file format tag is a “high water mark”, and as
        such it is increased after the server is started, if a table in
        a “higher” format is created or an existing table
        is accessed for read or write (assuming its format is
        supported). If you access an existing table in a format higher
        than the format the running software supports, the system
        tablespace tag is not updated, but table-level compatibility
        checking applies (and an error is issued), as described in
        Section 14.7.2.2, “Compatibility Check When a Table Is Opened”.
        Any time the high water mark is updated, the value of
        innodb_file_format_check is
        updated as well, so the command SELECT
        @@innodb_file_format_check; displays the name of the
        latest file format known to be used by tables in the currently
        open ib-file set and supported by the currently executing
        software.
When a table is first accessed, InnoDB (including some releases prior to InnoDB 1.0) checks that the file format of the tablespace in which the table is stored is fully supported. This check prevents crashes or corruptions that would otherwise occur when tables using a “too new” data structure are encountered.
        All tables using any file format supported by a release can be
        read or written (assuming the user has sufficient privileges).
        The setting of the system configuration parameter
        innodb_file_format can prevent
        creating a new table that uses a specific file format, even if
        the file format is supported by a given release. Such a setting
        might be used to preserve backward compatibility, but it does
        not prevent accessing any table that uses a supported format.
      
Versions of MySQL older than 5.0.21 cannot reliably use database files created by newer versions if a new file format was used when a table was created. To prevent various error conditions or corruptions, InnoDB checks file format compatibility when it opens a file (for example, upon first access to a table). If the currently running version of InnoDB does not support the file format identified by the table type in the InnoDB data dictionary, MySQL reports the following error:
ERROR 1146 (42S02): Table 'test.t1' doesn't exist
InnoDB also writes a message to the error log:
InnoDB: table test/t1: unknown table type 33
The table type should be equal to the tablespace flags, which contains the file format version as discussed in Section 14.7.3, “Identifying the File Format in Use”.
Versions of InnoDB prior to MySQL 4.1 did not include table format identifiers in the database files, and versions prior to MySQL 5.0.21 did not include a table format compatibility check. Therefore, there is no way to ensure proper operations if a table in a newer file format is used with versions of InnoDB prior to 5.0.21.
The file format management capability in InnoDB 1.0 and higher (tablespace tagging and run-time checks) allows InnoDB to verify as soon as possible that the running version of software can properly process the tables existing in the database.
        If you permit InnoDB to open a database containing files in a
        format it does not support (by setting the parameter
        innodb_file_format_check to
        OFF), the table-level checking described in
        this section still applies.
      
Users are strongly urged not to use database files that contain Barracuda file format tables with releases of InnoDB older than MySQL 5.1 with the InnoDB Plugin. It may be possible to rebuild such tables to use the Antelope format.
      If you enable a different file
      format using the
      innodb_file_format configuration
      option, the change only applies to newly created tables. Also,
      when you create a new table, the tablespace containing the table
      is tagged with the “earliest” or
      “simplest” file format that is required to support
      the table's features. For example, if you enable the
      Barracuda file format, and create a new table
      that does not use the Dynamic or Compressed row format, the new
      tablespace that contains the table is tagged as using the
      Antelope file format.
    
      It is easy to identify the file format used by a given table. The
      table uses the Antelope file format if the row
      format reported by SHOW TABLE STATUS is either
      Compact or Redundant. The
      table uses the Barracuda file format if the row
      format reported by SHOW TABLE STATUS is either
      Compressed or Dynamic.
    
mysql> SHOW TABLE STATUS\G
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 0
 Avg_row_length: 0
    Data_length: 16384
Max_data_length: 0
   Index_length: 16384
      Data_free: 0
 Auto_increment: 1
    Create_time: 2014-11-03 13:32:10
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
1 row in set (0.00 sec)
      You can also identify the file format used by a given table or
      tablespace using InnoDB
      INFORMATION_SCHEMA tables. For example:
    
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME='test/t1'\G 
*************************** 1. row ***************************
     TABLE_ID: 44
         NAME: test/t1
         FLAG: 1
       N_COLS: 6
        SPACE: 30
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact
ZIP_PAGE_SIZE: 0
1 row in set (0.00 sec)
    
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES WHERE NAME='test/t1'\G
*************************** 1. row ***************************
        SPACE: 30
         NAME: test/t1
         FLAG: 0
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact or Redundant
    PAGE_SIZE: 16384
ZIP_PAGE_SIZE: 0
1 row in set (0.00 sec)
      Each InnoDB tablespace file (with a name matching
      *.ibd) is tagged with the file format used to
      create its table and indexes. The way to modify the file format is
      to re-create the table and its indexes. The easiest way to
      recreate a table and its indexes is to use the following command
      on each table that you want to modify:
    
ALTER TABLE t ROW_FORMAT=format_name;
If you are modifying the file format to downgrade to an older MySQL version, there may be incompatibilities in table storage formats that require additional steps. For information about downgrading to a previous MySQL version, see Section 2.11.2, “Downgrading MySQL”.
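For example, to rebuild a hypothetical table t1 so that it uses the Antelope-compatible COMPACT row format before a downgrade, you might issue statements like these (a sketch; the table name is a placeholder):

ALTER TABLE t1 ROW_FORMAT=COMPACT;
SHOW TABLE STATUS LIKE 't1'\G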
    This section discusses how certain InnoDB features, such as table
    compression and off-page
    storage of long columns, are controlled by the
    ROW_FORMAT clause of the
    CREATE TABLE statement. It discusses
    considerations for choosing the right row format and compatibility
    of row formats between MySQL releases.
The storage for rows and associated columns affects performance for queries and DML operations. As more rows fit into a single disk page, queries and index lookups can work faster, less cache memory is required in the InnoDB buffer pool, and less I/O is required to write out updated values for the numeric and short string columns.
The data in each InnoDB table is divided into pages. The pages that make up each table are arranged in a tree data structure called a B-tree index. Table data and secondary indexes both use this type of structure. The B-tree index that represents an entire table is known as the clustered index, which is organized according to the primary key columns. The nodes of the index data structure contain the values of all the columns in that row (for the clustered index) or the index columns and the primary key columns (for secondary indexes).
      Variable-length columns are an exception to this rule. Columns
      such as BLOB and VARCHAR
      that are too long to fit on a B-tree page are stored on separately
      allocated disk pages called
      overflow pages. We call
      such columns off-page
      columns. The values of these columns are stored in
      singly-linked lists of overflow pages, and each such column has
      its own list of one or more overflow pages. In some cases, all or
      a prefix of the long column value is stored in the B-tree, to
      avoid wasting storage and to eliminate the need to read a separate
      page.
    
      The next section describes the clauses you can use with the
      CREATE TABLE and
      ALTER TABLE statements to control
      how these variable-length columns are represented:
      ROW_FORMAT and
      KEY_BLOCK_SIZE. To use these clauses, you might
      also need to change the settings for the
      innodb_file_per_table and
      innodb_file_format configuration
      options.
      You specify the row format for a table with the
      ROW_FORMAT clause of the
      CREATE TABLE and
      ALTER TABLE statements. For
      example:
    
CREATE TABLE t1 (f1 int unsigned) ROW_FORMAT=DYNAMIC ENGINE=INNODB;
    
      InnoDB ROW_FORMAT options
      include COMPACT,
      REDUNDANT,
      DYNAMIC, and
      COMPRESSED. For
      InnoDB tables, rows are stored in
      COMPACT format
      (ROW_FORMAT=COMPACT) by default. Refer to the
      CREATE TABLE documentation for
      additional information about the ROW_FORMAT
      table option.
    
      The physical row structure for InnoDB tables is
      dependent on the ROW_FORMAT that you specify.
      See Section 14.2.6.7, “Physical Row Structure” for more information.
      This section discusses the DYNAMIC and
      COMPRESSED row formats for InnoDB tables. You
      can only create these kinds of tables when the
      innodb_file_format configuration
      option is set to Barracuda. (The Barracuda file
      format also allows the COMPACT and
      REDUNDANT row formats.)
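      For example, assuming the Barracuda prerequisites are in place, you might
      create tables in these row formats as follows (a sketch; the table and
      column names are placeholders, and the KEY_BLOCK_SIZE value is
      illustrative):

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=1;
CREATE TABLE t_dynamic (id INT PRIMARY KEY, doc TEXT) ROW_FORMAT=DYNAMIC ENGINE=INNODB;
CREATE TABLE t_compressed (id INT PRIMARY KEY, doc TEXT) ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8 ENGINE=INNODB;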
    
      When a table is created with ROW_FORMAT=DYNAMIC
      or ROW_FORMAT=COMPRESSED, long column values
      are stored fully off-page, and the clustered index record contains
      only a 20-byte pointer to the overflow page.
    
      Whether any columns are stored off-page depends on the page size
      and the total size of the row. When the row is too long, InnoDB
      chooses the longest columns for off-page storage until the
      clustered index record fits on the B-tree page.
      TEXT and
      BLOB columns that are less than or
      equal to 40 bytes are always stored in-line.
    
      The DYNAMIC row format maintains the efficiency
      of storing the entire row in the index node if it fits (as do the
      COMPACT and REDUNDANT
      formats), but this new format avoids the problem of filling B-tree
      nodes with a large number of data bytes of long columns. The
      DYNAMIC format is based on the idea that if a
      portion of a long data value is stored off-page, it is usually
      most efficient to store all of the value off-page. With
      DYNAMIC format, shorter columns are likely to
      remain in the B-tree node, minimizing the number of overflow pages
      needed for any given row.
    
      The COMPRESSED row format uses similar internal
      details for off-page storage as the DYNAMIC row
      format, with additional storage and performance considerations
      from the table and index data being compressed and using smaller
      page sizes. With the COMPRESSED row format, the
      option KEY_BLOCK_SIZE controls how much column
      data is stored in the clustered index, and how much is placed on
      overflow pages. For full details about the
      COMPRESSED row format, see
      Section 14.6, “InnoDB Table Compression”.
      Early versions of InnoDB used an unnamed file format (now called
      Antelope) for database files.
      With that file format, tables are defined with
      ROW_FORMAT=COMPACT or
      ROW_FORMAT=REDUNDANT. InnoDB stores up to the
      first 768 bytes of variable-length columns (such as
      BLOB and VARCHAR) in the
      index record within the B-tree node, with the remainder stored on
      the overflow pages.
    
      To preserve compatibility with those prior versions, tables
      created with the newest InnoDB default to the
      COMPACT row format. See
      Section 14.8.3, “DYNAMIC and COMPRESSED Row Formats” for information about
      the newer DYNAMIC and
      COMPRESSED row formats.
    
      With the Antelope file format, if the value of a column is 768
      bytes or less, no overflow page is needed, and some savings in I/O
      may result, since the value is in the B-tree node. This works well
      for relatively short BLOBs, but may cause
      B-tree nodes to fill with data rather than key values, reducing
      their efficiency. Tables with many BLOB columns
      could cause B-tree nodes to become too full of data, and contain
      too few rows, making the entire index less efficient than if the
      rows were shorter or if the column values were stored off-page.
    As a DBA, you must manage disk I/O to keep the I/O subsystem from
    becoming saturated, and manage disk space to avoid filling up
    storage devices. The ACID design
    model requires a certain amount of I/O that might seem redundant,
    but helps to ensure data reliability. Within these constraints,
    InnoDB tries to optimize the database work and
    the organization of disk files to minimize the amount of disk I/O.
    Sometimes, I/O is postponed until the database is not busy, or until
    everything needs to be brought to a consistent state, such as during
    a database restart after a fast
    shutdown.
  
    This section discusses the main considerations for I/O and disk
    space with the default kind of MySQL tables (also known as
    InnoDB tables):
Controlling the amount of background I/O used to improve query performance.
Enabling or disabling features that provide extra durability at the expense of additional I/O.
Organizing tables into many small files, a few larger files, or a combination of both.
Balancing the size of redo log files against the I/O activity that occurs when the log files become full.
How to reorganize a table for optimal query performance.
      InnoDB uses asynchronous disk I/O where
      possible, by creating a number of threads to handle I/O
      operations, while permitting other database operations to proceed
      while the I/O is still in progress. On Linux and Windows
      platforms, InnoDB uses the available OS and library functions to
      perform “native” asynchronous I/O. On other
      platforms, InnoDB still uses I/O threads, but the threads may
      actually wait for I/O requests to complete; this technique is
      known as “simulated” asynchronous I/O.
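      For example, on a Linux server you might adjust the number of background
      I/O threads and confirm that native asynchronous I/O is enabled with
      configuration file settings such as the following (a sketch; the thread
      counts are illustrative, not recommendations):

[mysqld]
innodb_use_native_aio=1
innodb_read_io_threads=8
innodb_write_io_threads=8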
If InnoDB can determine there is a high probability that data might be needed soon, it performs read-ahead operations to bring that data into the buffer pool so that it is available in memory. Making a few large read requests for contiguous data can be more efficient than making several small, spread-out requests. There are two read-ahead heuristics in InnoDB:
          In sequential read-ahead, if InnoDB notices
          that the access pattern to a segment in the tablespace is
          sequential, it posts in advance a batch of reads of database
          pages to the I/O system.
        
          In random read-ahead, if InnoDB notices
          that some area in a tablespace seems to be in the process of
          being fully read into the buffer pool, it posts the remaining
          reads to the I/O system.
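      Both read-ahead heuristics can be tuned at runtime. For example (a
      sketch; the threshold value shown is simply the default):

SET GLOBAL innodb_read_ahead_threshold=56;
SET GLOBAL innodb_random_read_ahead=ON;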
      InnoDB uses a novel file flush technique
      involving a structure called the
      doublewrite buffer,
      which is enabled by default
      (innodb_doublewrite=ON). It adds
      safety to recovery following a crash or power outage, and improves
      performance on most varieties of Unix by reducing the need for
      fsync() operations.
    
      Before writing pages to a data file, InnoDB
      first writes them to a contiguous tablespace area called the
      doublewrite buffer. Only after the write and the flush to the
      doublewrite buffer has completed does InnoDB
      write the pages to their proper positions in the data file. If
      there is an operating system, storage subsystem, or
      mysqld process crash in the middle of a page
      write (causing a torn page
      condition), InnoDB can later find a good copy
      of the page from the doublewrite buffer during recovery.
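      To confirm whether the doublewrite buffer is active on a running server,
      you can check the setting (a minimal sketch):

mysql> SHOW VARIABLES LIKE 'innodb_doublewrite';

      The setting can only be changed at server startup, for example by
      specifying innodb_doublewrite=0 in the configuration file.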
      The data files that you define in the configuration file form the
      InnoDB
      system tablespace.
      The files are logically concatenated to form the tablespace. There
      is no striping in use. Currently, you cannot define where within
      the tablespace your tables are allocated. In a newly created
      tablespace, InnoDB allocates space starting
      from the first data file.
    
      To avoid the issues that come with storing all tables and indexes
      inside the system tablespace, you can turn on the
      innodb_file_per_table
      configuration option, which stores each newly created table in a
      separate tablespace file (with extension .ibd).
      For tables stored this way, there is less fragmentation within the
      disk file, and when the table is truncated, the space is returned
      to the operating system rather than still being reserved by InnoDB
      within the system tablespace.
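      For example (a sketch; the table name is a placeholder):

SET GLOBAL innodb_file_per_table=1;
CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=INNODB;
-- The data and indexes for t1 now reside in a file named t1.ibd
-- inside the database directory, rather than in the system tablespace.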
      Each tablespace consists of database
      pages. Every tablespace in a
      MySQL instance has the same page
      size. By default, all tablespaces have a page size of 16KB;
      you can reduce the page size to 8KB or 4KB by specifying the
      innodb_page_size option when you
      create the MySQL instance.
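      For example, to initialize a new instance with 8KB pages, you would
      specify the option before creating the data directory (a sketch; the
      setting cannot be changed after the instance is created):

[mysqld]
innodb_page_size=8192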
    
      The pages are grouped into
      extents of size 1MB (64
      consecutive 16KB pages, or 128 8KB pages, or 256 4KB pages). The
      “files” inside a tablespace are called
      segments in
      InnoDB. (These segments are different from the
      rollback segment,
      which actually contains many tablespace segments.)
    
      When a segment grows inside the tablespace,
      InnoDB allocates the first 32 pages to it one
      at a time. After that, InnoDB starts to
      allocate whole extents to the segment. InnoDB
      can add up to 4 extents at a time to a large segment to ensure
      good sequentiality of data.
    
      Two segments are allocated for each index in
      InnoDB. One is for nonleaf nodes of the B-tree,
      the other is for the leaf nodes. Keeping the leaf nodes contiguous
      on disk enables better sequential I/O operations, because these
      leaf nodes contain the actual table data.
    
      Some pages in the tablespace contain bitmaps of other pages, and
      therefore a few extents in an InnoDB tablespace
      cannot be allocated to segments as a whole, but only as individual
      pages.
    
      When you ask for available free space in the tablespace by issuing
      a SHOW TABLE STATUS statement,
      InnoDB reports the extents that are definitely
      free in the tablespace. InnoDB always reserves
      some extents for cleanup and other internal purposes; these
      reserved extents are not included in the free space.
    
      When you delete data from a table, InnoDB
      contracts the corresponding B-tree indexes. Whether the freed
      space becomes available for other users depends on whether the
      pattern of deletes frees individual pages or extents to the
      tablespace. Dropping a table or deleting all rows from it is
      guaranteed to release the space to other users, but remember that
      deleted rows are physically removed only by the
      purge operation, which happens
      automatically some time after they are no longer needed for
      transaction rollbacks or consistent reads. (See
      Section 14.2.3, “InnoDB Multi-Versioning”.)
    
To see information about the tablespace, use the Tablespace Monitor. See Section 14.14, “InnoDB Monitors”.
      The maximum row length, except for variable-length columns
      (VARBINARY,
      VARCHAR,
      BLOB and
      TEXT), is slightly less than half
      of a database page. That is, the maximum row length is about 8000
      bytes. LONGBLOB and
      LONGTEXT columns
      must be less than 4GB, and the total row length, including
      BLOB and
      TEXT columns, must be less than
      4GB.
    
      If a row is less than half a page long, all of it is stored
      locally within the page. If it exceeds half a page,
      variable-length columns are chosen for external off-page storage
      until the row fits within half a page. For a column chosen for
      off-page storage, InnoDB stores the first 768
      bytes locally in the row, and the rest externally into overflow
      pages. Each such column has its own list of overflow pages. The
      768-byte prefix is accompanied by a 20-byte value that stores the
      true length of the column and points into the overflow list where
      the rest of the value is stored.
Making your log files very large may reduce disk I/O during checkpointing. It often makes sense to set the total size of the log files as large as the buffer pool or even larger. Although in the past large log files could make crash recovery take excessive time, starting with MySQL 5.5, performance enhancements to crash recovery make it possible to use large log files with fast startup after a crash. (Strictly speaking, this performance improvement is available for MySQL 5.1 with the InnoDB Plugin 1.0.7 and higher. It is with MySQL 5.5 that this improvement is available in the default InnoDB storage engine.)
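For example, for a server with a 4GB buffer pool you might configure redo log files whose combined size roughly matches the buffer pool (a sketch; the values are illustrative, not recommendations):

[mysqld]
innodb_buffer_pool_size=4G
innodb_log_file_size=2G
innodb_log_files_in_group=2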
      InnoDB implements a
      checkpoint mechanism known
      as fuzzy
      checkpointing. InnoDB flushes modified
      database pages from the buffer pool in small batches. There is no
      need to flush the buffer pool in one single batch, which would
      disrupt processing of user SQL statements during the checkpointing
      process.
    
      During crash recovery,
      InnoDB looks for a checkpoint label written to
      the log files. It knows that all modifications to the database
      before the label are present in the disk image of the database.
      Then InnoDB scans the log files forward from
      the checkpoint, applying the logged modifications to the database.
Random insertions into or deletions from a secondary index can cause the index to become fragmented. Fragmentation means that the physical ordering of the index pages on the disk is not close to the index ordering of the records on the pages, or that there are many unused pages in the 64-page blocks that were allocated to the index.
      One symptom of fragmentation is that a table takes more space than
      it “should” take. Exactly how much more is
      difficult to determine. All InnoDB data and
      indexes are stored in B-trees, and their
      fill factor may vary from
      50% to 100%. Another symptom of fragmentation is that a table scan
      such as this takes more time than it “should” take:
    
SELECT COUNT(*) FROM t WHERE non_indexed_column <> 12345;
The preceding query requires MySQL to perform a full table scan, the slowest type of query for a large table.
      To speed up index scans, you can periodically perform a
      “null” ALTER TABLE
      operation, which causes MySQL to rebuild the table:
    
ALTER TABLE tbl_name ENGINE=INNODB
      As of MySQL 5.6.3, you can also use
      ALTER TABLE tbl_name FORCE to perform a
      “null” alter operation that rebuilds the table.
      Previously the FORCE option was recognized but
      ignored.
    
      As of MySQL 5.6.17, both
      ALTER TABLE tbl_name ENGINE=INNODB and
      ALTER TABLE tbl_name FORCE use
      online DDL (ALGORITHM=INPLACE). For more information, see
      Section 14.10.1, “Overview of Online DDL”.
    
Another way to perform a defragmentation operation is to use mysqldump to dump the table to a text file, drop the table, and reload it from the dump file.
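For example (a sketch; db_name and t are placeholders, and the dump file name is arbitrary):

shell> mysqldump db_name t > t_dump.sql
mysql> DROP TABLE db_name.t;
shell> mysql db_name < t_dump.sql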
      If the insertions into an index are always ascending and records
      are deleted only from the end, the InnoDB
      filespace management algorithm guarantees that fragmentation in
      the index does not occur.
      To reclaim operating system disk space when
      truncating an
      InnoDB table, the table must be stored in its
      own .ibd file. For a table to
      be stored in its own .ibd
      file, innodb_file_per_table must be
      enabled when the table is created. Additionally, there cannot be a
      foreign key constraint
      between the table being truncated and other tables, otherwise the
      TRUNCATE TABLE operation fails. A foreign key
      constraint between two columns in the same table, however, is
      permitted.
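      For example (a sketch; the table and column names are placeholders):

SET GLOBAL innodb_file_per_table=1;
CREATE TABLE t1 (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=INNODB;
-- ... load data, then later ...
TRUNCATE TABLE t1;  -- drops and re-creates t1.ibd, returning the freed space to the operating system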
    
      When a table is truncated, it is dropped and re-created in a new
      .ibd file, and the freed space is returned to
      the operating system. This is in contrast to truncating
      InnoDB tables that are stored within the
      InnoDB
      system tablespace
      (tables created when
      innodb_file_per_table=OFF), where only
      InnoDB can use the freed space after the table
      is truncated.
    
      The ability to truncate tables and return disk space to the
      operating system also means that
      physical backups can
      be smaller. Truncating tables that are stored in the system
      tablespace (tables created when
      innodb_file_per_table=OFF) leaves blocks of
      unused space in the system tablespace.
    The online DDL feature builds
    on the InnoDB Fast Index Creation feature that is
    available in MySQL 5.1 and MySQL 5.5. The
    InnoDB Fast Index Creation feature optimized
    CREATE INDEX and DROP INDEX to
    avoid table-copying behavior. The
    online DDL feature,
    introduced in MySQL 5.6, enhances many other types of
    ALTER TABLE operations to avoid table
    copying, blocking DML operations
    while DDL is in progress, or both.
  
The online DDL feature has the following benefits:
It improves responsiveness and availability in busy production environments, where making a table unavailable for minutes or hours whenever you modify its indexes or column definitions is not practical.
        It lets you adjust the balance between performance and
        concurrency during the DDL operation, by choosing whether to
        block access to the table entirely
        (LOCK=EXCLUSIVE clause), allow queries but
        not DML (LOCK=SHARED clause), or allow full
        query and DML access to the table (LOCK=NONE
        clause). When you omit the LOCK clause or
        specify LOCK=DEFAULT, MySQL allows as much
        concurrency as possible depending on the type of operation.
      
By doing the changes in-place where possible, rather than creating a new copy of the table, it avoids temporary increases in disk space usage and the I/O overhead of copying the table and reconstructing all the secondary indexes.
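For example, the LOCK clause described above can be combined with the ALGORITHM clause in a single statement (a sketch; the table, column, and index names are placeholders):

ALTER TABLE tbl ADD INDEX i1 (c1), ALGORITHM=INPLACE, LOCK=NONE;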
    MySQL Cluster's NDB storage engine
    also supports online table schema changes, but uses its own syntax
    that is not compatible with that used for InnoDB
    online operations. For more information, see
    Section 13.1.7.2, “ALTER TABLE Online Operations in MySQL Cluster”.
      Historically, many DDL operations
      on InnoDB tables were expensive. Many
      ALTER TABLE operations worked by
      creating a new, empty table defined with the requested table
      options and indexes, then copying the existing rows to the new
      table one-by-one, updating the indexes as the rows were inserted.
      After all rows from the original table were copied, the old table
      was dropped and the copy was renamed with the name of the original
      table.
    
      MySQL 5.5, and MySQL 5.1 with the InnoDB Plugin, optimized
      CREATE INDEX and DROP INDEX
      to avoid the table-copying behavior. That feature was known as
      Fast Index
      Creation. MySQL 5.6 enhances many other types of
      ALTER TABLE operations to avoid
      copying the table. Another enhancement allows
      SELECT queries and
      INSERT,
      UPDATE, and
      DELETE
      (DML) statements to proceed while
      the table is being altered. This combination of features is now
      known as online DDL.
    
This new mechanism also means that you can generally speed the overall process of creating and loading a table and associated indexes by creating the table without any secondary indexes, then adding the secondary indexes after the data is loaded.
      Although no syntax changes are required in the
      CREATE INDEX or
      DROP INDEX commands, some factors
      affect the performance, space usage, and semantics of this
      operation (see Section 14.10.9, “Limitations of Online DDL”).
    
      The online DDL enhancements in MySQL 5.6 improve many DDL
      operations that formerly required a table copy, blocked DML
      operations on the table, or both.
      Table 14.6, “Summary of Online Status for DDL Operations” shows the
      variations of the ALTER TABLE
      statement and shows how the online DDL feature applies to each
      one.
    
      With the exception of ALTER TABLE
      partitioning clauses, online DDL operations for partitioned
      InnoDB tables follow the same rules that apply
      to regular InnoDB tables. For more information,
      see Section 14.10.8, “Online DDL for Partitioned InnoDB Tables”.
          The “In-Place?” column shows which operations
          allow the ALGORITHM=INPLACE clause; the
          preferred value is “Yes”.
        
          The “Copies Table?” column shows which operations
          are able to avoid the expensive table-copying operation; the
          preferred value is “No”. This column is mostly
          the reverse of the “In-Place?” column, except
          that a few operations allow
          ALGORITHM=INPLACE but still involve some
          amount of table copying.
        
          The “Allows Concurrent DML?” column shows which
          operations can be performed fully online; the preferred value
          is “Yes”. You can specify
          LOCK=NONE to assert that full concurrency
          is allowed during the DDL, but MySQL automatically allows this
          level of concurrency when possible. When concurrent DML is
          allowed, concurrent queries are also always allowed.
        
          The “Allows Concurrent Queries?” column shows
          which DDL operations allow queries on the table while the
          operation is in progress; the preferred value is
          “Yes”. Concurrent query is allowed during all
          online DDL operations. It is shown with “Yes”
          listed for all cells, for reference purposes. You can specify
          LOCK=SHARED to assert that concurrent
          queries are allowed during the DDL, but MySQL automatically
          allows this level of concurrency when possible.
        
The “Notes” column explains any exceptions to the “yes/no” values of the other columns, such as when the answer depends on the setting of a configuration option or some other clause in the DDL statement. The values “Yes*” and “No*” indicate that an answer depends on these additional notes.
Table 14.6 Summary of Online Status for DDL Operations
| Operation | In-Place? | Copies Table? | Allows Concurrent DML? | Allows Concurrent Query? | Notes | 
|---|---|---|---|---|---|
| CREATE INDEX, ADD INDEX | Yes* | No* | Yes | Yes | Some restrictions for FULLTEXT index; see next row. |
| ADD FULLTEXT INDEX | Yes | No* | No | Yes | Creating the first FULLTEXT index for a table involves a table copy, unless there is a user-supplied FTS_DOC_ID column. Subsequent FULLTEXT indexes on the same table can be created in-place. |
| DROP INDEX | Yes | No | Yes | Yes | Modifies .frm file only, not the data file. |
| OPTIMIZE TABLE | Yes | Yes | Yes | Yes | Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. OPTIMIZE TABLE using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes. |
| Set default value for a column | Yes | No | Yes | Yes | Modifies .frm file only, not the data file. |
| Change auto-increment value for a column | Yes | No | Yes | Yes | Modifies a value stored in memory, not the data file. |
| Add a foreign key constraint | Yes* | No* | Yes | Yes | To avoid copying the table, disable foreign_key_checks during constraint creation. |
| Drop a foreign key constraint | Yes | No | Yes | Yes | The foreign_key_checks option can be enabled or disabled. |
| Rename a column | Yes* | No* | Yes* | Yes | To allow concurrent DML, keep the same data type and only change the column name. |
| Add a column | Yes | Yes | Yes* | Yes | Concurrent DML is not allowed when adding an auto-increment column. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Drop a column | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Reorder columns | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Change ROW_FORMAT property | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Change KEY_BLOCK_SIZE property | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Make column NULL | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Make column NOT NULL | Yes* | Yes | Yes | Yes | When SQL_MODE includes strict_trans_tables or strict_all_tables, the operation fails if the column contains any nulls. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. |
| Change data type of column | No | Yes | No | Yes | |
| Add primary key | Yes* | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. ALGORITHM=INPLACE is not allowed under certain conditions if columns have to be converted to NOT NULL. See Example 14.9, “Creating and Dropping the Primary Key”. |
| Drop primary key and add another | Yes | Yes | Yes | Yes | ALGORITHM=INPLACE is only allowed when you add a new primary key in the same ALTER TABLE; the data is reorganized substantially, so it is still an expensive operation. |
| Drop primary key | No | Yes | No | Yes | Restrictions apply when you drop a primary key without adding a new one in the same ALTER TABLE statement. |
| Convert character set | No | Yes | No | Yes | Rebuilds the table if the new character encoding is different. |
| Specify character set | No | Yes | No | Yes | Rebuilds the table if the new character encoding is different. |
| Rebuild with FORCE option | Yes | Yes | Yes | Yes | Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. Table rebuild using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes. |
| Rebuild with “null” ALTER TABLE ... ENGINE=INNODB | Yes | Yes | Yes | Yes | Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. Table rebuild using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes. |
| Set table-level persistent statistics options (STATS_PERSISTENT, STATS_AUTO_RECALC, STATS_SAMPLE_PAGES) | Yes | No | Yes | Yes | Modifies .frm file only, not the data file. |
The following sections show the basic syntax and usage notes related to online DDL for each of the major operations that can be performed with concurrent DML, in-place, or both:
          Create secondary indexes:
          CREATE INDEX name ON table (col_list)
          or
          ALTER TABLE table ADD INDEX name (col_list)
          (Creating a FULLTEXT index still requires locking the
          table.)

          Drop secondary indexes:
          DROP INDEX name ON table;
          or
          ALTER TABLE table DROP INDEX name
      Creating and dropping secondary indexes on
      InnoDB tables skips the table-copying behavior,
      the same as in MySQL 5.5 and MySQL 5.1 with the
      InnoDB Plugin.
    
      In MySQL 5.6 and higher, the table remains available for read and
      write operations while the index is being created or dropped. The
      CREATE INDEX or
      DROP INDEX statement only finishes
      after all transactions that are accessing the table are completed,
      so that the initial state of the index reflects the most recent
      contents of the table. Previously, modifying the table while an
      index was being created or dropped typically resulted in a
      deadlock that cancelled the
      INSERT,
      UPDATE, or
      DELETE statement on the table.
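      For example, both of the following statements run without blocking
      concurrent reads and writes on the table (a sketch; the table, column,
      and index names are placeholders):

CREATE INDEX i1 ON t (c1);
DROP INDEX i1 ON t;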
          Set a default value for a column:
          ALTER TABLE tbl ALTER COLUMN col SET DEFAULT literal
          or
          ALTER TABLE tbl ALTER COLUMN col DROP DEFAULT
          The default values for columns are stored in the
          .frm file for the table,
          not the InnoDB
          data dictionary.
        
          Changing the
          auto-increment
          value for a column:
          ALTER TABLE table AUTO_INCREMENT=next_value;
Especially in a distributed system using replication or sharding, you sometimes reset the auto-increment counter for a table to a specific value. The next row inserted into the table uses the specified value for its auto-increment column. You might also use this technique in a data warehousing environment where you periodically empty all the tables and reload them, and you can restart the auto-increment sequence from 1.
          Renaming a column:
          ALTER TABLE tbl CHANGE old_col_name new_col_name datatype
          When you keep the same data type and [NOT]
          NULL attribute, only changing the column name, this
          operation can always be performed online.
        
          As part of this enhancement, you can now rename a column that
          is part of a foreign key constraint, which was not allowed
          before. The foreign key definition is automatically updated to
          use the new column name. Renaming a column participating in a
          foreign key only works with the in-place mode of
          ALTER TABLE. If you use the
          ALGORITHM=COPY clause, or some other
          condition causes the command to use
          ALGORITHM=COPY behind the scenes, the
          ALTER TABLE statement will fail.
Adding or dropping a foreign key constraint:
ALTER TABLE tbl1 ADD CONSTRAINT fk_name FOREIGN KEY index (col1) REFERENCES tbl2 (col2) referential_actions;
ALTER TABLE tbl DROP FOREIGN KEY fk_name;
          Dropping a foreign key can be performed online with the
          foreign_key_checks option
          enabled or disabled. Creating a foreign key online requires
          foreign_key_checks to be
          disabled.
        
          If you do not know the names of the foreign key constraints on
          a particular table, issue the following statement and find the
          constraint name in the CONSTRAINT clause
          for each foreign key:
        
show create table table\G
          Or, query the
          information_schema.table_constraints
          table and use the constraint_name and
          constraint_type columns to identify the
          foreign key names.
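          For example (a sketch; the schema and table names are placeholders):

SELECT constraint_name
  FROM information_schema.table_constraints
 WHERE table_schema = 'db_name'
   AND table_name = 'tbl'
   AND constraint_type = 'FOREIGN KEY';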
        
As a consequence of this enhancement, you can now also drop a foreign key and its associated index in a single statement, which previously required separate statements in a strict order:
ALTER TABLE table DROP FOREIGN KEY constraint, DROP INDEX index;
      If foreign keys are
      already present in the table being altered (that is, it is a
      child table containing any
      FOREIGN KEY ... REFERENCE clauses), additional
      restrictions apply to online DDL operations, even those not
      directly involving the foreign key columns:
Concurrent DML is disallowed during online DDL operations on such child tables. (This restriction is being evaluated as a bug and might be lifted.)
          An ALTER TABLE on the child
          table could also wait for another transaction to commit, if a
          change to the parent table caused associated changes in the
          child table through an ON UPDATE or
          ON DELETE clause using the
          CASCADE or SET NULL
          parameters.
      In the same way, if a table is the
      parent table in a foreign
      key relationship, even though it does not contain any
      FOREIGN KEY clauses, it could wait for the
      ALTER TABLE to complete if an
      INSERT,
      UPDATE, or
      DELETE statement caused an
      ON UPDATE or ON DELETE
      action in the child table.
      Any ALTER TABLE operation run with
      the ALGORITHM=COPY clause prevents concurrent
      DML operations. Concurrent queries are still allowed. That is, a
      table-copying operation always includes at least the concurrency
      restrictions of LOCK=SHARED (allow queries but
      not DML). You can further restrict concurrency for such operations
      by specifying LOCK=EXCLUSIVE (prevent DML and
      queries).
      Some other ALTER TABLE operations
      allow concurrent DML but still require a table copy. However, the
      table copy for these operations is faster than it was in MySQL 5.5
      and prior.
Adding, dropping, or reordering columns.
Adding or dropping a primary key.
          Changing the ROW_FORMAT or
          KEY_BLOCK_SIZE properties for a table.
        
Changing the nullable status for a column.
          Rebuilding a table with the FORCE option
        
          Rebuilding a table using a “null” ALTER
          TABLE ... ENGINE=INNODB statement
        As your database schema evolves with new columns, data types,
        constraints, indexes, and so on, keep your
        CREATE TABLE statements up to
        date with the latest table definitions. Even with the
        performance improvements of online DDL, it is more efficient to
        create stable database structures at the beginning, rather than
        creating part of the schema and then issuing
        ALTER TABLE statements afterward.
      
The main exception to this guideline is for secondary indexes on tables with large numbers of rows. It is typically most efficient to create the table with all details specified except the secondary indexes, load the data, then create the secondary indexes. You can use the same technique with foreign keys (load the data first, then set up the foreign keys) if you know the initial data is clean and do not need consistency checks during the loading process.
        Whatever sequence of CREATE
        TABLE, CREATE INDEX,
        ALTER TABLE, and similar
        statements went into putting a table together, you can capture
        the SQL needed to reconstruct the current form of the table by
        issuing the statement SHOW CREATE TABLE
        table\G (uppercase
        \G required for tidy formatting). This output
        shows clauses such as numeric precision, NOT
        NULL, and CHARACTER SET that are
        sometimes added behind the scenes, and you might otherwise leave
        out when cloning the table on a new system or setting up foreign
        key columns with identical type.
Online DDL improves several aspects of MySQL operation, such as performance, concurrency, availability, and scalability:
Because queries and DML operations on the table can proceed while the DDL is in progress, applications that access the table are more responsive. Reduced locking and waiting for other resources all throughout the MySQL server leads to greater scalability, even for operations not involving the table being altered.
For in-place operations, by avoiding the disk I/O and CPU cycles to rebuild the table, you minimize the overall load on the database and maintain good performance and high throughput during the DDL operation.
For in-place operations, because less data is read into the buffer pool than if all the data was copied, you avoid purging frequently accessed data from memory, which formerly could cause a temporary performance dip after a DDL operation.
      If an online operation requires temporary files,
      InnoDB creates them in the temporary file
      directory, not the directory containing the original table. If
      this directory is not large enough to hold such files, you may
      need to set the tmpdir system
      variable to a different directory. (See
      Section B.5.4.4, “Where MySQL Stores Temporary Files”.)
      While an InnoDB table is being changed by a DDL operation, the
      table may or may not be
      locked, depending on the
      internal workings of that operation and the
      LOCK clause of the ALTER
      TABLE statement. By default, MySQL uses as little
      locking as possible during a DDL operation; you specify the clause
      either to make the locking more restrictive than it normally would
      be (thus limiting concurrent DML, or DML and queries), or to
      ensure that some expected degree of locking is allowed for an
      operation. If the LOCK clause specifies a level
      of locking that is not available for that specific kind of DDL
      operation, such as LOCK=SHARED or
      LOCK=NONE while creating or dropping a primary
      key, the clause works like an assertion, causing the statement to
      fail with an error. The following list shows the different
      possibilities for the LOCK clause, from the
      most permissive to the most restrictive:
          For DDL operations with LOCK=NONE, both
          queries and concurrent DML are allowed. This clause makes the
          ALTER TABLE fail if the kind of
          DDL operation cannot be performed with the requested type of
          locking, so specify LOCK=NONE if keeping
          the table fully available is vital and it is OK to cancel the
          DDL if that is not possible. For example, you might use this
          clause in DDLs for tables involving customer signups or
          purchases, to avoid making those tables unavailable by
          mistakenly issuing an expensive ALTER
          TABLE statement.
        
          For DDL operations with LOCK=SHARED, any
          writes to the table (that is, DML operations) are blocked, but
          the data in the table can be read. This clause makes the
          ALTER TABLE fail if the kind of
          DDL operation cannot be performed with the requested type of
          locking, so specify LOCK=SHARED if keeping
          the table available for queries is vital and it is OK to
          cancel the DDL if that is not possible. For example, you might
          use this clause in DDLs for tables in a data warehouse, where
          it is OK to delay data load operations until the DDL is
          finished, but queries cannot be delayed for long periods.
        
          For DDL operations with LOCK=DEFAULT, or
          with the LOCK clause omitted, MySQL uses
          the lowest level of locking that is available for that kind of
          operation, allowing concurrent queries, DML, or both wherever
          possible. This is the setting to use when making pre-planned,
          pre-tested changes that you know will not cause any
          availability problems based on the workload for that table.
        
          For DDL operations with LOCK=EXCLUSIVE,
          both queries and DML operations are blocked. This clause makes
          the ALTER TABLE fail if the
          kind of DDL operation cannot be performed with the requested
          type of locking, so specify LOCK=EXCLUSIVE
          if the primary concern is finishing the DDL in the shortest
          time possible, and it is OK to make applications wait when
          they try to access the table. You might also use
          LOCK=EXCLUSIVE if the server is supposed to
          be idle, to avoid unexpected accesses to the table.
An online DDL statement for an InnoDB table always waits for currently executing transactions that are accessing the table to commit or roll back, because it requires exclusive access to the table for a brief period while the DDL statement is being prepared. Likewise, it requires exclusive access to the table for a brief time before finishing. Thus, an online DDL statement waits for any transactions that are started while the DDL is in progress, and query or modify the table, to commit or roll back before the DDL completes.
Because there is some processing work involved with recording the changes made by concurrent DML operations, then applying those changes at the end, an online DDL operation could take longer overall than the old-style mechanism that blocks table access from other sessions. The reduction in raw performance is balanced against better responsiveness for applications that use the table. When evaluating the ideal techniques for changing table structure, consider end-user perception of performance, based on factors such as load times for web pages.
      A newly created InnoDB secondary index contains only the committed
      data in the table at the time the CREATE
      INDEX or ALTER TABLE
      statement finishes executing. It does not contain any uncommitted
      values, old versions of values, or values marked for deletion but
      not yet removed from the old index.
The raw performance of an online DDL operation is largely determined by whether the operation is performed in-place, or requires copying and rebuilding the entire table. See Table 14.6, “Summary of Online Status for DDL Operations” to see what kinds of operations can be performed in-place, and any requirements for avoiding table-copy operations.
The performance speedup from in-place DDL applies to operations on secondary indexes, not to the primary key index. The rows of an InnoDB table are stored in a clustered index organized based on the primary key, forming what some database systems call an “index-organized table”. Because the table structure is so closely tied to the primary key, redefining the primary key still requires copying the data.
      When an operation on the primary key uses
      ALGORITHM=INPLACE, even though the data is
      still copied, it is more efficient than using
      ALGORITHM=COPY because:
          No undo logging or associated redo logging is required for
          ALGORITHM=INPLACE. These operations add
          overhead to DDL statements that use
          ALGORITHM=COPY.
        
The secondary index entries are pre-sorted, and so can be loaded in order.
The change buffer is not used, because there are no random-access inserts into the secondary indexes.
      To judge the relative performance of online DDL operations, you
      can run such operations on a big InnoDB table
      using current and earlier versions of MySQL. You can also run all
      the performance tests under the latest MySQL version, simulating
      the previous DDL behavior for the “before” results,
      by setting the old_alter_table
      system variable. Issue the statement set
      old_alter_table=1 in the session, and measure DDL
      performance to record the “before” figures. Then
      set old_alter_table=0 to re-enable the newer,
      faster behavior, and run the DDL operations again to record the
      “after” figures.
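      For example, a rough before-and-after comparison might look like this
      (a sketch; the table, column, and index names are placeholders):

SET old_alter_table=1;
ALTER TABLE t ADD INDEX i1 (c1);   -- time this statement: old table-copying behavior
DROP INDEX i1 ON t;
SET old_alter_table=0;
ALTER TABLE t ADD INDEX i1 (c1);   -- time this statement: online DDL behavior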
    
For a basic idea of whether a DDL operation does its changes in-place or performs a table copy, look at the “rows affected” value displayed after the command finishes. For example, here are lines you might see after doing different types of DDL operations:
Changing the default value of a column (super-fast, does not affect the table data at all):
Query OK, 0 rows affected (0.07 sec)
          Adding an index (takes time, but 0 rows
          affected shows that the table is not copied):
        
Query OK, 0 rows affected (21.42 sec)
Changing the data type of a column (takes substantial time and does require rebuilding all the rows of the table):
Query OK, 1671168 rows affected (1 min 35.54 sec)
For example, before running a DDL operation on a big table, you might check whether the operation will be fast or slow as follows:
Clone the table structure.
Populate the cloned table with a tiny amount of data.
Run the DDL operation on the cloned table.
Check whether the “rows affected” value is zero or not. A non-zero value means the operation will require rebuilding the entire table, which might require special planning. For example, you might do the DDL operation during a period of scheduled downtime, or on each replication slave server one at a time.
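The procedure might look like this in practice (a sketch; t and c1 are placeholders, and the MODIFY clause stands in for whatever DDL change you are planning):

CREATE TABLE t_clone LIKE t;
INSERT INTO t_clone SELECT * FROM t LIMIT 100;
ALTER TABLE t_clone MODIFY c1 BIGINT UNSIGNED;  -- examine the “rows affected” value reported for this statement
DROP TABLE t_clone;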
      For a deeper understanding of the reduction in MySQL processing,
      examine the performance_schema and
      INFORMATION_SCHEMA tables related to
      InnoDB before and after DDL operations, to see
      the number of physical reads, writes, memory allocations, and so
      on.
      Typically, you do not need to do anything special to enable
      online DDL when using the
      ALTER TABLE statement for
      InnoDB tables. See
      Table 14.6, “Summary of Online Status for DDL Operations” for the kinds of
      DDL operations that can be performed in-place, allowing concurrent
      DML, or both. Some variations require particular combinations of
      configuration settings or ALTER
      TABLE clauses.
    
      You can control the various aspects of a particular online DDL
      operation by using the LOCK and
      ALGORITHM clauses of the
      ALTER TABLE statement. These
      clauses come at the end of the statement, separated from the table
      and column specifications by commas. The LOCK
      clause is useful for fine-tuning the degree of concurrent access
      to the table. The ALGORITHM clause is primarily
      intended for performance comparisons and as a fallback to the
      older table-copying behavior in case you encounter any issues with
      existing DDL code. For example:
          To avoid accidentally making the table unavailable for reads,
          writes, or both, you could specify a clause on the
          ALTER TABLE statement such as
          LOCK=NONE (allow both reads and writes) or
          LOCK=SHARED (allow reads). The operation
          halts immediately if the requested level of concurrency is not
          available.
        
          To compare performance, you could run one statement with
          ALGORITHM=INPLACE and another with
          ALGORITHM=COPY, as an alternative to
          setting the old_alter_table
          configuration option.
        
          To avoid the chance of tying up the server by running an
          ALTER TABLE that copied the
          table, you could include ALGORITHM=INPLACE
          so the statement halts immediately if it cannot use the
          in-place mechanism. See
          Table 14.6, “Summary of Online Status for DDL Operations” for a list of
          the DDL operations that can or cannot be performed in-place.
      See Section 14.10.2, “Performance and Concurrency Considerations for Online DDL” for more
      details about the LOCK clause. For full
      examples of using online DDL, see
      Section 14.10.5, “Examples of Online DDL”.
      Before the introduction of online
      DDL, it was common practice to combine many DDL operations
      into a single ALTER TABLE
      statement. Because each ALTER TABLE
      statement involved copying and rebuilding the table, it was more
      efficient to make several changes to the same table at once, since
      those changes could all be done with a single rebuild operation
      for the table. The downside was that SQL code involving DDL
      operations was harder to maintain and to reuse in different
      scripts. If the specific changes were different each time, you
      might have to construct a new complex ALTER
      TABLE for each slightly different scenario.
    
      For DDL operations that can be done in-place, as shown in
      Table 14.6, “Summary of Online Status for DDL Operations”, now you can
      separate them into individual ALTER
      TABLE statements for easier scripting and maintenance,
      without sacrificing efficiency. For example, you might take a
      complicated statement such as:
    
alter table t1 add index i1(c1), add unique index i2(c2), change c4_old_name c4_new_name integer unsigned;
and break it down into simpler parts that can be tested and performed independently, such as:
alter table t1 add index i1(c1);
alter table t1 add unique index i2(c2);
alter table t1 change c4_old_name c4_new_name integer unsigned not null;
      You might still use multi-part ALTER
      TABLE statements for:
Operations that must be performed in a specific sequence, such as creating an index followed by a foreign key constraint that uses that index.
          Operations all using the same specific LOCK
          clause, that you want to either succeed or fail as a group.
        
Operations that cannot be performed in-place, that is, that still copy and rebuild the table.
          Operations for which you specify
          ALGORITHM=COPY or
          old_alter_table=1, to force the
          table-copying behavior if needed for precise
          backward-compatibility in specialized scenarios.
Here are code examples showing some operations whose performance, concurrency, and scalability are improved by the latest online DDL enhancements.
          Example 14.1, “Schema Setup Code for Online DDL Experiments” sets up tables named
          BIG_TABLE and
          SMALL_TABLE used in the subsequent
          examples.
        
Example 14.2, “Speed and Efficiency of CREATE INDEX and DROP INDEX” illustrates the performance aspects of creating and dropping indexes.
          Example 14.3, “Concurrent DML During CREATE INDEX and DROP INDEX” shows queries
          and DML statements running during a DROP
          INDEX operation.
        
Example 14.4, “Renaming a Column” demonstrates the speed improvement for renaming a column, and shows the care needed to keep the data type precisely the same when doing the rename operation.
Example 14.5, “Dropping Foreign Keys” demonstrates how foreign keys work with online DDL. Because two tables are involved in foreign key operations, there are extra locking considerations. Thus, tables with foreign keys sometimes have restrictions for online DDL operations.
Example 14.6, “Changing Auto-Increment Value” demonstrates how auto-increment columns work with online DDL. Tables with auto-increment columns sometimes have restrictions for online DDL operations.
Example 14.7, “Controlling Concurrency with the LOCK Clause” demonstrates the options to permit or restrict concurrent queries and DML operations while an online DDL operation is in progress. It shows the situations when the DDL statement might wait, or the concurrent transaction might wait, or the concurrent transaction might cancel a DML statement due to a deadlock error.
Example 14.8, “Creating and Dropping Multiple Indexes” demonstrates creating and dropping multiple indexes in a single statement, which can be more efficient than using a separate statement for each index operation.
Example 14.9, “Creating and Dropping the Primary Key” demonstrates how it is more efficient to define a primary key when creating the table, and relatively expensive to add one later.
Example 14.1 Schema Setup Code for Online DDL Experiments
Here is the code that sets up the initial tables used in these demonstrations:
/*
  Setup code for the online DDL demonstration:
  - Set up some config variables.
  - Create 2 tables that are clones of one of the INFORMATION_SCHEMA tables
    that always has some data. The "small" table has a couple of thousand rows.
    For the "big" table, keep doubling the data until it reaches over a million rows.
  - Set up a primary key for the sample tables, since we are demonstrating
    InnoDB aspects.
*/

set autocommit = 0;
set foreign_key_checks = 1;
set global innodb_file_per_table = 1;
set old_alter_table=0;
prompt mysql: 
use test;

\! echo "Setting up 'small' table:"
drop table if exists small_table;
create table small_table as select * from information_schema.columns;
alter table small_table add id int unsigned not null primary key auto_increment;
select count(id) from small_table;

\! echo "Setting up 'big' table:"
drop table if exists big_table;
create table big_table as select * from information_schema.columns;
show create table big_table\G
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;
alter table big_table add id int unsigned not null primary key auto_increment;
select count(id) from big_table;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Setting up 'small' table:
Query OK, 0 rows affected (0.01 sec)

Query OK, 1678 rows affected (0.13 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 1678 rows affected (0.07 sec)
Records: 1678  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|      1678 |
+-----------+
1 row in set (0.00 sec)

Setting up 'big' table:
Query OK, 0 rows affected (0.16 sec)

Query OK, 1678 rows affected (0.17 sec)
Records: 1678  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.09 sec)
Records: 1678  Duplicates: 0  Warnings: 0
Query OK, 3356 rows affected (0.07 sec)
Records: 3356  Duplicates: 0  Warnings: 0
Query OK, 6712 rows affected (0.17 sec)
Records: 6712  Duplicates: 0  Warnings: 0
Query OK, 13424 rows affected (0.44 sec)
Records: 13424  Duplicates: 0  Warnings: 0
Query OK, 26848 rows affected (0.63 sec)
Records: 26848  Duplicates: 0  Warnings: 0
Query OK, 53696 rows affected (1.72 sec)
Records: 53696  Duplicates: 0  Warnings: 0
Query OK, 107392 rows affected (3.02 sec)
Records: 107392  Duplicates: 0  Warnings: 0
Query OK, 214784 rows affected (6.28 sec)
Records: 214784  Duplicates: 0  Warnings: 0
Query OK, 429568 rows affected (13.25 sec)
Records: 429568  Duplicates: 0  Warnings: 0
Query OK, 859136 rows affected (28.16 sec)
Records: 859136  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.03 sec)
Query OK, 1718272 rows affected (1 min 9.22 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|   1718272 |
+-----------+
1 row in set (1.75 sec)
Example 14.2 Speed and Efficiency of CREATE INDEX and DROP INDEX
        Here is a sequence of statements demonstrating the relative
        speed of CREATE INDEX and
        DROP INDEX statements. For a
        small table, the elapsed time is less than a second whether we
        use the fast or slow technique, so we look at the “rows
        affected” output to verify which operations can avoid the
        table rebuild. For a large table, the difference in efficiency
        is obvious because skipping the table rebuild saves substantial
        time.
      
\! clear
\! echo "=== Create and drop index (small table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=inplace;

-- Compare against the older slower DDL.
\! echo "=== Create and drop index (small table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=copy;

-- In the above example, we examined the "rows affected" number,
-- ideally looking for a zero figure. Let's try again with a larger
-- sample size, where we'll see that the actual time taken can
-- vary significantly.

\! echo "=== Create and drop index (big table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=inplace;

\! echo "=== Create and drop index (big table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=copy;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, new/fast technique) ===

Data size (kilobytes) before index created: 
384     data/test/small_table.ibd
Query OK, 0 rows affected (0.04 sec)
Records: 0  Duplicates: 0  Warnings: 0
Data size after index created: 
432     data/test/small_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, old/slow technique) ===

Data size (kilobytes) before index created: 
432     data/test/small_table.ibd
Query OK, 1678 rows affected (0.12 sec)
Records: 1678  Duplicates: 0  Warnings: 0
Data size after index created: 
448     data/test/small_table.ibd
Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, new/fast technique) ===

Data size (kilobytes) before index created: 
315392  data/test/big_table.ibd
Query OK, 0 rows affected (33.32 sec)
Records: 0  Duplicates: 0  Warnings: 0
Data size after index created: 
335872  data/test/big_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, old/slow technique) ===

Data size (kilobytes) before index created: 
335872  data/test/big_table.ibd
Query OK, 1718272 rows affected (1 min 5.01 sec)
Records: 1718272  Duplicates: 0  Warnings: 0
Data size after index created: 
348160  data/test/big_table.ibd
Query OK, 1718272 rows affected (46.59 sec)
Records: 1718272  Duplicates: 0  Warnings: 0
Example 14.3 Concurrent DML During CREATE INDEX and DROP INDEX
        Here are some snippets of code that I ran in separate
        mysql sessions connected to the same
        database, to illustrate DML statements (insert, update, or
        delete) running at the same time as CREATE
        INDEX and DROP INDEX.
      
/*
  CREATE INDEX statement to run against a table while
  insert/update/delete statements are modifying the
  column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
create index i_concurrent on big_table(table_name);
/*
  DROP INDEX statement to run against a table while
  insert/update/delete statements are modifying the
  column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
drop index i_concurrent on big_table;
/*
  Some queries and insert/update/delete statements to run against a table
  while an index is being created or dropped. Previously, these operations
  would have stalled during the index create/drop period and possibly
  timed out or deadlocked.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

-- In our test instance, that column has about 1.7M rows, with 136 different values.
-- Sample values: COLUMNS (20480), ENGINES (6144), EVENTS (24576), FILES (38912),
-- TABLES (21504), VIEWS (10240).

set autocommit = 0;
use test;
select distinct character_set_name from big_table where table_name = 'FILES';
delete from big_table where table_name = 'FILES';
select distinct character_set_name from big_table where table_name = 'FILES';
-- I'll issue the final rollback interactively, not via script,
-- the better to control the timing.
-- rollback;
Running this code gives this output, condensed for brevity and with the most important points bolded:
mysql: source concurrent_ddl_create.sql
Database changed
Query OK, 0 rows affected (1 min 25.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_ddl_drop.sql
Database changed
Query OK, 0 rows affected (24.98 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_dml.sql
Query OK, 0 rows affected (0.00 sec)
Database changed
+--------------------+
| character_set_name |
+--------------------+
| NULL               |
| utf8               |
+--------------------+
2 rows in set (0.32 sec)

Query OK, 38912 rows affected (1.84 sec)

Empty set (0.01 sec)

mysql: rollback;
Query OK, 0 rows affected (1.05 sec)
Example 14.4 Renaming a Column
        Here is a demonstration of using ALTER
        TABLE to rename a column. We use the new, fast DDL
        mechanism (ALGORITHM=INPLACE) to change the name,
        then the old, slow DDL mechanism
        (ALGORITHM=COPY) to restore the
        original column name.
      
      
Notes:
            Because the syntax for renaming a column also involves
            re-specifying the data type, be very careful to specify
            exactly the same data type to avoid a costly table rebuild.
            In this case, we checked the output of show create
            table table_name\G and
            copied any clauses such as CHARACTER SET
            and NOT NULL from the original column
            definition.
          
Again, renaming a column for a small table is fast enough that we need to examine the “rows affected” number to verify that the new DDL mechanism is more efficient than the old one. With a big table, the difference in elapsed time makes the improvement obvious.
/*
  Run through a sequence of 'rename column' statements.
  Because this operation involves only metadata, not table data,
  it is fast for big and small tables, with new or old DDL mechanisms.
*/

\! clear

\! echo "Rename column (fast technique, small table):"
alter table small_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;

\! echo "Rename back to original name (slow technique):"
alter table small_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;

\! echo "Rename column (fast technique, big table):"
alter table big_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;

\! echo "Rename back to original name (slow technique):"
alter table big_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Rename column (fast technique, small table):
Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.13 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1678 rows affected (0.35 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Rename column (fast technique, big table):
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.11 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1718272 rows affected (1 min 0.00 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)
Example 14.5 Dropping Foreign Keys
Here is a demonstration of foreign keys, including improvement to the speed of dropping a foreign key constraint.
/*
  Demonstrate aspects of foreign keys that are or aren't affected
  by the DDL improvements.
  - Create a new table with only a few values to serve as the parent table.
  - Set up the 'small' and 'big' tables as child tables using a foreign key.
  - Verify that the ON DELETE CASCADE clause makes changes ripple from
    parent to child tables.
  - Drop the foreign key constraints, and optionally associated indexes.
    (This is the operation that is sped up.)
*/

\! clear

-- Make sure foreign keys are being enforced, and allow
-- rollback after doing some DELETEs that affect both
-- parent and child tables.
set foreign_key_checks = 1;
set autocommit = 0;

-- Create a parent table, containing values that we know are already present
-- in the child tables.
drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null, index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

show create table schema_names\G
show create table small_table\G
show create table big_table\G

-- Creating the foreign key constraint still involves a table rebuild when
-- foreign_key_checks=1, as illustrated by the "rows affected" figure.
alter table small_table add constraint small_fk foreign key i_table_schema (table_schema)
  references schema_names(schema_name) on delete cascade;
alter table big_table add constraint big_fk foreign key i_table_schema (table_schema)
  references schema_names(schema_name) on delete cascade;

show create table small_table\G
show create table big_table\G

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- schema_names is the parent table.
-- big_table is the child table.
-- (One row in the parent table can have many "children" in the child table.)
-- Changes to the parent table can ripple through to the child table.
-- For example, removing the value 'test' from schema_names.schema_name will
-- result in the removal of 20K or so rows from big_table.
delete from schema_names where schema_name = 'test';

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- Because we've turned off autocommit, we can still get back those deleted rows
-- if the DELETE was issued by mistake.
rollback;

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- All of the cross-checking between parent and child tables would be
-- deadly slow if there wasn't the requirement for the corresponding
-- columns to be indexed!

-- But we can get rid of the foreign key using a fast operation
-- that doesn't rebuild the table.
-- If we didn't specify a constraint name when setting up the foreign key, we would
-- have to find the auto-generated name such as 'big_table_ibfk_1' in the
-- output from 'show create table'.

-- For the small table, we'll drop the foreign key and the associated index.
-- Having an index on a small table is less critical.
\! echo "DROP FOREIGN KEY and INDEX from small_table:"
alter table small_table drop foreign key small_fk, drop index small_fk;

-- For the big table, we'll drop the foreign key and leave the associated index.
-- If we are still doing queries that reference the indexed column, the index is
-- very important to avoid a full table scan of the big table.
\! echo "DROP FOREIGN KEY from big_table:"
alter table big_table drop foreign key big_fk;

show create table small_table\G
show create table big_table\G
Running this code gives this output, condensed for brevity and with the most important points bolded:
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 4 rows affected (0.03 sec)
Records: 4  Duplicates: 0  Warnings: 0
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`) 
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0
Query OK, 1718272 rows affected (1 min 14.54 sec)
Records: 1718272  Duplicates: 0  Warnings: 0
*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `small_fk` (`TABLE_SCHEMA`), 
  CONSTRAINT `small_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE 
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.12 sec)
*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`), 
  CONSTRAINT `big_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE 
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.01 sec)
+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (2.10 sec)
Query OK, 1 row affected (1.52 sec)
+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
+---------+--------------------+
3 rows in set (0.00 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
+---------+--------------------+
3 rows in set (1.74 sec)
Query OK, 0 rows affected (0.60 sec)
+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)
+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (1.59 sec)
DROP FOREIGN KEY and INDEX from small_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
DROP FOREIGN KEY from big_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Example 14.6 Changing Auto-Increment Value
        Here is an illustration of increasing the
        auto-increment lower
        limit for a table column, demonstrating how this operation now
        avoids a table rebuild, plus some other fun facts about
        InnoDB auto-increment columns.
      
/*
If this script is run after foreign_key.sql, the schema_names table is
already set up. But to allow this script to run multiple times without
running into duplicate ID errors, we set up the schema_names table
all over again.
*/
\! clear
\! echo "=== Adjusting the Auto-Increment Limit for a Table ==="
\! echo
drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null, index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;
\! echo "Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output."
\! echo "Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9."
show create table schema_names\G
select * from schema_names order by id;
\! echo "Inserting even a tiny amount of data can produce gaps in the ID sequence."
insert into schema_names (schema_name) values ('eight'), ('nine');
\! echo "Bumping auto-increment lower limit to 20 (fast mechanism):"
alter table schema_names auto_increment=20, algorithm=inplace;
\! echo "Inserting 2 rows that should get IDs 20 and 21:"
insert into schema_names (schema_name) values ('foo'), ('bar');
commit;
\! echo "Bumping auto-increment lower limit to 30 (slow mechanism):"
alter table schema_names auto_increment=30, algorithm=copy;
\! echo "Inserting 2 rows that should get IDs 30 and 31:"
insert into schema_names (schema_name) values ('bletch'),('baz');
commit;
select * from schema_names order by id;
\! echo "Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32."
show create table schema_names\G
Running this code gives this output, condensed for brevity and with the most important points bolded:
=== Adjusting the Auto-Increment Limit for a Table ===
Query OK, 0 rows affected (0.01 sec)
Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0
Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output.
Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
+----+--------------------+
4 rows in set (0.00 sec)
Inserting even a tiny amount of data can produce gaps in the ID sequence.
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.00 sec)
Bumping auto-increment lower limit to 20 (fast mechanism):
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
Inserting 2 rows that should get IDs 20 and 21:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Bumping auto-increment lower limit to 30 (slow mechanism):
Query OK, 8 rows affected (0.02 sec)
Records: 8  Duplicates: 0  Warnings: 0
Inserting 2 rows that should get IDs 30 and 31:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.01 sec)
+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
|  8 | eight              |
|  9 | nine               |
| 20 | foo                |
| 21 | bar                |
| 30 | bletch             |
| 31 | baz                |
+----+--------------------+
10 rows in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Example 14.7 Controlling Concurrency with the LOCK Clause
        This example shows how to use the LOCK clause
        of the ALTER TABLE statement to
        allow or deny concurrent access to the table while an online DDL
        operation is in progress. The clause has settings that allow
        queries and DML statements
        (LOCK=NONE), just
        queries
        (LOCK=SHARED), or no concurrent access at all
        (LOCK=EXCLUSIVE).
      
        In one session, we run a succession of
        ALTER TABLE statements to create
        and drop an index, using different values for the
        LOCK clause to see what happens with waiting
        or deadlocking in either session. We are using the same
        BIG_TABLE table as in previous examples,
        starting with approximately 1.7 million rows. For illustration
        purposes, we will index and query the
        IS_NULLABLE column. (Although in real life it
        would be silly to make an index for a tiny column with only 2
        distinct values.)
      
mysql: desc big_table;
+--------------------------+---------------------+------+-----+---------+----------------+
| Field                    | Type                | Null | Key | Default | Extra          |
+--------------------------+---------------------+------+-----+---------+----------------+
| TABLE_CATALOG            | varchar(512)        | NO   |     |         |                |
| TABLE_SCHEMA             | varchar(64)         | NO   |     |         |                |
| TABLE_NAME               | varchar(64)         | NO   |     |         |                |
| COLUMN_NAME              | varchar(64)         | NO   |     |         |                |
| ORDINAL_POSITION         | bigint(21) unsigned | NO   |     | 0       |                |
| COLUMN_DEFAULT           | longtext            | YES  |     | NULL    |                |
| IS_NULLABLE              | varchar(3)          | NO   |     |         |                |
...
+--------------------------+---------------------+------+-----+---------+----------------+
21 rows in set (0.14 sec)

mysql: alter table big_table add index i1(is_nullable);
Query OK, 0 rows affected (20.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.02 sec)

mysql: alter table big_table add index i1(is_nullable), lock=exclusive;
Query OK, 0 rows affected (19.44 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.03 sec)

mysql: alter table big_table add index i1(is_nullable), lock=shared;
Query OK, 0 rows affected (16.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.05 sec)

mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (12.26 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.01 sec)

... repeat statements like the above while running queries ...
... and DML statements at the same time in another session ...
        Nothing dramatic happens in the session running the DDL
        statements. Sometimes, an ALTER
        TABLE statement takes unusually long because it is waiting
        for another transaction to finish, either one that modified the
        table during the DDL or one that queried the table before the
        DDL began:
      
mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (59.27 sec)

mysql: -- The previous ALTER took so long because it was waiting for all the concurrent
mysql: -- transactions to commit or roll back.
mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (41.05 sec)

mysql: -- Even doing a SELECT on the table in the other session first causes
mysql: -- the ALTER TABLE above to stall until the transaction
mysql: -- surrounding the SELECT is committed or rolled back.
        Here is the log from another session running concurrently, where
        we issue queries and DML statements against the table before,
        during, and after the DDL operations shown in the previous
        listings. This first listing shows queries only. We expect the
        queries to be allowed during DDL operations using
        LOCK=NONE or LOCK=SHARED,
        and for the query to wait until the DDL is finished if the
        ALTER TABLE statement includes
        LOCK=EXCLUSIVE.
      
mysql: show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
1 row in set (0.01 sec)

mysql: -- A trial query before any ADD INDEX in the other session:
mysql: -- Note: because autocommit is enabled, each
mysql: -- transaction finishes immediately after the query.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (4.49 sec)

mysql: -- Index is being created with LOCK=EXCLUSIVE on the ALTER statement.
mysql: -- The query waits until the DDL is finished before proceeding.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (17.26 sec)

mysql: -- Index is being created with LOCK=SHARED on the ALTER statement.
mysql: -- The query returns its results while the DDL is in progress.
mysql: -- The same thing happens with LOCK=NONE on the ALTER statement.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (3.11 sec)

mysql: -- Once the index is created, and with no DDL in progress,
mysql: -- queries referencing the indexed column are very fast:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.20 sec)

mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (0.00 sec)
        Now in this concurrent session, we run some transactions
        including DML statements, or a combination of DML statements and
        queries. We use DELETE statements
        to illustrate predictable, verifiable changes to the table.
        Because the transactions in this part can span multiple
        statements, we run these tests with
        autocommit turned off.
      
mysql: set global autocommit = off;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Count the rows that will be involved in our DELETE statements:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.95 sec)

mysql: -- After this point, any DDL statements back in the other session
mysql: -- stall until we commit or roll back.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.14 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.04 sec)

mysql: rollback;
Query OK, 0 rows affected (0.09 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.93 sec)

mysql: -- OK, now we're going to try that during index creation with LOCK=NONE.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.21 sec)

mysql: -- We expect that now there will be 400000 'YES' rows left:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.25 sec)

mysql: -- In the other session, the ALTER TABLE is waiting before finishing,
mysql: -- because _this_ transaction hasn't committed or rolled back yet.
mysql: rollback;
Query OK, 0 rows affected (0.11 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.19 sec)

mysql: -- The ROLLBACK left the table in the same state we originally found it.
mysql: -- Now let's make a permanent change while the index is being created,
mysql: -- again with ALTER TABLE ... , LOCK=NONE.
mysql: -- First, commit so the DROP INDEX in the other shell can finish;
mysql: -- the previous SELECT started a transaction that accessed the table.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now we add the index back in the other shell, then issue DML in this one
mysql: -- while the DDL is running.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.23 sec)

mysql: commit;
Query OK, 0 rows affected (0.01 sec)

mysql: -- In the other shell, the ADD INDEX has finished.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: -- At the point the new index is finished being created, it contains entries
mysql: -- only for the 400000 'YES' rows left when all concurrent transactions are finished.
mysql:
mysql: -- Now we will run a similar test, while ALTER TABLE ... , LOCK=SHARED is running.
mysql: -- We expect a query to complete during the ALTER TABLE, but for the DELETE
mysql: -- to run into some kind of issue.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- As expected, the query returns results while the LOCK=SHARED DDL is running:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (2.07 sec)

mysql: -- The DDL in the other session is not going to finish until this transaction
mysql: -- is committed or rolled back. If we tried a DELETE now and it waited because
mysql: -- of LOCK=SHARED on the DDL, both transactions would wait forever (deadlock).
mysql: -- MySQL detects this condition and cancels the attempted DML statement.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
mysql: -- The transaction here is still going, so in the other shell, the ADD INDEX operation
mysql: -- is waiting for this transaction to commit or roll back.
mysql: rollback;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now let's try issuing a query and some DML, on one line, while running
mysql: -- ALTER TABLE ... , LOCK=EXCLUSIVE in the other shell.
mysql: -- Notice how even the query is held up until the DDL is finished.
mysql: -- By the time the DELETE is issued, there is no conflicting access
mysql: -- to the table and we avoid the deadlock error.
mysql: select count(*) from big_table where is_nullable = 'YES'; delete from big_table where is_nullable = 'YES' limit 100000;
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (15.98 sec)

Query OK, 100000 rows affected (2.81 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.17 sec)

mysql: rollback;
Query OK, 0 rows affected (1.36 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Next, we try ALTER TABLE ... , LOCK=EXCLUSIVE in the other session
mysql: -- and only issue DML, not any query, in the concurrent transaction here.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
Query OK, 100000 rows affected (16.37 sec)

mysql: -- That was OK because the ALTER TABLE did not have to wait for the transaction
mysql: -- here to complete. The DELETE in this session waited until the index was ready.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.16 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)
In the preceding example listings, we learned that:
            The LOCK clause for
            ALTER TABLE is set off from
            the rest of the statement by a comma.
          
Online DDL operations might wait before starting, until any prior transactions that access the table are committed or rolled back.
Online DDL operations might wait before completing, until any concurrent transactions that access the table are committed or rolled back.
            While an online DDL operation is running, concurrent queries
            are relatively straightforward, as long as the
            ALTER TABLE statement uses
            LOCK=NONE or
            LOCK=SHARED.
          
            Pay attention to whether
            autocommit is turned on or
            off. If it is turned off, be careful to end transactions in
            other sessions (even just queries) before performing DDL
            operations on the table.
          
            With LOCK=SHARED, concurrent transactions
            that mix queries and DML could encounter deadlock errors and
            have to be restarted after the DDL is finished.
          
            With LOCK=NONE, concurrent transactions
            can freely mix queries and DML. The DDL operation waits
            until the concurrent transactions are committed or rolled
            back.
          
            With LOCK=EXCLUSIVE, concurrent
            transactions can freely mix queries and DML, but those
            transactions wait until the DDL operation is finished before
            they can access the table.
Example 14.8 Creating and Dropping Multiple Indexes
        You can create multiple indexes on a table with one
        ALTER TABLE statement. This is
        relatively efficient, because the clustered index of the table
        needs to be scanned only once (although the data is sorted
        separately for each new index). For example:
      
CREATE TABLE T1(A INT PRIMARY KEY, B INT, C CHAR(1)) ENGINE=InnoDB;
INSERT INTO T1 VALUES (1,2,'a'), (2,3,'b'), (3,2,'c'), (4,3,'d'), (5,2,'e');
COMMIT;
ALTER TABLE T1 ADD INDEX (B), ADD UNIQUE INDEX (C);
        The above statements create table T1 with the
        primary key on column A, insert several rows,
        then build two new indexes on columns B and
        C. If there were many rows inserted into
        T1 before the ALTER
        TABLE statement, this approach is much more efficient
        than creating all the secondary indexes before loading the data.
      
        Because dropping InnoDB secondary indexes also does not require
        any copying of table data, it is equally efficient to drop
        multiple indexes with a single ALTER
        TABLE statement or multiple DROP
        INDEX statements:
      
ALTER TABLE T1 DROP INDEX B, DROP INDEX C;
or:
DROP INDEX B ON T1;
DROP INDEX C ON T1;
Example 14.9 Creating and Dropping the Primary Key
        Restructuring the clustered
        index for an InnoDB table always
        requires copying the table data. Thus, it is best to define the
        primary key when you
        create a table, rather than issuing ALTER TABLE ... ADD
        PRIMARY KEY later, to avoid rebuilding the table.
      
        Defining a PRIMARY KEY later causes the data
        to be copied, as in the following example:
      
CREATE TABLE T2 (A INT, B INT);
INSERT INTO T2 VALUES (NULL, 1);
ALTER TABLE T2 ADD PRIMARY KEY (B);
        When you create a UNIQUE or PRIMARY
        KEY index, MySQL must do some extra work. For
        UNIQUE indexes, MySQL checks that the table
        contains no duplicate values for the key. For a PRIMARY
        KEY index, MySQL also checks that none of the
        PRIMARY KEY columns contains a
        NULL.
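        As a minimal sketch of that duplicate check (the table name
        dup_demo and its contents are hypothetical),
        creating a UNIQUE index fails with a
        duplicate-key error when the column already contains repeated
        values:
      
CREATE TABLE dup_demo (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
INSERT INTO dup_demo VALUES (1, 10), (2, 10);

-- The repeated value 10 in column b makes this statement fail with a
-- duplicate-key error; the table definition is left unchanged.
ALTER TABLE dup_demo ADD UNIQUE INDEX u_b (b);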
      
        
        When you add a primary key using the
        ALGORITHM=COPY clause, MySQL actually
        converts NULL values in the associated
        columns to default values: 0 for numbers, the empty string for
        character-based columns and BLOBs, and 0000-00-00 00:00:00 for
        DATETIME. This is a non-standard behavior
        that Oracle recommends you not rely on. Adding a primary key
        using ALGORITHM=INPLACE is only allowed when
        the SQL_MODE setting includes
        the strict_trans_tables or
        strict_all_tables flags; when the
        SQL_MODE setting is strict, ADD
        PRIMARY KEY ... , ALGORITHM=INPLACE is allowed, but
        the statement can still fail if the requested primary key
        columns contain any NULL values. The
        ALGORITHM=INPLACE behavior is more
        standard-compliant.
      
        The following examples show the different possibilities for the
        ADD PRIMARY KEY clause. With the
        ALGORITHM=COPY clause, the operation succeeds
        despite the presence of NULL values in the
        primary key columns; the data is silently changed, which could
        cause problems.
      
mysql> CREATE TABLE add_pk_via_copy (c1 INT, c2 VARCHAR(10), c3 DATETIME);
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO add_pk_via_copy VALUES (1,'a','2014-11-03 11:01:37'),(NULL,NULL,NULL);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
mysql> SET sql_mode = '';
Query OK, 0 rows affected (0.00 sec)
mysql> ALTER TABLE add_pk_via_copy ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=COPY;
Query OK, 2 rows affected, 3 warnings (0.07 sec)
Records: 2  Duplicates: 0  Warnings: 3
mysql> SHOW WARNINGS;
+---------+------+-----------------------------------------+
| Level   | Code | Message                                 |
+---------+------+-----------------------------------------+
| Warning | 1265 | Data truncated for column 'c1' at row 2 |
| Warning | 1265 | Data truncated for column 'c2' at row 2 |
| Warning | 1265 | Data truncated for column 'c3' at row 2 |
+---------+------+-----------------------------------------+
3 rows in set (0.00 sec)
mysql> SELECT * FROM add_pk_via_copy;
+----+----+---------------------+
| c1 | c2 | c3                  |
+----+----+---------------------+
|  0 |    | 0000-00-00 00:00:00 |
|  1 | a  | 2014-11-03 11:01:37 |
+----+----+---------------------+
2 rows in set (0.00 sec)
        
      
        With the ALGORITHM=INPLACE clause, the
        operation could fail for different reasons, because this setting
        considers data integrity a high priority: the statement gives an
        error if the SQL_MODE setting
        is not “strict” enough, or if the primary key
        columns contain any NULL values. Once we
        address both of those requirements, the
        ALTER TABLE operation succeeds.
      
mysql> CREATE TABLE add_pk_via_inplace (c1 INT, c2 VARCHAR(10), c3 DATETIME);
Query OK, 0 rows affected (0.02 sec)

mysql> INSERT INTO add_pk_via_inplace VALUES (1,'a','2014-11-03 11:01:37'),(NULL,NULL,NULL);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM add_pk_via_inplace;
+------+------+---------------------+
| c1   | c2   | c3                  |
+------+------+---------------------+
|    1 | a    | 2014-11-03 11:01:37 |
| NULL | NULL | NULL                |
+------+------+---------------------+
2 rows in set (0.00 sec)

mysql> SET sql_mode = '';
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: cannot silently convert NULL values, as required in this SQL_MODE. Try ALGORITHM=COPY.

mysql> SET sql_mode ='strict_trans_tables';
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
ERROR 1138 (22004): Invalid use of NULL value

mysql> DELETE FROM add_pk_via_inplace WHERE c1 IS NULL OR c2 IS NULL OR c3 IS NULL;
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM add_pk_via_inplace;
+------+------+---------------------+
| c1   | c2   | c3                  |
+------+------+---------------------+
|    1 | a    | 2014-11-03 11:01:37 |
+------+------+---------------------+
1 row in set (0.00 sec)

mysql> ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
Query OK, 0 rows affected (0.09 sec)
Records: 0  Duplicates: 0  Warnings: 0
        If you create a table without a primary key, InnoDB chooses one
        for you, which can be the first UNIQUE key
        defined on NOT NULL columns, or a
        system-generated key. To avoid any uncertainty and the potential
        space requirement for an extra hidden column, specify the
        PRIMARY KEY clause as part of the
        CREATE TABLE statement.
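        For example, a definition along the following lines (the table and
        column names are illustrative only) specifies the primary key
        explicitly, so InnoDB never has to fall back to a hidden key:

CREATE TABLE t_with_pk (
  id  INT NOT NULL,
  val VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=InnoDB;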
      Each ALTER TABLE operation for an
      InnoDB table is governed by several aspects:
Whether there is any change to the physical representation of the table, or whether it is purely a change to metadata that can be done without touching the table itself.
Whether the volume of data in the table stays the same, increases, or decreases.
Whether a change in table data involves the clustered index, secondary indexes, or both.
          Whether there are any foreign
          key relationships between the table being altered and
          some other table. The mechanics differ depending on whether
          the foreign_key_checks
          configuration option is enabled or disabled.
        
          Whether the table is partitioned. Partitioning clauses of
          ALTER TABLE are turned into
          low-level operations involving one or more tables, and those
          operations follow the regular rules for online DDL.
        
Whether the table data must be copied, whether the table can be reorganized “in-place”, or a combination of both.
Whether the table contains any auto-increment columns.
          What degree of locking is
          required, either by the nature of the underlying database
          operations, or a LOCK clause that you
          specify in the ALTER TABLE
          statement.
      This section explains how these factors affect the different kinds
      of ALTER TABLE operations on
      InnoDB tables.
Here are the primary reasons why an online DDL operation could fail:
          If a LOCK clause specifies a low degree of
          locking (SHARED or NONE)
          that is not compatible with the particular type of DDL
          operation.
        
If a timeout occurs while waiting to get an exclusive lock on the table, which is needed briefly during the initial and final phases of the DDL operation.
          If the tmpdir file system
          runs out of disk space, while MySQL writes temporary sort
          files on disk during index creation.
        
          If the ALTER TABLE takes so
          long, and concurrent DML modifies the table so much, that the
          size of the temporary online log exceeds the value of the
          innodb_online_alter_log_max_size
          configuration option. This condition causes a
          DB_ONLINE_LOG_TOO_BIG error.
        
          If concurrent DML makes changes to the table that are allowed
          with the original table definition, but not with the new one.
          The operation only fails at the very end, when MySQL tries to
          apply all the changes from concurrent DML statements. For
          example, you might insert duplicate values into a column while
          a unique index is being created, or you might insert
          NULL values into a column while creating a
          primary key index on
          that column. The changes made by the concurrent DML take
          precedence, and the ALTER TABLE
          operation is effectively rolled
          back.
      Although the configuration option
      innodb_file_per_table has a
      dramatic effect on the representation for an
      InnoDB table, all online DDL operations work
      equally well whether that option is enabled or disabled, and
      whether the table is physically located in its own
      .ibd file or inside the
      system tablespace.
    
InnoDB has two types of indexes: the clustered index representing all the data in the table, and optional secondary indexes to speed up queries. Since the clustered index contains the data values in its B-tree nodes, adding or dropping a clustered index does involve copying the data, and creating a new copy of the table. A secondary index, however, contains only the index key and the value of the primary key. This type of index can be created or dropped without copying the data in the clustered index. Because each secondary index contains copies of the primary key values (used to access the clustered index when needed), when you change the definition of the primary key, all secondary indexes are recreated as well.
Dropping a secondary index is simple. Only the internal InnoDB system tables and the MySQL data dictionary tables are updated to reflect the fact that the index no longer exists. InnoDB returns the storage used for the index to the tablespace that contained it, so that new indexes or additional table rows can use the space.
To add a secondary index to an existing table, InnoDB scans the table, and sorts the rows using memory buffers and temporary files in order by the values of the secondary index key columns. The B-tree is then built in key-value order, which is more efficient than inserting rows into an index in random order. Because the B-tree nodes are split when they fill, building the index in this way results in a higher fill-factor for the index, making it more efficient for subsequent access.
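For example, either of the following statements (reusing the add_pk_via_copy table from the earlier example) adds a secondary index without rebuilding the clustered index; the ALGORITHM and LOCK clauses are optional and only make the expected in-place, non-locking behavior explicit:

mysql> CREATE INDEX i_c2 ON add_pk_via_copy (c2);
mysql> ALTER TABLE add_pk_via_copy ADD INDEX i_c2 (c2), ALGORITHM=INPLACE, LOCK=NONE;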
      Historically, the MySQL server and InnoDB have
      each kept their own metadata about table and index structures. The
      MySQL server stores this information in
      .frm files that are not
      protected by a transactional mechanism, while
      InnoDB has its own
      data dictionary as
      part of the system
      tablespace. If a DDL operation was interrupted by a crash
      or other unexpected event partway through, the metadata could be
      left inconsistent between these two locations, causing problems
      such as startup errors or inability to access the table that was
      being altered. Now that InnoDB is the default
      storage engine, addressing such issues is a high priority. These
      enhancements to DDL operations reduce the window of opportunity
      for such issues to occur.
      Although no data is lost if the server crashes while an
      ALTER TABLE statement is executing,
      the crash recovery
      process is different for
      clustered indexes and
      secondary indexes.
    
      If the server crashes while creating an InnoDB secondary index,
      upon recovery, MySQL drops any partially created indexes. You must
      re-run the ALTER TABLE or
      CREATE INDEX statement.
    
When a crash occurs during the creation of an InnoDB clustered index, recovery is more complicated, because the data in the table must be copied to an entirely new clustered index. Remember that all InnoDB tables are stored as clustered indexes. In the following discussion, we use the terms table and clustered index interchangeably.
MySQL creates the new clustered index by copying the existing data from the original InnoDB table to a temporary table that has the desired index structure. Once the data is completely copied to this temporary table, the original table is renamed with a different temporary table name. The temporary table comprising the new clustered index is renamed with the name of the original table, and the original table is dropped from the database.
If a system crash occurs while creating a new clustered index, no data is lost, but you must complete the recovery process using the temporary tables that exist during the process. Since it is rare to re-create a clustered index or re-define primary keys on large tables, or to encounter a system crash during this operation, this manual does not provide information on recovering from this scenario.
      With the exception of ALTER TABLE
      partitioning clauses, online DDL operations for partitioned
      InnoDB tables follow the same rules that apply
      to regular InnoDB tables. Online DDL rules are
      outlined in Table 14.6, “Summary of Online Status for DDL Operations”.
    
      ALTER TABLE partitioning clauses do
      not go through the same internal online DDL API as regular
      non-partitioned InnoDB tables, and are only
      allowed in conjunction with ALGORITHM=DEFAULT
      and LOCK=DEFAULT.
    
      If you use an ALTER TABLE partitioning clause in an
      ALTER TABLE statement, the
      partitioned table will be “re-partitioned” using the
      ALTER TABLE COPY
      algorithm. In other words, a new partitioned table is created with
      the new partitioning scheme. The newly created table will include
      any changes applied by the ALTER
      TABLE statement and the table data will be copied into
      the new table structure.
    
      If you do not change the table's partitioning using
      ALTER TABLE partitioning clauses or
      perform any other partition management in your
      ALTER TABLE statement,
      ALTER TABLE will use the
      INPLACE algorithm on each table partition. Be
      aware, however, that when INPLACE
      ALTER TABLE operations are
      performed on each partition, there will be increased demand on
      system resources due to operations being performed on multiple
      partitions.
    
      Even though partitioning clauses of the ALTER
      TABLE statement do not go through the same internal
      online DDL API as regular non-partitioned
      InnoDB tables, MySQL still attempts to minimize
      data copying and locking where possible:
          ADD PARTITION and DROP
          PARTITION for tables partitioned by
          RANGE or LIST do not
          copy any existing data.
        
          TRUNCATE PARTITION does not copy any
          existing data, for all types of partitioned tables.
        
          Concurrent queries are allowed during ADD
          PARTITION and COALESCE PARTITION
          for tables partitioned by HASH or
          LIST. MySQL copies the data while holding a
          shared lock.
        
          For REORGANIZE PARTITION, REBUILD
          PARTITION, or ADD PARTITION or
          COALESCE PARTITION for a table partitioned
          by LINEAR HASH or LIST,
          concurrent queries are allowed. Data from the affected
          partitions is copied while holding a shared metadata (read)
          lock at the table level.
        Full-text search (FTS) and foreign keys are not supported by
        InnoDB partitioned tables. For more
        information, see Section 12.9.5, “Full-Text Restrictions” and
        Section 19.6.2, “Partitioning Limitations Relating to Storage Engines”.
Take the following limitations into account when running online DDL operations:
          During an online DDL operation that copies the table, files
          are written to the temporary directory
          ($TMPDIR on Unix, %TEMP%
          on Windows, or the directory specified by the
          --tmpdir configuration
          variable). Each temporary file is large enough to hold one
          column in the new table or index, and each one is removed as
          soon as it is merged into the final table or index.
        
          The table is copied, rather than using Fast Index Creation
          when you create an index on a TEMPORARY
          TABLE. This has been reported as MySQL Bug #39833.
        
          InnoDB handles error cases when users attempt to drop indexes
          needed for foreign keys. See
          Section 14.18.5, “InnoDB Error Codes” for information related
          to error 1553.
        
          The ALTER TABLE clause
          LOCK=NONE is not allowed if there are
          ON...CASCADE or ON...SET
          NULL constraints on the table.
        
          During each online DDL ALTER
          TABLE statement, regardless of the
          LOCK clause, there are brief periods at the
          beginning and end requiring an
          exclusive lock on
          the table (the same kind of lock specified by the
          LOCK=EXCLUSIVE clause). Thus, an online DDL
          operation might wait before starting if there is a
          long-running transaction performing inserts, updates, deletes,
          or SELECT ... FOR UPDATE on that table; and
          an online DDL operation might wait before finishing if a
          similar long-running transaction was started while the
          ALTER TABLE was in progress.
        
          When running an online ALTER
          TABLE operation, the thread that runs the
          ALTER TABLE operation will
          apply an “online log” of DML operations that were
          run concurrently on the same table from other connection
          threads. When the DML operations are applied, it is possible
          to encounter a duplicate key entry error (ERROR
          1062 (23000): Duplicate entry), even if the
          duplicate entry is only temporary and would be reverted by a
          later entry in the “online log”. This is similar
          to the idea of a foreign key constraint check in
          InnoDB in which constraints must hold
          during a transaction.
        
          OPTIMIZE TABLE for an
          InnoDB table is mapped to an
          ALTER TABLE operation to
          rebuild the table and update index statistics and free unused
          space in the clustered index. Prior to 5.6.17, there is no
          online DDL support
          for this operation. Secondary indexes are not created as
          efficiently because keys are inserted in the order they
          appeared in the primary key. As of 5.6.17,
          OPTIMIZE TABLE is supported
          with the addition of online
          DDL support for rebuilding regular and partitioned
          InnoDB tables. For additional information,
          see Section 14.10.1, “Overview of Online DDL”.
        
          InnoDB tables created before MySQL 5.6 do
          not support ALTER
          TABLE ... ALGORITHM=INPLACE for tables that include
          temporal columns (DATE,
          DATETIME or
          TIMESTAMP) and have not been
          rebuilt using
          ALTER TABLE ...
          ALGORITHM=COPY. In this case, an
          ALTER TABLE ...
          ALGORITHM=INPLACE operation returns the following
          error:
        
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.
          System variables that are true or false can be enabled at
          server startup by naming them, or disabled by using a
          --skip- prefix. For example, to enable or
          disable the InnoDB adaptive hash index, you
          can use
          --innodb_adaptive_hash_index or
          --skip-innodb_adaptive_hash_index
          on the command line, or
          innodb_adaptive_hash_index or
          skip-innodb_adaptive_hash_index in an
          option file.
        
          System variables that take a numeric value can be specified as
          --var_name=value
          on the command line or as
          var_name=value in option files.
Many system variables can be changed at runtime (see Section 5.1.5.2, “Dynamic System Variables”).
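For example (the values shown are arbitrary), a boolean and a numeric InnoDB variable might be set on the command line, in an option file, or, for a dynamic variable, at runtime:

shell> mysqld --skip-innodb_adaptive_hash_index --innodb_autoextend_increment=32

[mysqld]
skip-innodb_adaptive_hash_index
innodb_autoextend_increment=32

mysql> SET GLOBAL innodb_autoextend_increment=32;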
          For information about GLOBAL and
          SESSION variable scope modifiers, refer to
          the
          SET
          statement documentation.
        
          Certain options control the locations and layout of the
          InnoDB data files.
          Section 14.3, “InnoDB Configuration” explains how to use
          these options.
        
          Some options, which you might not use initially, help tune
          InnoDB performance characteristics based on
          machine capacity and your database
          workload.
        
For more information on specifying options and system variables, see Section 4.2.3, “Specifying Program Options”.
Table 14.7 InnoDB Option/Variable Reference
| Deprecated | 5.5.22 | ||
| Command-Line Format | --ignore-builtin-innodb | ||
| System Variable | Name | ignore_builtin_innodb | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
          In MySQL 5.1, this option caused the server to behave as if
          the built-in InnoDB were not present, which
          enabled InnoDB Plugin to be used instead.
          In MySQL 5.6, InnoDB is the
          default storage engine and InnoDB Plugin is
          not used, so this option has no effect. As of MySQL 5.6.5, it
          is ignored.
        
| Deprecated | 5.6.21 | ||
| Command-Line Format | --innodb[=value] | ||
| Permitted Values | Type | enumeration | |
| Default | ON | ||
| Valid Values | OFF | ||
| ON | |||
| FORCE | |||
          Controls loading of the InnoDB storage
          engine, if the server was compiled with
          InnoDB support. This option has a tristate
          format, with possible values of OFF,
          ON, or FORCE. See
          Section 5.1.8.1, “Installing and Uninstalling Plugins”.
        
          To disable InnoDB, use
          --innodb=OFF
          or
          --skip-innodb.
          In this case, because the default storage engine is
          InnoDB, the server will not start
          unless you also use
          --default-storage-engine and
          --default-tmp-storage-engine to
          set the default to some other engine for both permanent and
          TEMPORARY tables.
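          For example, the server might be started without
          InnoDB as follows; MyISAM is used here only as a
          placeholder for whichever other engine you choose:

shell> mysqld --innodb=OFF --default-storage-engine=MyISAM --default-tmp-storage-engine=MyISAM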
        
          As of MySQL 5.6.21,
          --innodb=OFF
          and
          --skip-innodb
          options are deprecated and their use results in a warning.
          These options will be removed in a future MySQL release.
        
| Command-Line Format | --innodb-status-file | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Controls whether InnoDB creates a file
          named innodb_status.pid
          in the MySQL data directory. If enabled,
          InnoDB periodically writes the output of
          SHOW ENGINE
          INNODB STATUS to this file.
        
          By default, the file is not created. To create it, start
          mysqld with the
          --innodb-status-file=1 option. The file is
          deleted during normal shutdown.
        
          Disable the InnoDB storage engine. See the
          description of --innodb.
          
          daemon_memcached_enable_binlog
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_enable_binlog=# | ||
| System Variable | Name | daemon_memcached_enable_binlog | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | false | ||
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option.
          
          daemon_memcached_engine_lib_name
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_engine_lib_name=library | ||
| System Variable | Name | daemon_memcached_engine_lib_name | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | file name | |
| Default | innodb_engine.so | ||
          Specifies the shared library that implements the
          InnoDB memcached plugin.
        
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option.
          
          daemon_memcached_engine_lib_path
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_engine_lib_path=directory | ||
| System Variable | Name | daemon_memcached_engine_lib_path | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | directory name | |
| Default | NULL | ||
          The path of the directory containing the shared library that
          implements the InnoDB
          memcached plugin. The default value is
          NULL, representing the MySQL plugin directory. You should not
          need to modify this parameter unless specifying a different
          storage engine memcached plugin that is
          located outside of the MySQL plugin directory.
        
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option.
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_option=options | ||
| System Variable | Name | daemon_memcached_option | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | string | |
| Default |  | ||
Used to pass space-separated memcached options to the underlying memcached memory object caching daemon on startup. For example, you might change the port that memcached listens on, reduce the maximum number of simultaneous connections, change the maximum memory size for a key/value pair, or enable debugging messages for the error log.
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option. For information about memcached options, refer to the memcached man page.
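For example, an option file entry such as the following changes the port that memcached listens on and lowers the connection limit; the -p and -c flags are standard memcached options, but check the memcached man page for the flags supported by your build:

[mysqld]
daemon_memcached_option="-p11222 -c512"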
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_r_batch_size=# | ||
| System Variable | Name | daemon_memcached_r_batch_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
          Specifies how many memcached read
          operations (get) to perform before doing a
          COMMIT to start a new
          transaction. Counterpart of
          daemon_memcached_w_batch_size.
        
This value is set to 1 by default, so that any changes made to the table through SQL statements are immediately visible to the memcached operations. You might increase it to reduce the overhead from frequent commits on a system where the underlying table is only being accessed through the memcached interface. If you set the value too large, the amount of undo or redo data could impose some storage overhead, as with any long-running transaction.
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option.
| Introduced | 5.6.6 | ||
| Command-Line Format | --daemon_memcached_w_batch_size=# | ||
| System Variable | Name | daemon_memcached_w_batch_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
          Specifies how many memcached write
          operations, such as add,
          set, or incr, to perform
          before doing a COMMIT to start
          a new transaction. Counterpart of
          daemon_memcached_r_batch_size.
        
          This value is set to 1 by default, on the assumption that any
          data being stored is important to preserve in case of an
          outage and should immediately be committed. When storing
          non-critical data, you might increase this value to reduce the
          overhead from frequent commits; but then the last
          N-1 uncommitted write operations
          could be lost in case of a crash.
        
See Section 14.17, “InnoDB Integration with memcached” for usage details for this option.
| Deprecated | 5.5.22 | ||
| Command-Line Format | --ignore-builtin-innodb | ||
| System Variable | Name | ignore_builtin_innodb | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
          See the description of
          --ignore-builtin-innodb under
          “InnoDB Command Options”
          earlier in this section.
        
| Command-Line Format | --innodb_adaptive_flushing=# | ||
| System Variable | Name | innodb_adaptive_flushing | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Specifies whether to dynamically adjust the rate of flushing
          dirty pages in the
          InnoDB
          buffer pool based on
          the workload. Adjusting the flush rate dynamically is intended
          to avoid bursts of I/O activity. This setting is enabled by
          default. See
          Section 14.3.3.2, “Configuring the Rate of InnoDB Buffer Pool Flushing” for
          more information. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_adaptive_flushing_lwm=# | ||
| System Variable | Name | innodb_adaptive_flushing_lwm | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 10 | ||
| Min Value | 0 | ||
| Max Value | 70 | ||
Low water mark representing percentage of redo log capacity at which adaptive flushing is enabled.
| Command-Line Format | --innodb_adaptive_hash_index=# | ||
| System Variable | Name | innodb_adaptive_hash_index | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Whether the InnoDB
          adaptive hash
          index is enabled or disabled. It may be desirable,
          depending on your workload, to dynamically enable or disable
          adaptive hash
          indexing to improve query performance. Because the
          adaptive hash index may not be useful for all workloads,
          conduct benchmarks with it both enabled and disabled, using
          realistic workloads. See
          Section 14.2.6.6, “Adaptive Hash Indexes” for details.
        
          This variable is enabled by default. You can modify this
          parameter using the SET GLOBAL statement,
          without restarting the server. Changing the setting requires
          the SUPER privilege. You can also use
          --skip-innodb_adaptive_hash_index at server
          startup to disable it.
        
Disabling the adaptive hash index empties the hash table immediately. Normal operations can continue while the hash table is emptied, and executing queries that were using the hash table access the index B-trees directly instead. When the adaptive hash index is re-enabled, the hash table is populated again during normal operation.
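For example, with the SUPER privilege, the feature can be toggled at runtime:

mysql> SET GLOBAL innodb_adaptive_hash_index=OFF;
mysql> SET GLOBAL innodb_adaptive_hash_index=ON;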
          
          innodb_adaptive_max_sleep_delay
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_adaptive_max_sleep_delay=# | ||
| System Variable | Name | innodb_adaptive_max_sleep_delay | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 150000 | ||
| Min Value | 0 | ||
| Max Value | 1000000 | ||
          Allows InnoDB to automatically adjust the
          value of
          innodb_thread_sleep_delay up
          or down according to the current workload. Any non-zero value
          enables automated, dynamic adjustment of the
          innodb_thread_sleep_delay value, up to the
          maximum value specified in the
          innodb_adaptive_max_sleep_delay option. The
          value represents the number of microseconds. This option can
          be useful in busy systems, with greater than 16
          InnoDB threads. (In practice, it is most
          valuable for MySQL systems with hundreds or thousands of
          simultaneous connections.)
        
For more information, see Section 14.3.6, “Configuring Thread Concurrency for InnoDB”.
          
          
          innodb_additional_mem_pool_size
| Deprecated | 5.6.3 | ||
| Command-Line Format | --innodb_additional_mem_pool_size=# | ||
| System Variable | Name | innodb_additional_mem_pool_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 8388608 | ||
| Min Value | 2097152 | ||
| Max Value | 4294967295 | ||
          The size in bytes of a memory pool InnoDB
          uses to store data
          dictionary information and other internal data
          structures. The more tables you have in your application, the
          more memory you allocate here. If InnoDB
          runs out of memory in this pool, it starts to allocate memory
          from the operating system and writes warning messages to the
          MySQL error log. The default value is 8MB.
        
          This variable relates to the InnoDB
          internal memory allocator, which is unused if
          innodb_use_sys_malloc is
          enabled. As of MySQL 5.6.3,
          innodb_additional_mem_pool_size is
          deprecated and will be removed in a future MySQL release.
        
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_api_bk_commit_interval=# | ||
| System Variable | Name | innodb_api_bk_commit_interval | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 5 | ||
| Min Value | 1 | ||
| Max Value | 1073741824  | ||
          How often to auto-commit idle connections that use the
          InnoDB memcached
          interface, in seconds. See Section 14.17, “InnoDB Integration with memcached”
          for usage details for this option.
        
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_api_disable_rowlock=# | ||
| System Variable | Name | innodb_api_disable_rowlock | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Use this variable to disable row locks when
          InnoDB memcached
          performs DML operations. By default,
          innodb_api_disable_rowlock is set to
          OFF which means that
          memcached requests row locks for get and
          set operations. When
          innodb_api_disable_rowlock is set to
          ON, memcached requests a
          table lock instead of row locks.
        
          The innodb_api_disable_rowlock option is
          not dynamic. It must be specified on the
          mysqld command line or entered in the MySQL
          configuration file. Configuration takes effect when the plugin
          is installed, which you do each time the MySQL server is
          started.
        
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_api_enable_binlog=# | ||
| System Variable | Name | innodb_api_enable_binlog | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Lets you use the InnoDB
          memcached plugin with the MySQL
          binary log. See
          Section 14.17, “InnoDB Integration with memcached” for usage details for this
          option.
        
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_api_enable_mdl=# | ||
| System Variable | Name | innodb_api_enable_mdl | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Locks the table used by the InnoDB
          memcached plugin, so that it cannot be
          dropped or altered by DDL
          through the SQL interface. See
          Section 14.17, “InnoDB Integration with memcached” for usage details for this
          option.
        
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_api_trx_level=# | ||
| System Variable | Name | innodb_api_trx_level | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
Lets you control the transaction isolation level on queries processed by the memcached interface. See Section 14.17, “InnoDB Integration with memcached” for usage details for this option. The constants corresponding to the familiar names are:
              0 = READ UNCOMMITTED
            
              1 = READ COMMITTED
            
              2 = REPEATABLE READ
            
              3 = SERIALIZABLE
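            For example, to make the memcached interface use
            REPEATABLE READ, based on the mapping in the preceding list:

mysql> SET GLOBAL innodb_api_trx_level=2;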
| Command-Line Format | --innodb_autoextend_increment=# | ||
| System Variable | Name | innodb_autoextend_increment | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | integer | |
| Default | 8 | ||
| Min Value | 1 | ||
| Max Value | 1000 | ||
| Permitted Values (>= 5.6.6) | Type | integer | |
| Default | 64 | ||
| Min Value | 1 | ||
| Max Value | 1000 | ||
          The increment size (in MB) for extending the size of an
          auto-extend InnoDB
          system
          tablespace file when it becomes full. The default value
          is 64 as of MySQL 5.6.6, 8 before that. This variable does not
          affect the per-table tablespace files that are created if you
          use innodb_file_per_table=1.
          Those files are auto-extending regardless of the value of
          innodb_autoextend_increment.
          The initial extensions are by small amounts, after which
          extensions occur in increments of 4MB.
        
| Command-Line Format | --innodb_autoinc_lock_mode=# | ||
| System Variable | Name | innodb_autoinc_lock_mode | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
| Valid Values | 0 | ||
| 1 | |||
| 2 | |||
The lock mode to use for generating auto-increment values. The permissible values are 0, 1, or 2, for “traditional”, “consecutive”, or “interleaved” lock mode, respectively. Section 14.5.5, “AUTO_INCREMENT Handling in InnoDB”, describes the characteristics of these modes.
This variable has a default of 1 (“consecutive” lock mode).
          
          innodb_buffer_pool_dump_at_shutdown
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_dump_at_shutdown=# | ||
| System Variable | Name | innodb_buffer_pool_dump_at_shutdown | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Specifies whether to record the pages cached in the InnoDB
          buffer pool when the
          MySQL server is shut down, to shorten the
          warmup process at the next
          restart. Typically used in combination with
          innodb_buffer_pool_load_at_startup.
        
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
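For example, the two options are typically enabled together in the option file so that the buffer pool contents survive a restart:

[mysqld]
innodb_buffer_pool_dump_at_shutdown=ON
innodb_buffer_pool_load_at_startup=ON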
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_dump_now=# | ||
| System Variable | Name | innodb_buffer_pool_dump_now | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Immediately records the pages cached in the InnoDB
          buffer pool. Typically
          used in combination with
          innodb_buffer_pool_load_now.
        
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
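For example, a dump can be triggered on demand, and later restored, without restarting the server:

mysql> SET GLOBAL innodb_buffer_pool_dump_now=ON;
mysql> SET GLOBAL innodb_buffer_pool_load_now=ON;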
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_filename=file | ||
| System Variable | Name | innodb_buffer_pool_filename | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | file name | |
| Default | ib_buffer_pool | ||
          Specifies the name of the file that holds the list of
          tablespace IDs and page IDs produced by
          innodb_buffer_pool_dump_at_shutdown
          or
          innodb_buffer_pool_dump_now.
          Tablespace IDs and page IDs are saved in the following format:
          space, page_id. By default, the file is
          located in the InnoDB data directory.
        
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
| Command-Line Format | --innodb_buffer_pool_instances=# | ||
| System Variable | Name | innodb_buffer_pool_instances | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (<= 5.6.5) | Type | integer | |
| Default | 1 | ||
| Min Value | 1 | ||
| Max Value | 64 | ||
| Permitted Values (Windows, 32-bit platforms, >= 5.6.6) | Type | integer | |
| Default | (autosized) | ||
| Min Value | 1 | ||
| Max Value | 64 | ||
| Permitted Values (Other, >= 5.6.6) | Type | integer | |
| Default | 8 (or 1 if innodb_buffer_pool_size < 1GB) | ||
| Min Value | 1 | ||
| Max Value | 64 | ||
          The number of regions that the InnoDB
          buffer pool is divided
          into. For systems with buffer pools in the multi-gigabyte
          range, dividing the buffer pool into separate instances can
          improve concurrency, by reducing contention as different
          threads read and write to cached pages. Each page that is
          stored in or read from the buffer pool is assigned to one of
          the buffer pool instances randomly, using a hashing function.
          Each buffer pool manages its own free lists,
          flush lists,
          LRUs, and all other data
          structures connected to a buffer pool, and is protected by its
          own buffer pool mutex.
        
          This option takes effect only when you set the
          innodb_buffer_pool_size to a
          size of 1GB or more. The total size you specify is divided
          among all the buffer pools. For best efficiency, specify a
          combination of
          innodb_buffer_pool_instances
          and innodb_buffer_pool_size
          so that each buffer pool instance is at least 1GB.
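          For example, the following option file settings (sizes
          arbitrary) follow that guideline by giving each of eight
          buffer pool instances 1GB:

[mysqld]
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8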
        
          Before MySQL 5.6.6, the default is 1. The default value in
          MySQL 5.6.6 and higher on 32-bit Windows systems depends on
          the value of
          innodb_buffer_pool_size, as
          described below:
              If
              innodb_buffer_pool_size
              is greater than 1.3GB, the default for
              innodb_buffer_pool_instances
              is
              innodb_buffer_pool_size/128MB,
              with individual memory allocation requests for each chunk.
              1.3GB was chosen as the boundary at which there is
              significant risk for 32-bit Windows to be unable to
              allocate the contiguous address space needed for a single
              buffer pool.
            
Otherwise, the default is 1.
          On all other platforms, the default value in MySQL 5.6.6 and
          higher is 8 when
          innodb_buffer_pool_size is
          greater than or equal to 1GB. Otherwise, the default is 1.
            A bug in MySQL 5.6 causes SHOW
            VARIABLES to report an
            innodb_buffer_pool_instances
            value of 8 when
            innodb_buffer_pool_size is
            less than 1GB and only one buffer pool instance is present
            (Bug #18343670). As an alternative, you can use
            SHOW ENGINE
            INNODB STATUS to check the number of buffer pool
            instances. If there are multiple buffer pool instances,
            SHOW ENGINE
            INNODB STATUS output includes an
            INDIVIDUAL BUFFER POOL INFO section.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_load_abort=# | ||
| System Variable | Name | innodb_buffer_pool_load_abort | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Interrupts the process of restoring InnoDB
          buffer pool contents
          triggered by
          innodb_buffer_pool_load_at_startup
          or
          innodb_buffer_pool_load_now.
        
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
          
          innodb_buffer_pool_load_at_startup
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_load_at_startup=# | ||
| System Variable | Name | innodb_buffer_pool_load_at_startup | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Specifies that, on MySQL server startup, the InnoDB
          buffer pool is
          automatically warmed up by
          loading the same pages it held at an earlier time. Typically
          used in combination with
          innodb_buffer_pool_dump_at_shutdown.
        
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_buffer_pool_load_now=# | ||
| System Variable | Name | innodb_buffer_pool_load_now | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
Immediately warms up the InnoDB buffer pool by loading a set of data pages, without waiting for a server restart. Can be useful to bring cache memory back to a known state during benchmarking, or to ready the MySQL server to resume its normal workload after running queries for reports or maintenance.
For related information, see Section 14.3.3.5, “Preloading the InnoDB Buffer Pool for Faster Restart”.
| Command-Line Format | --innodb_buffer_pool_size=# | ||
| System Variable | Name | innodb_buffer_pool_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (32-bit platforms) | Type | integer | |
| Default | 134217728 | ||
| Min Value | 5242880 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (64-bit platforms) | Type | integer | |
| Default | 134217728 | ||
| Min Value | 5242880 | ||
| Max Value | 2**64-1 | ||
          The size in bytes of the
          buffer pool, the
          memory area where InnoDB caches table and
          index data. The default value is 128MB. The maximum value
          depends on the CPU architecture; the maximum is 4294967295
          (2**32-1) on 32-bit systems and
          18446744073709551615 (2**64-1) on
          64-bit systems. On 32-bit systems, the CPU architecture and
          operating system may impose a lower practical maximum size
          than the stated maximum. When the size of the buffer pool is
          greater than 1GB, setting
          innodb_buffer_pool_instances
          to a value greater than 1 can improve the scalability on a
          busy server.
        
The larger you set this value, the less disk I/O is needed to access the same data in tables more than once. On a dedicated database server, you might set this to up to 80% of the machine physical memory size. Be prepared to scale back this value if these other issues occur:
Competition for physical memory might cause paging in the operating system.
              InnoDB reserves additional memory for
              buffers and control structures, so that the total
              allocated space is approximately 10% greater than the
              specified size.
            
The address space must be contiguous, which can be an issue on Windows systems with DLLs that load at specific addresses.
The time to initialize the buffer pool is roughly proportional to its size. On large installations, this initialization time might be significant. For example, on a modern Linux x86_64 server, initialization of a 10GB buffer pool takes approximately 6 seconds. See Section 8.10.1, “The InnoDB Buffer Pool”.
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_change_buffer_max_size=# | ||
| System Variable | Name | innodb_change_buffer_max_size | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 25 | ||
| Min Value | 0 | ||
| Max Value | 50 | ||
Maximum size for the InnoDB change buffer, as a percentage of the total size of the buffer pool. You might increase this value for a MySQL server with heavy insert, update, and delete activity, or decrease it for a MySQL server with unchanging data used for reporting. For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
| Command-Line Format | --innodb_change_buffering=# | ||
| System Variable | Name | innodb_change_buffering | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | enumeration | |
| Default | all | ||
| Valid Values | inserts | ||
| deletes | |||
| purges | |||
| changes | |||
| all | |||
| none | |||
          Whether InnoDB performs
          change buffering,
          an optimization that delays write operations to secondary
          indexes so that the I/O operations can be performed
          sequentially. The permitted values are
          inserts (buffer insert operations),
          deletes (buffer delete operations; strictly
          speaking, the writes that mark index records for later
          deletion during a purge operation), changes
          (buffer insert and delete-marking operations),
          purges (buffer
          purge operations, the writes
          when deleted index entries are finally garbage-collected),
          all (buffer insert, delete-marking, and
          purge operations) and none (do not buffer
          any operations). The default is all. For
          details, see
          Section 14.3.5, “Configuring InnoDB Change Buffering”. For
          general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
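          For example, change buffering can be restricted to insert
          operations only, or disabled entirely, at runtime:

mysql> SET GLOBAL innodb_change_buffering='inserts';
mysql> SET GLOBAL innodb_change_buffering='none';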
        
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_checksum_algorithm=# | ||
| System Variable | Name | innodb_checksum_algorithm | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | enumeration | |
| Default | innodb | ||
| Valid Values | innodb | ||
| crc32 | |||
| none | |||
| strict_innodb | |||
| strict_crc32 | |||
| strict_none | |||
| Permitted Values (5.6.6) | Type | enumeration | |
| Default | crc32 | ||
| Valid Values | innodb | ||
| crc32 | |||
| none | |||
| strict_innodb | |||
| strict_crc32 | |||
| strict_none | |||
| Permitted Values (>= 5.6.7) | Type | enumeration | |
| Default | innodb | ||
| Valid Values | innodb | ||
| crc32 | |||
| none | |||
| strict_innodb | |||
| strict_crc32 | |||
| strict_none | |||
          Specifies how to generate and verify the
          checksum stored in each
          disk block of each InnoDB
          tablespace.
        
          innodb_checksum_algorithm replaces the
          innodb_checksums option as of
          MySQL 5.6.3. The following values are provided for
          compatibility:
              innodb_checksum_algorithm=innodb is the
              same as innodb_checksums=ON
            
              innodb_checksum_algorithm=none is the
              same as innodb_checksums=OFF
          To avoid conflicts, remove references to
          innodb_checksums from your
          configuration file and MySQL startup scripts.
        
          The value innodb is backward-compatible
          with all versions of MySQL. The value crc32
          uses an algorithm that is faster to compute the checksum for
          every modified block, and to check the checksums for each disk
          read. It scans blocks 32 bits at a time, which is faster than
          the innodb checksum algorithm, which scans
          blocks 8 bits at a time. The value none
          writes a constant value in the checksum field rather than
          computing a value based on the block data. The blocks in a
          tablespace can use a mix of old, new, and no checksum values,
          being updated gradually as the data is modified; once any
          blocks in a tablespace are modified to use the
          crc32 algorithm, the associated tables
          cannot be read by earlier versions of MySQL.
        
          The strict_* forms work the same as
          innodb, crc32, and
          none, except that InnoDB
          halts if it encounters a mix of checksum values in the same
          tablespace. You can only use these options in a completely new
          instance, to set up all tablespaces for the first time. The
          strict_* settings are somewhat faster,
          because they do not need to compute both new and old checksum
          values to accept both during disk reads.
            As of MySQL 5.6.25, InnoDB no longer
            halts if it encounters a valid
            non-matching checksum. Instead, a message is printed to the
            error log and the page is accepted as valid if it matches an
            innodb, crc32 or
            none checksum.
          The following table illustrates the difference between the
          none, innodb, and
          crc32 option values, and their
          strict_ counterparts.
          none, innodb, and
          crc32 write the specified type checksum
          value into each data block, but for compatibility accept any
          of the other checksum values when verifying a block during a
          read operation. The strict_ form of each
          parameter only recognizes one kind of checksum, which makes
          verification faster but requires that all
          InnoDB data files in an instance be created
          under the identical
          innodb_checksum_algorithm value.
Table 14.8 Allowed Settings for innodb_checksum_algorithm
| Value | Generated checksum (when writing) | Allowed checksums (when reading) | 
|---|---|---|
| none | A constant number. | Any of the checksums generated by none, innodb, or crc32. |
| innodb | A checksum calculated in software, using the original algorithm from InnoDB. | Any of the checksums generated by none, innodb, or crc32. |
| crc32 | A checksum calculated using the crc32 algorithm, possibly done with a hardware assist. | Any of the checksums generated by none, innodb, or crc32. |
| strict_none | A constant number. | Only the checksum generated by none. |
| strict_innodb | A checksum calculated in software, using the original algorithm from InnoDB. | Only the checksum generated by innodb. |
| strict_crc32 | A checksum calculated using the crc32 algorithm, possibly done with a hardware assist. | Only the checksum generated by crc32. |
          The default value for
          innodb_checksum_algorithm was changed from
          innodb to crc32 in MySQL
          5.6.6, but switched back to innodb in 5.6.7
          for improved compatibility of InnoDB data
          files during a downgrade to an earlier MySQL version, and for
          use with MySQL
          Enterprise Backup. The limitations encountered
          included:
              .ibd files containing CRC32 checksums
              could cause problems downgrading to MySQL versions prior
              to 5.6.3. MySQL 5.6.3 and up recognizes either the new or
              old checksum values for the block as correct when reading
              the block from disk, ensuring that data blocks are
              compatible during upgrade and downgrade regardless of the
              algorithm setting. If data written with new checksum
              values is processed by a level of MySQL earlier than
              5.6.3, it could be reported as corrupted.
            
Versions of MySQL Enterprise Backup up to 3.8.0 do not support backing up tablespaces that use CRC32 checksums. MySQL Enterprise Backup adds CRC32 checksum support in 3.8.1, with some limitations. Refer to the MySQL Enterprise Backup 3.8.1 Change History for more information.
| Deprecated | 5.6.3 | ||
| Command-Line Format | --innodb_checksums | ||
| System Variable | Name | innodb_checksums | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          InnoDB can use
          checksum validation on
          all tablespace pages read from the disk to ensure extra fault
          tolerance against hardware faults or corrupted data files.
          This validation is enabled by default. Under specialized
          circumstances (such as when running benchmarks) this extra
          safety feature can be disabled with
          --skip-innodb-checksums. You can specify the
          method of calculating the checksum with
          innodb_checksum_algorithm.
        
          In MySQL 5.6.3 and higher, this option is deprecated, replaced
          by innodb_checksum_algorithm.
          innodb_checksum_algorithm=innodb is the
          same as innodb_checksums=ON (the default).
          innodb_checksum_algorithm=none is the same
          as innodb_checksums=OFF. Remove any
          innodb_checksums options from your
          configuration files and startup scripts, to avoid conflicts
          with innodb_checksum_algorithm:
          innodb_checksums=OFF would automatically
          set innodb_checksum_algorithm=none;
          innodb_checksums=ON would be ignored and
          overridden by any other setting for
          innodb_checksum_algorithm.
        
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_cmp_per_index_enabled=# | ||
| System Variable | Name | innodb_cmp_per_index_enabled | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
| Valid Values | OFF | ||
| ON | |||
          Enables per-index compression-related statistics in the
          INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
          table. Because these statistics can be expensive to gather,
          only enable this option on development, test, or slave
          instances during performance tuning related to
          InnoDB
          compressed tables.
        
| Command-Line Format | --innodb_commit_concurrency=# | ||
| System Variable | Name | innodb_commit_concurrency | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 1000 | ||
The number of threads that can commit at the same time. A value of 0 (the default) permits any number of transactions to commit simultaneously.
          The value of innodb_commit_concurrency
          cannot be changed at runtime from zero to nonzero or vice
          versa. The value can be changed from one nonzero value to
          another.
        
          
          innodb_compression_failure_threshold_pct
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_compression_failure_threshold_pct=# | ||
| System Variable | Name | innodb_compression_failure_threshold_pct | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 5 | ||
| Min Value | 0 | ||
| Max Value | 100 | ||
Sets the cutoff point at which MySQL begins adding padding within compressed pages to avoid expensive compression failures. A value of zero disables the mechanism that monitors compression efficiency and dynamically adjusts the padding amount.
For more information, see Section 14.6.6, “Compression for OLTP Workloads”.
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_compression_level=# | ||
| System Variable | Name | innodb_compression_level | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 6 | ||
| Min Value | 0 | ||
| Max Value | 9 | ||
          Specifies the level of zlib compression to use for
          InnoDB
          compressed tables and
          indexes.
        
For more information, see Section 14.6.6, “Compression for OLTP Workloads”.
          
          innodb_compression_pad_pct_max
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_compression_pad_pct_max=# | ||
| System Variable | Name | innodb_compression_pad_pct_max | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 50 | ||
| Min Value | 0 | ||
| Max Value | 75 | ||
          Specifies the maximum percentage that can be reserved as free
          space within each compressed
          page, allowing room to
          reorganize the data and modification log within the page when
          a compressed table or
          index is updated and the data might be recompressed. Only
          applies when
          innodb_compression_failure_threshold_pct
          is set to a non-zero value, and the rate of
          compression
          failures passes the cutoff point.
        
For more information, see Section 14.6.6, “Compression for OLTP Workloads”.
| Command-Line Format | --innodb_concurrency_tickets=# | ||
| System Variable | Name | innodb_concurrency_tickets | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | integer | |
| Default | 500 | ||
| Min Value | 1 | ||
| Max Value | 4294967295 | ||
| Permitted Values (>= 5.6.6) | Type | integer | |
| Default | 5000 | ||
| Min Value | 1 | ||
| Max Value | 4294967295 | ||
          Determines the number of
          threads that can enter
          InnoDB concurrently. A thread is placed in
          a queue when it tries to enter InnoDB if
          the number of threads has already reached the concurrency
          limit. When a thread is permitted to enter
          InnoDB, it is given a number of “free
          tickets” equal to the value of
          innodb_concurrency_tickets,
          and the thread can enter and leave InnoDB
          freely until it has used up its tickets. After that point, the
          thread again becomes subject to the concurrency check (and
          possible queuing) the next time it tries to enter
          InnoDB. The default value is 5000 as of
          MySQL 5.6.6, 500 before that.
        
          With a small innodb_concurrency_tickets
          value, small transactions that only need to process a few rows
          compete fairly with larger transactions that process many
          rows. The disadvantage of a small
          innodb_concurrency_tickets value is that
          large transactions must loop through the queue many times
          before they can complete, which extends the length of time
          required to complete their task.
        
          With a large innodb_concurrency_tickets
          value, large transactions spend less time waiting for a
          position at the end of the queue (controlled by
          innodb_thread_concurrency)
          and more time retrieving rows. Large transactions also require
          fewer trips through the queue to complete their task. The
          disadvantage of a large
          innodb_concurrency_tickets value is that
          too many large transactions running at the same time can
          starve smaller transactions by making them wait a longer time
          before executing.
        
          With a non-zero
          innodb_thread_concurrency
          value, you may need to adjust the
          innodb_concurrency_tickets value up or down
          to find the optimal balance between larger and smaller
          transactions. The SHOW ENGINE INNODB STATUS
          report shows the number of tickets remaining for an executing
          transaction in its current pass through the queue. This data
          may also be obtained from the
          TRX_CONCURRENCY_TICKETS column of the
          INFORMATION_SCHEMA.INNODB_TRX
          table.
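For example, a query along these lines (illustrative; it assumes at least one transaction is currently executing) shows the tickets remaining for each running transaction:
mysql> SELECT trx_id, trx_concurrency_tickets FROM INFORMATION_SCHEMA.INNODB_TRX;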
        
For more information, see Section 14.3.6, “Configuring Thread Concurrency for InnoDB”.
| Command-Line Format | --innodb_data_file_path=name | ||
| System Variable | Name | innodb_data_file_path | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (<= 5.6.6) | Type | string | |
| Default | ibdata1:10M:autoextend | ||
| Permitted Values (>= 5.6.7) | Type | string | |
| Default | ibdata1:12M:autoextend | ||
          The paths to individual InnoDB
          data files and their
          sizes. The full directory path to each data file is formed by
          concatenating
          innodb_data_home_dir to each
          path specified here. The file sizes are specified in KB, MB, or GB
          (1024MB) by appending K,
          M or G to the size
          value. If specifying data file size in kilobytes (KB), do so
          in multiples of 1024. Otherwise, KB values are rounded off to the
          nearest megabyte (MB) boundary. The sum of the sizes of the
          files must be at least slightly larger than 10MB. If you do
          not specify innodb_data_file_path, the
          default behavior is to create a single auto-extending data
          file, slightly larger than 12MB, named
          ibdata1. The size limit of individual
          files is determined by your operating system. You can set the
          file size to more than 4GB on those operating systems that
          support big files. You can also
          use raw disk partitions as
          data files. For detailed information on configuring
          InnoDB
          tablespace files, see
          Section 14.3, “InnoDB Configuration”.
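As an illustrative sketch (file names and sizes are hypothetical), a fixed-size first file plus an auto-extending second file could be configured in the option file like this:
[mysqld]
innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend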
        
| Command-Line Format | --innodb_data_home_dir=path | ||
| System Variable | Name | innodb_data_home_dir | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | directory name | |
          The common part of the directory path for all
          InnoDB data
          files in the
          system
          tablespace. This setting does not affect the location
          of file-per-table
          tablespaces when
          innodb_file_per_table is
          enabled. The default value is the MySQL
          data directory. If you specify the value
          as an empty string, you can use absolute file paths in
          innodb_data_file_path.
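For example, a hypothetical configuration that places the system tablespace files under a dedicated directory might look like the following; the path shown is only a placeholder:
[mysqld]
innodb_data_home_dir=/var/lib/mysql-innodb
innodb_data_file_path=ibdata1:50M:autoextend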
        
          
          
          innodb_disable_sort_file_cache
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_disable_sort_file_cache=# | ||
| System Variable | Name | innodb_disable_sort_file_cache | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          If enabled, this variable disables the operating system file
          system cache for merge-sort temporary files. The effect is to
          open such files with the equivalent of
          O_DIRECT. This variable was added in MySQL
          5.6.4.
        
| Command-Line Format | --innodb-doublewrite | ||
| System Variable | Name | innodb_doublewrite | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          If this variable is enabled (the default),
          InnoDB stores all data twice, first to the
          doublewrite
          buffer, then to the actual
          data files. This
          variable can be turned off with
          --skip-innodb_doublewrite for benchmarks or
          cases when top performance is needed rather than concern for
          data integrity or possible failures.
        
| Command-Line Format | --innodb_fast_shutdown[=#] | ||
| System Variable | Name | innodb_fast_shutdown | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
| Valid Values | 0 | ||
| 1 | |||
| 2 | |||
          The InnoDB
          shutdown mode. If the
          value is 0, InnoDB does a
          slow shutdown, a
          full purge and a change
          buffer merge before shutting down. If the value is 1 (the
          default), InnoDB skips these operations at
          shutdown, a process known as a
          fast shutdown. If
          the value is 2, InnoDB flushes its logs and
          shuts down cold, as if MySQL had crashed; no committed
          transactions are lost, but the
          crash recovery
          operation makes the next startup take longer.
        
The slow shutdown can take minutes, or even hours in extreme cases where substantial amounts of data are still buffered. Use the slow shutdown technique before upgrading or downgrading between MySQL major releases, so that all data files are fully prepared in case the upgrade process updates the file format.
          Use innodb_fast_shutdown=2 in emergency or
          troubleshooting situations, to get the absolute fastest
          shutdown if data is at risk of corruption.
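For example, to request a slow shutdown before a major upgrade, you might set the variable at runtime and then stop the server (for example, with mysqladmin shutdown):
mysql> SET GLOBAL innodb_fast_shutdown = 0;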
        
| Command-Line Format | --innodb_file_format=# | ||
| System Variable | Name | innodb_file_format | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
| Default | Antelope | ||
| Valid Values | Antelope | ||
| Barracuda | |||
          The file format to use
          for new InnoDB tables. Currently,
          Antelope and Barracuda
          are supported. This applies only for tables that have their
          own tablespace, so for
          it to have an effect,
          innodb_file_per_table must be
          enabled. The Barracuda
          file format is required for certain InnoDB features such as
          table compression.
        
          Be aware that ALTER TABLE
          operations that recreate InnoDB tables
          (ALGORITHM=COPY) will use the current
          innodb_file_format setting (the conditions
          outlined above still apply).
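As a brief sketch (the table and column names are hypothetical), enabling the Barracuda file format together with file-per-table tablespaces allows a compressed table to be created:
SET GLOBAL innodb_file_per_table = 1;
SET GLOBAL innodb_file_format = Barracuda;
CREATE TABLE t_compressed (id INT PRIMARY KEY, payload VARCHAR(255))
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;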
        
| Command-Line Format | --innodb_file_format_check=# | ||
| System Variable | Name | innodb_file_format_check | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          This variable can be set to 1 or 0 at server startup to enable
          or disable whether InnoDB checks the
          file format tag in the
          system
          tablespace (for example, Antelope or
          Barracuda). If the tag is checked and is
          higher than that supported by the current version of
          InnoDB, an error occurs and
          InnoDB does not start. If the tag is not
          higher, InnoDB sets the value of
          innodb_file_format_max to the
          file format tag.
            Despite the default value sometimes being displayed as
            ON or OFF, always use
            the numeric values 1 or 0 to turn this option on or off in
            your configuration file or command line.
| Command-Line Format | --innodb_file_format_max=# | ||
| System Variable | Name | innodb_file_format_max | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
| Default | Antelope | ||
| Valid Values | Antelope | ||
| Barracuda | |||
          At server startup, InnoDB sets the value of
          this variable to the file
          format tag in the
          system
          tablespace (for example, Antelope or
          Barracuda). If the server creates or opens
          a table with a “higher” file format, it sets the
          value of
          innodb_file_format_max to
          that format.
        
| Command-Line Format | --innodb_file_per_table | ||
| System Variable | Name | innodb_file_per_table | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | boolean | |
| Default | OFF | ||
| Permitted Values (>= 5.6.6) | Type | boolean | |
| Default | ON | ||
          When innodb_file_per_table is enabled (the
          default in 5.6.6 and higher), InnoDB stores
          the data and indexes for each newly created table in a
          separate
          .ibd
          file, rather than in the system tablespace. The storage
          for these InnoDB tables is reclaimed when
          the tables are dropped or truncated. This setting enables
          several other InnoDB features, such as
          table compression. See
          Section 14.4.4, “InnoDB File-Per-Table Tablespaces” for details
          about such features as well as advantages and disadvantages of
          using file-per-table tablespaces.
        
          Be aware that enabling
          innodb_file_per_table also
          means that an ALTER TABLE
          operation will move an InnoDB table from the
          system tablespace to an individual .ibd
          file in cases where ALTER TABLE
          recreates the table (ALGORITHM=COPY).
        
          When innodb_file_per_table is disabled,
          InnoDB stores the data for all tables and
          indexes in the ibdata
          files that make up the
          system
          tablespace. This setting reduces the performance
          overhead of filesystem operations for operations such as
          DROP TABLE or
          TRUNCATE TABLE. It is most
          appropriate for a server environment where entire storage
          devices are devoted to MySQL data. Because the system
          tablespace never shrinks, and is shared across all databases
          in an instance, avoid
          loading huge amounts of temporary data on a space-constrained
          system when innodb_file_per_table=OFF. Set
          up a separate instance in such cases, so that you can drop the
          entire instance to reclaim the space.
        
          By default, innodb_file_per_table is
          enabled as of MySQL 5.6.6, disabled before that. Consider
          disabling it if backward compatibility with MySQL 5.5 or 5.1
          is a concern. This will prevent ALTER
          TABLE from moving
          InnoDB tables from the system
          tablespace to individual .ibd files.
        
          innodb_file_per_table is
          dynamic and can be set ON or
          OFF using SET GLOBAL.
          You can also set this parameter in the MySQL
          configuration
          file (my.cnf or
          my.ini) but this requires shutting down
          and restarting the server.
        
          Dynamically changing the value of this parameter requires the
          SUPER privilege and immediately affects the
          operation of all connections.
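For example, an administrator with the SUPER privilege can toggle the setting at runtime and confirm the change:
mysql> SET GLOBAL innodb_file_per_table = ON;
mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';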
        
| Introduced | 5.6.6 | ||
| System Variable | Name | innodb_flush_log_at_timeout | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
| Min Value | 1 | ||
| Max Value | 2700 | ||
          Write and flush the logs every N
          seconds.
          innodb_flush_log_at_timeout
          was introduced in MySQL 5.6.6. It allows the timeout period
          between flushes to be increased in order to reduce flushing
          and avoid impacting performance of binary log group commit.
          Prior to MySQL 5.6.6, flushing frequency was once per second.
          The default setting for
          innodb_flush_log_at_timeout
          is also once per second.
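For example, to flush the logs only every 10 seconds rather than once per second (the value shown is illustrative), you might set:
mysql> SET GLOBAL innodb_flush_log_at_timeout = 10;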
        
          
          
          innodb_flush_log_at_trx_commit
| Command-Line Format | --innodb_flush_log_at_trx_commit[=#] | ||
| System Variable | Name | innodb_flush_log_at_trx_commit | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | enumeration | |
| Default | 1 | ||
| Valid Values | 0 | ||
| 1 | |||
| 2 | |||
Controls the balance between strict ACID compliance for commit operations, and higher performance that is possible when commit-related I/O operations are rearranged and done in batches. You can achieve better performance by changing the default value, but then you can lose up to a second of transactions in a crash.
              The default value of 1 is required for full ACID
              compliance. With this value, the contents of the
              InnoDB
              log buffer are
              written out to the log
              file at each transaction commit and the log file is
              flushed to disk.
            
              With a value of 0, the contents of the
              InnoDB log buffer are written to the
              log file approximately once per second and the log file is
              flushed to disk. No writes from the log buffer to the log
              file are performed at transaction commit. Once-per-second
              flushing is not 100% guaranteed to happen every second,
              due to process scheduling issues. Because the flush to
              disk operation only occurs approximately once per second,
              you can lose up to a second of transactions with any
              mysqld process crash.
            
              With a value of 2, the contents of the
              InnoDB log buffer are written to the
              log file after each transaction commit and the log file is
              flushed to disk approximately once per second.
              Once-per-second flushing is not 100% guaranteed to happen
              every second, due to process scheduling issues. Because
              the flush to disk operation only occurs approximately once
              per second, you can lose up to a second of transactions in
              an operating system crash or a power outage.
            
              As of MySQL 5.6.6, InnoDB log
              flushing frequency is controlled by
              innodb_flush_log_at_timeout,
              which allows you to set log flushing frequency to
              N seconds (where
              N is 1 ...
              2700, with a default value of 1). However, any
              mysqld process crash can erase up to
              N seconds of transactions.
            
              DDL changes and other internal InnoDB
              activities flush the InnoDB log
              independent of the
              innodb_flush_log_at_trx_commit setting.
            
              InnoDB's
              crash recovery
              works regardless of the
              innodb_flush_log_at_trx_commit setting.
              Transactions are either applied entirely or erased
              entirely.
          For durability and consistency in a replication setup that
          uses InnoDB with transactions:
              If binary logging is enabled, set
              sync_binlog=1.
            
              Always set
              innodb_flush_log_at_trx_commit=1.
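A minimal option-file sketch for such a setup, assuming binary logging is already enabled, might look like this:
[mysqld]
innodb_flush_log_at_trx_commit=1
sync_binlog=1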
            Many operating systems and some disk hardware fool the
            flush-to-disk operation. They may tell
            mysqld that the flush has taken place,
            even though it has not. Then the durability of transactions
            is not guaranteed even with the setting 1, and in the worst
            case a power outage can even corrupt
            InnoDB data. Using a battery-backed disk
            cache in the SCSI disk controller or in the disk itself
            speeds up file flushes, and makes the operation safer. You
            can also try using the Unix command
            hdparm to disable the caching of disk
            writes in hardware caches, or use some other command
            specific to the hardware vendor.
| Command-Line Format | --innodb_flush_method=name | ||
| System Variable | Name | innodb_flush_method | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (Windows) | Type | string | |
| Default | async_unbuffered | ||
| Permitted Values (Unix, <= 5.6.6) | Type | string | |
| Default | fsync | ||
| Valid Values | fsync | ||
| O_DSYNC | |||
| O_DIRECT | |||
| Permitted Values (Unix, >= 5.6.7) | Type | string | |
| Default | fsync | ||
| Valid Values | fsync | ||
| O_DSYNC | |||
| O_DIRECT | |||
| O_DIRECT_NO_FSYNC | |||
          Defines the method used to
          flush data to the
          InnoDB data
          files and log
          files, which can affect I/O throughput. This variable
          is only configurable on Unix and Linux systems. On Windows
          systems, the flush method is always
          async_unbuffered and cannot be changed.
        
          The innodb_flush_method options include:
              fsync: InnoDB uses
              the fsync() system call to flush both
              the data and log files. fsync is the
              default setting.
            
              O_DSYNC: InnoDB uses
              O_SYNC to open and flush the log files,
              and fsync() to flush the data files.
              InnoDB does not use
              O_DSYNC directly because there have
              been problems with it on many varieties of Unix.
            
              O_DIRECT: InnoDB
              uses O_DIRECT (or
              directio() on Solaris) to open the data
              files, and uses fsync() to flush both
              the data and log files. This option is available on some
              GNU/Linux versions, FreeBSD, and Solaris.
            
              O_DIRECT_NO_FSYNC:
              InnoDB uses O_DIRECT
              during flushing I/O, but skips the
              fsync() system call afterwards. This
              setting is suitable for some types of file systems but not
              others. For example, it is not suitable for XFS. If you
              are not sure whether the file system you use requires an
              fsync(), for example to preserve all
              file metadata, use O_DIRECT instead.
              This option was introduced in MySQL 5.6.7 (Bug #11754304,
              Bug #45892).
          How each setting affects performance depends on hardware
          configuration and workload. Benchmark your particular
          configuration to decide which setting to use, or whether to
          keep the default setting. Examine the
          Innodb_data_fsyncs status
          variable to see the overall number of
          fsync() calls for each setting. The mix of
          read and write operations in your workload can affect how a
          setting performs. For example, on a system with a hardware
          RAID controller and battery-backed write cache,
          O_DIRECT can help to avoid double buffering
          between the InnoDB buffer pool and the
          operating system's file system cache. On some systems where
          InnoDB data and log files are located on a
          SAN, the default value or O_DSYNC might be
          faster for a read-heavy workload with mostly
          SELECT statements. Always test this
          parameter with hardware and workload that reflect your
          production environment. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
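As an illustrative sketch, you might configure O_DIRECT in the option file:
[mysqld]
innodb_flush_method=O_DIRECT
Then compare the fsync() activity between settings with the status variable mentioned above:
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_data_fsyncs';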
        
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_flush_neighbors | ||
| System Variable | Name | innodb_flush_neighbors | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | enumeration | |
| Default | 1 | ||
| Valid Values | 0 | ||
| 1 | |||
| 2 | |||
Specifies whether flushing a page from the InnoDB buffer pool also flushes other dirty pages in the same extent.
The default value of 1 flushes contiguous dirty pages in the same extent from the buffer pool.
              A setting of 0 turns
              innodb_flush_neighbors off and no other
              dirty pages are flushed from the buffer pool.
            
A setting of 2 flushes dirty pages in the same extent from the buffer pool.
When the table data is stored on a traditional HDD storage device, flushing such neighbor pages in one operation reduces I/O overhead (primarily for disk seek operations) compared to flushing individual pages at different times. For table data stored on SSD, seek time is not a significant factor and you can turn this setting off to spread out the write operations. For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
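For example, on a system where the data files reside on SSD storage, you might disable neighbor flushing at runtime:
mysql> SET GLOBAL innodb_flush_neighbors = 0;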
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_flushing_avg_loops=# | ||
| System Variable | Name | innodb_flushing_avg_loops | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 30 | ||
| Min Value | 1 | ||
| Max Value | 1000 | ||
Number of iterations for which InnoDB keeps the previously calculated snapshot of the flushing state, controlling how quickly adaptive flushing responds to changing workloads. Increasing the value makes the rate of flush operations change smoothly and gradually as the workload changes. Decreasing the value makes adaptive flushing adjust quickly to workload changes, which can cause spikes in flushing activity if the workload increases and decreases suddenly.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_force_load_corrupted | ||
| System Variable | Name | innodb_force_load_corrupted | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
Lets InnoDB load tables at startup that are marked as corrupted. Use only during troubleshooting, to recover data that is otherwise inaccessible. When troubleshooting is complete, turn this setting back off and restart the server.
| Command-Line Format | --innodb_force_recovery=# | ||
| System Variable | Name | innodb_force_recovery | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 6 | ||
          The crash recovery
          mode, typically only changed in serious troubleshooting
          situations. Possible values are from 0 to 6. For the meanings
          of these values and important information about
          innodb_force_recovery, see
          Section 14.18.2, “Forcing InnoDB Recovery”.
            Only set this variable to a value greater than 0 in an
            emergency situation, so that you can start
            InnoDB and dump your tables. As a safety
            measure, InnoDB prevents
            INSERT,
            UPDATE, or
            DELETE operations when
            innodb_force_recovery is greater than 0.
            Also, as of 5.6.15, an
            innodb_force_recovery setting of 4 or
            greater places InnoDB into read-only
            mode.
          
            These restrictions may cause replication administration
            commands to fail with an error, as replication options such
            as
            --relay-log-info-repository=TABLE
            and
            --master-info-repository=TABLE
            store information in tables in InnoDB.
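For example, a minimal option-file sketch for an emergency dump (the value 1 is illustrative; use the lowest value that lets the server start) is:
[mysqld]
innodb_force_recovery = 1
After the server starts, dump your tables (for example, with mysqldump), then remove the setting and restart normally.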
| Introduced | 5.6.4 | ||
| System Variable | Name | innodb_ft_aux_table | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
          Specifies the qualified name of an InnoDB
          table containing a FULLTEXT index. This
          variable is intended for diagnostic purposes and can only be
          set at runtime. For example:
        
mysql> set global innodb_ft_aux_table = 'test/t1';
          Attempting to set this variable at startup will result in a
          “mysqld: option '--innodb-ft-aux-table'
          cannot take an argument” error and startup
          will abort. After you set this variable to a name in the
          format
          db_name/table_name, the INFORMATION_SCHEMA tables
          INNODB_FT_INDEX_TABLE,
          INNODB_FT_INDEX_CACHE,
          INNODB_FT_CONFIG,
          INNODB_FT_DELETED, and
          INNODB_FT_BEING_DELETED will show
          information about the search index for the specified table.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_cache_size=# | ||
| System Variable | Name | innodb_ft_cache_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (>= 5.6.4, <= 5.6.9) | Type | integer | |
| Default | 32000000 | ||
| Permitted Values (>= 5.6.10) | Type | integer | |
| Default | 8000000 | ||
| Min Value | 1600000 | ||
| Max Value | 80000000 | ||
          The memory allocated, in bytes, for the
          InnoDB FULLTEXT search index cache, which
          holds a parsed document in memory while creating an
          InnoDB FULLTEXT index.
          Index inserts and updates are only committed to disk when the
          innodb_ft_cache_size size limit is reached.
          innodb_ft_cache_size defines the cache size
          on a per table basis. To set a global limit for all tables,
          see
          innodb_ft_total_cache_size.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_enable_diag_print=# | ||
| System Variable | Name | innodb_ft_enable_diag_print | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.6) | Type | boolean | |
| Default | ON | ||
| Permitted Values (>= 5.6.7) | Type | boolean | |
| Default | OFF | ||
Whether to enable additional full-text search (FTS) diagnostic output. This option is primarily intended for advanced FTS debugging and will not be of interest to most users. Output is printed to the error log and includes information such as:
FTS index sync progress (when the FTS cache limit is reached). For example:
FTS SYNC for table test, deleted count: 100 size: 10000 bytes
SYNC words: 100
FTS optimize progress. For example:
FTS start optimize test
FTS_OPTIMIZE: optimize "mysql"
FTS_OPTIMIZE: processed "mysql"
FTS index build progress. For example:
Number of doc processed: 1000
For FTS queries, the query parsing tree, word weight, query processing time, and memory usage are printed. For example:
FTS Search Processing time: 1 secs: 100 millisec: row(s) 10000
Full Search Memory: 245666 (bytes), Row: 10000
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_enable_stopword=# | ||
| System Variable | Name | innodb_ft_enable_stopword | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Specifies that a set of
          stopwords is associated
          with an InnoDB FULLTEXT
          index at the time the index is created. If the
          innodb_ft_user_stopword_table
          option is set, the stopwords are taken from that table. Otherwise,
          if the
          innodb_ft_server_stopword_table
          option is set, the stopwords are taken from that table.
          Otherwise, a built-in set of default stopwords is used.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_max_token_size=# | ||
| System Variable | Name | innodb_ft_max_token_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 84 | ||
| Min Value | 10 | ||
| Max Value | 252 | ||
          Maximum length of words that are stored in an InnoDB
          FULLTEXT index. Setting a limit on this
          value reduces the size of the index, thus speeding up queries,
          by omitting long keywords or arbitrary collections of letters
          that are not real words and are not likely to be search terms.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_min_token_size=# | ||
| System Variable | Name | innodb_ft_min_token_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 3 | ||
| Min Value | 0 | ||
| Max Value | 16 | ||
          Minimum length of words that are stored in an InnoDB
          FULLTEXT index. Increasing this value
          reduces the size of the index, thus speeding up queries, by
          omitting common words that are unlikely to be significant in a
          search context, such as the English words “a” and
          “to”. For content using a CJK (Chinese, Japanese,
          Korean) character set, specify a value of 1.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_num_word_optimize=# | ||
| System Variable | Name | innodb_ft_num_word_optimize | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 2000 | ||
          Number of words to process during each
          OPTIMIZE TABLE operation on an
          InnoDB FULLTEXT index.
          Because a bulk insert or update operation to a table
          containing a full-text search index could require substantial
          index maintenance to incorporate all changes, you might do a
          series of OPTIMIZE TABLE
          statements, each picking up where the last left off.
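For example, a maintenance pass after a bulk load might look like the following sketch; the table name is hypothetical, and the related innodb_optimize_fulltext_only option is used so that OPTIMIZE TABLE processes only the full-text index data:
mysql> SET GLOBAL innodb_ft_num_word_optimize = 5000;
mysql> SET GLOBAL innodb_optimize_fulltext_only = ON;
mysql> OPTIMIZE TABLE articles;
mysql> OPTIMIZE TABLE articles;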
        
| Introduced | 5.6.13 | ||
| Command-Line Format | --innodb_ft_result_cache_limit=# | ||
| System Variable | Name | innodb_ft_result_cache_limit | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (Unix, 32-bit platforms, >= 5.6.13, <= 5.6.16) | Type | integer | |
| Default | 2000000000 | ||
| Min Value | 1000000 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (Unix, 64-bit platforms, >= 5.6.13, <= 5.6.16) | Type | integer | |
| Default | 2000000000 | ||
| Min Value | 1000000 | ||
| Max Value | 2**64-1 | ||
| Permitted Values (Windows, >= 5.6.13, <= 5.6.16) | Type | integer | |
| Default | 2000000000 | ||
| Min Value | 1000000 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (>= 5.6.17) | Type | integer | |
| Default | 2000000000 | ||
| Min Value | 1000000 | ||
| Max Value | 2**32-1 | ||
          The InnoDB FULLTEXT search (FTS) query
          result cache limit (defined in bytes) per FTS query or per
          thread. Intermediate and final InnoDB FTS
          query results are handled in memory. Use
          innodb_ft_result_cache_limit to place a
          size limit on the InnoDB FTS query result
          cache to avoid excessive memory consumption in case of very
          large InnoDB FTS query results (millions or
          hundreds of millions of rows, for example). Memory is
          allocated as required when an FTS query is processed. If the
          result cache size limit is reached, an error is returned
          indicating that the query exceeds the maximum allowed memory.
        
          As of MySQL 5.6.17, the maximum value of
          innodb_ft_result_cache_limit for all
          platform types and platform bit sizes is 2**32-1 (Bug #71554).
        
          
          innodb_ft_server_stopword_table
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_server_stopword_table=db_name/table_name | ||
| System Variable | Name | innodb_ft_server_stopword_table | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
| Default | NULL | ||
          This option is used to specify your own
          InnoDB FULLTEXT index
          stopword list for all InnoDB tables. To
          configure your own stopword list for a specific
          InnoDB table, use
          innodb_ft_user_stopword_table.
        
          Set innodb_ft_server_stopword_table to the
          name of the table containing a list of stopwords, in the
          format
          db_name/table_name.
          The stopword table must exist before you configure
          innodb_ft_server_stopword_table.
          innodb_ft_enable_stopword must be enabled
          and the innodb_ft_server_stopword_table option
          must be configured before you create the
          FULLTEXT index.
        
          The stopword table must be an InnoDB table,
          containing a single VARCHAR column named
          VALUE.
        
For more information, see Section 12.9.4, “Full-Text Stopwords”.
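A brief sketch (the database, table, and index names are hypothetical) of creating and activating a server-wide stopword table before building a FULLTEXT index:
CREATE TABLE test.my_stopwords (value VARCHAR(30)) ENGINE = InnoDB;
INSERT INTO test.my_stopwords (value) VALUES ('the'), ('and');
SET GLOBAL innodb_ft_server_stopword_table = 'test/my_stopwords';
CREATE FULLTEXT INDEX ft_idx ON test.articles (body);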
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_sort_pll_degree=# | ||
| System Variable | Name | innodb_ft_sort_pll_degree | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 2 | ||
| Min Value | 1 | ||
| Max Value | 32 | ||
          Number of threads used in parallel to index and tokenize text
          in an InnoDB FULLTEXT
          index, when building a
          search index. See
          innodb_sort_buffer_size for
          additional usage information.
        
| Introduced | 5.6.13 | ||
| Command-Line Format | --innodb_ft_total_cache_size=# | ||
| System Variable | Name | innodb_ft_total_cache_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 640000000 | ||
| Min Value | 32000000 | ||
| Max Value | 1600000000 | ||
          The total memory allocated, in bytes, for the
          InnoDB FULLTEXT search
          index cache for all tables. Creating numerous tables, each
          with a full-text search index, could consume a significant
          portion of available memory.
          innodb_ft_total_cache_size
          defines a global memory limit for all full-text search indexes
          to help avoid excessive memory consumption. If the global
          limit is reached by an index operation, a force sync is
          triggered.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_ft_user_stopword_table=db_name/table_name | ||
| System Variable | Name | innodb_ft_user_stopword_table | |
| Variable Scope | Global, Session | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
| Default | NULL | ||
          This option is used to specify your own
          InnoDB FULLTEXT index
          stopword list on a specific table. To configure your own
          stopword list for all InnoDB tables, use
          innodb_ft_server_stopword_table.
        
          Set innodb_ft_user_stopword_table to the
          name of the table containing a list of stopwords, in the
          format
          db_name/table_name.
          The stopword table must exist before you configure
          innodb_ft_user_stopword_table.
          innodb_ft_enable_stopword must be enabled
          and innodb_ft_user_stopword_table must be
          configured before you create the FULLTEXT
          index.
        
          The stopword table must be an InnoDB table,
          containing a single VARCHAR column named
          VALUE.
        
For more information, see Section 12.9.4, “Full-Text Stopwords”.
| Command-Line Format | --innodb_io_capacity=# | ||
| System Variable | Name | innodb_io_capacity | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (32-bit platforms) | Type | integer | |
| Default | 200 | ||
| Min Value | 100 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (64-bit platforms) | Type | integer | |
| Default | 200 | ||
| Min Value | 100 | ||
| Max Value | 2**64-1 | ||
          The innodb_io_capacity
          parameter sets an upper limit on the I/O activity performed by
          the InnoDB background tasks, such as
          flushing pages from the
          buffer pool and
          merging data from the
          change buffer. The
          default value is 200. For busy systems capable of higher I/O
          rates, you can set a higher value at server startup, to help
          the server handle the background maintenance work associated
          with a high rate of row changes.
        
          The innodb_io_capacity limit
          is a total limit for all buffer pool instances. When dirty
          pages are flushed, the
          innodb_io_capacity limit is
          divided equally among buffer pool instances.
        
          For systems with individual 5400 RPM or 7200 RPM drives, you
          might lower the value to the former default of
          100.
        
This parameter should be set to approximately the number of I/O operations that the system can perform per second. Ideally, keep this setting as low as practical, but not so low that these background activities fall behind. If the value is too high, data is removed from the buffer pool and insert buffer too quickly to provide significant benefit from the caching.
The former default of 100 represents an estimated proportion of the I/O operations per second (IOPS) available to older-generation disk drives that could perform about 100 IOPS. The current default of 200 reflects that modern storage devices are capable of much higher I/O rates.
          In general, you can increase the value as a function of the
          number of drives used for InnoDB
          I/O, particularly fast drives capable of high numbers of IOPS.
          For example, systems that use multiple disks or solid-state
          disks for InnoDB are likely to
          benefit from the ability to control this parameter.
        
Although you can specify a very high number, in practice such large values have little if any benefit; for example, a value of one million would be considered very high.
          You can set the innodb_io_capacity value to
          any number 100 or greater to a maximum defined by
          innodb_io_capacity_max. The
          default value is 200. You can set the value
          of this parameter in the MySQL option file
          (my.cnf or my.ini) or
          change it dynamically with the SET GLOBAL
          command, which requires the SUPER
          privilege.
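For example (the value shown is illustrative and should reflect the IOPS your storage hardware can sustain):
mysql> SET GLOBAL innodb_io_capacity = 1000;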
        
See Section 14.3.8, “Configuring the InnoDB Master Thread I/O Rate” for more guidelines about this option. For general information about InnoDB I/O performance, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_io_capacity_max=# | ||
| System Variable | Name | innodb_io_capacity_max | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (32-bit platforms) | Type | integer | |
| Default | see description | ||
| Min Value | 2000 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (Unix, 64-bit platforms) | Type | integer | |
| Default | see description | ||
| Min Value | 2000 | ||
| Max Value | 2**64-1 | ||
| Permitted Values (Windows, 64-bit platforms) | Type | integer | |
| Default | 2000 | ||
| Min Value | 2000 | ||
| Max Value | 2**32-1 | ||
          The limit up to which InnoDB is allowed to
          extend the innodb_io_capacity
          setting in case of emergency. If you specify an
          innodb_io_capacity setting at
          startup and do not specify a value for
          innodb_io_capacity_max, the
          innodb_io_capacity_max value defaults to
          twice the value of
          innodb_io_capacity, with a
          lower limit of 2000. 2000 is also the initial default
          innodb_io_capacity_max configuration value.
        
          The innodb_io_capacity_max
          setting is a total limit for all buffer pool instances.
        
          For a brief period during MySQL 5.6 development, this variable
          was known as innodb_max_io_capacity. In
          MySQL 5.6.7, it was renamed to
          innodb_io_capacity_max, to
          emphasize its relationship to the
          innodb_io_capacity option.
        
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_large_prefix | ||
| System Variable | Name | innodb_large_prefix | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Enable this option to allow index key prefixes longer than 767
          bytes (up to 3072 bytes), for InnoDB tables
          that use the
          DYNAMIC
          and
          COMPRESSED
          row formats. (Creating such tables also requires the option
          values
          innodb_file_format=barracuda
          and
          innodb_file_per_table=true.)
          See Section 14.5.7, “Limits on InnoDB Tables” for the relevant
          maximums associated with index key prefixes under various
          settings.
        
          For tables using the
          REDUNDANT
          and
          COMPACT
          row formats, this option does not affect the allowed key
          prefix length.
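A brief sketch (the table and column names are hypothetical) of the option values required to create an index with a key prefix longer than 767 bytes:
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = 1;
SET GLOBAL innodb_large_prefix = ON;
CREATE TABLE t_prefix (c1 VARCHAR(1000), KEY idx_c1 (c1(1000)))
  ROW_FORMAT=DYNAMIC DEFAULT CHARSET=latin1;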
        
| Command-Line Format | --innodb_lock_wait_timeout=# | ||
| System Variable | Name | innodb_lock_wait_timeout | |
| Variable Scope | Global, Session | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 50 | ||
| Min Value | 1 | ||
| Max Value | 1073741824 | ||
          The length of time in seconds an InnoDB
          transaction waits for
          a row lock before giving
          up. The default value is 50 seconds. A transaction that tries
          to access a row that is locked by another
          InnoDB transaction waits at most this many
          seconds for write access to the row before issuing the
          following error:
        
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
          When a lock wait timeout occurs, the current statement is
          rolled back (not the
          entire transaction). To have the entire transaction roll back,
          start the server with the
          --innodb_rollback_on_timeout
          option. See also Section 14.18.4, “InnoDB Error Handling”.
        
You might decrease this value for highly interactive applications or OLTP systems, to display user feedback quickly or put the update into a queue for processing later. You might increase this value for long-running back-end operations, such as a transform step in a data warehouse that waits for other large insert or update operations to finish.
          innodb_lock_wait_timeout applies to
          InnoDB row locks only. A MySQL
          table lock does not
          happen inside InnoDB and this timeout does
          not apply to waits for table locks.
        
          The lock wait timeout value does not apply to
          deadlocks, because
          InnoDB detects them immediately and rolls
          back one of the deadlocked transactions.
        
          innodb_lock_wait_timeout can
          be set at runtime with the SET GLOBAL or
          SET SESSION statement. Changing the
          GLOBAL setting requires the
          SUPER privilege and affects the operation
          of all clients that subsequently connect. Any client can
          change the SESSION setting for
          innodb_lock_wait_timeout,
          which affects only that client.
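For example, you might raise the global default for batch-oriented clients while an interactive session uses a shorter timeout (the values are illustrative):
mysql> SET GLOBAL innodb_lock_wait_timeout = 120;
mysql> SET SESSION innodb_lock_wait_timeout = 5;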
        
          
          
          innodb_locks_unsafe_for_binlog
| Deprecated | 5.6.3 | ||
| Command-Line Format | --innodb_locks_unsafe_for_binlog | ||
| System Variable | Name | innodb_locks_unsafe_for_binlog | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          This variable affects how InnoDB uses
          gap locking for searches
          and index scans. As of MySQL 5.6.3,
          innodb_locks_unsafe_for_binlog is
          deprecated and will be removed in a future MySQL release.
        
          Normally, InnoDB uses an algorithm called
          next-key locking that combines index-row locking with
          gap locking.
          InnoDB performs row-level locking in such a
          way that when it searches or scans a table index, it sets
          shared or exclusive locks on the index records it encounters.
          Thus, the row-level locks are actually index-record locks. In
          addition, a next-key lock on an index record also affects the
          “gap” before that index record. That is, a
          next-key lock is an index-record lock plus a gap lock on the
          gap preceding the index record. If one session has a shared or
          exclusive lock on record R in an index,
          another session cannot insert a new index record in the gap
          immediately before R in the index order.
          See Section 14.2.2.4, “InnoDB Record, Gap, and Next-Key Locks”.
        
          By default, the value of
          innodb_locks_unsafe_for_binlog is 0
          (disabled), which means that gap locking is enabled:
          InnoDB uses next-key locks for searches and
          index scans. To enable the variable, set it to 1. This causes
          gap locking to be disabled: InnoDB uses
          only index-record locks for searches and index scans.
        
          Enabling innodb_locks_unsafe_for_binlog
          does not disable the use of gap locking for foreign-key
          constraint checking or duplicate-key checking.
        
          The effect of enabling
          innodb_locks_unsafe_for_binlog is similar
          to but not identical to setting the transaction isolation
          level to READ COMMITTED:
              Enabling
              innodb_locks_unsafe_for_binlog
              is a global setting and affects all sessions, whereas the
              isolation level can be set globally for all sessions, or
              individually per session.
            
              innodb_locks_unsafe_for_binlog
              can be set only at server startup, whereas the isolation
              level can be set at startup or changed at runtime.
          READ COMMITTED therefore
          offers finer and more flexible control than
          innodb_locks_unsafe_for_binlog.
          For additional details about the effect of isolation level on
          gap locking, see Section 13.3.6, “SET TRANSACTION Syntax”.
        
          Enabling innodb_locks_unsafe_for_binlog may
          cause phantom problems because other sessions can insert new
          rows into the gaps when gap locking is disabled. Suppose that
          there is an index on the id column of the
          child table and that you want to read and
          lock all rows from the table having an identifier value larger
          than 100, with the intention of updating some column in the
          selected rows later:
        
SELECT * FROM child WHERE id > 100 FOR UPDATE;
          The query scans the index starting from the first record where
          id is greater than 100. If the locks set on
          the index records in that range do not lock out inserts made
          in the gaps, another session can insert a new row into the
          table. Consequently, if you were to execute the same
          SELECT again within the same
          transaction, you would see a new row in the result set
          returned by the query. This also means that if new items are
          added to the database, InnoDB does not
          guarantee serializability. Therefore, if
          innodb_locks_unsafe_for_binlog is enabled,
          InnoDB guarantees at most an isolation
          level of READ COMMITTED.
          (Conflict serializability is still guaranteed.) For additional
          information about phantoms, see
          Section 14.2.2.5, “Avoiding the Phantom Problem Using Next-Key Locking”.
        
          Enabling innodb_locks_unsafe_for_binlog has
          additional effects:
              For UPDATE or
              DELETE statements,
              InnoDB holds locks only for rows that
              it updates or deletes. Record locks for nonmatching rows
              are released after MySQL has evaluated the
              WHERE condition. This greatly reduces
              the probability of deadlocks, but they can still happen.
            
              For UPDATE statements, if a
              row is already locked, InnoDB performs
              a “semi-consistent” read, returning the
              latest committed version to MySQL so that MySQL can
              determine whether the row matches the
              WHERE condition of the
              UPDATE. If the row matches
              (must be updated), MySQL reads the row again and this time
              InnoDB either locks it or waits for a
              lock on it.
Consider the following example, beginning with this table:
CREATE TABLE t (a INT NOT NULL, b INT) ENGINE = InnoDB;
INSERT INTO t VALUES (1,2),(2,3),(3,2),(4,3),(5,2);
COMMIT;
In this case, the table has no indexes, so searches and index scans use the hidden clustered index for record locking (see Section 14.2.6.2, “Clustered and Secondary Indexes”).
          Suppose that one client performs an
          UPDATE using these statements:
        
SET autocommit = 0;
UPDATE t SET b = 5 WHERE b = 3;
          Suppose also that a second client performs an
          UPDATE by executing these
          statements following those of the first client:
        
SET autocommit = 0;
UPDATE t SET b = 4 WHERE b = 2;
          As InnoDB executes each
          UPDATE, it first acquires an
          exclusive lock for each row, and then determines whether to
          modify it. If InnoDB does not modify the
          row and innodb_locks_unsafe_for_binlog is
          enabled, it releases the lock. Otherwise,
          InnoDB retains the lock until the end of
          the transaction. This affects transaction processing as
          follows.
        
          If innodb_locks_unsafe_for_binlog is
          disabled, the first UPDATE
          acquires x-locks and does not release any of them:
        
x-lock(1,2); retain x-lock
x-lock(2,3); update(2,3) to (2,5); retain x-lock
x-lock(3,2); retain x-lock
x-lock(4,3); update(4,3) to (4,5); retain x-lock
x-lock(5,2); retain x-lock
          The second UPDATE blocks as
          soon as it tries to acquire any locks (because the first UPDATE
          has retained locks on all rows), and does not proceed until
          the first UPDATE commits or
          rolls back:
        
x-lock(1,2); block and wait for first UPDATE to commit or roll back
          If innodb_locks_unsafe_for_binlog is
          enabled, the first UPDATE
          acquires x-locks and releases those for rows that it does not
          modify:
        
x-lock(1,2); unlock(1,2)
x-lock(2,3); update(2,3) to (2,5); retain x-lock
x-lock(3,2); unlock(3,2)
x-lock(4,3); update(4,3) to (4,5); retain x-lock
x-lock(5,2); unlock(5,2)
          For the second UPDATE,
          InnoDB does a
          “semi-consistent” read, returning the latest
          committed version of each row to MySQL so that MySQL can
          determine whether the row matches the WHERE
          condition of the UPDATE:
        
x-lock(1,2); update(1,2) to (1,4); retain x-lock
x-lock(2,3); unlock(2,3)
x-lock(3,2); update(3,2) to (3,4); retain x-lock
x-lock(4,3); unlock(4,3)
x-lock(5,2); update(5,2) to (5,4); retain x-lock
| Command-Line Format | --innodb_log_buffer_size=# | ||
| System Variable | Name | innodb_log_buffer_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 8388608 | ||
| Min Value | 262144 | ||
| Max Value | 4294967295 | ||
          The size in bytes of the buffer that InnoDB
          uses to write to the log
          files on disk. The default value is 8MB. A large
          log buffer enables
          large transactions to
          run without a need to write the log to disk before the
          transactions commit. Thus,
          if you have transactions that update, insert, or delete many
          rows, making the log buffer larger saves disk I/O. For general
          I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        
| Introduced | 5.6.11 | ||
| Command-Line Format | --innodb_log_compressed_pages=# | ||
| System Variable | Name | innodb_log_compressed_pages | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Specifies whether images of
          re-compressed
          pages are stored in
          InnoDB redo
          logs.
        
This variable was added in MySQL 5.6.11.
| Command-Line Format | --innodb_log_file_size=# | ||
| System Variable | Name | innodb_log_file_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (<= 5.6.2) | Type | integer | |
| Default | 5242880 | ||
| Min Value | 1048576 | ||
| Max Value | 4GB / innodb_log_files_in_group | ||
| Permitted Values (>= 5.6.3, <= 5.6.7) | Type | integer | |
| Default | 5242880 | ||
| Min Value | 1048576 | ||
| Max Value | 512GB / innodb_log_files_in_group | ||
| Permitted Values (>= 5.6.8) | Type | integer | |
| Default | 50331648 | ||
| Min Value | 1048576 | ||
| Max Value | 512GB / innodb_log_files_in_group | ||
          The size in bytes of each log
          file in a log
          group. The combined size of log files
          (innodb_log_file_size *
          innodb_log_files_in_group)
          cannot exceed a maximum value that is slightly less than
          512GB. A pair of 255 GB log files, for example, would allow
          you to approach the limit but not exceed it. The default value
          is 48MB. Sensible values range from 1MB to
          1/N-th of the size of the
          buffer pool, where
          N is the number of log files in the
          group. The larger the value, the less checkpoint flush
          activity is needed in the buffer pool, saving disk I/O. Larger
          log files also make crash
          recovery slower, although improvements to recovery
          performance in MySQL 5.5 and higher make the log file size
          less of a consideration. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
            Due to Bug #69477, redo log writes for large, externally
            stored BLOB fields could
            overwrite the most recent checkpoint. To address this bug, a
            patch introduced in MySQL 5.6.20 limits the size of redo log
            BLOB writes to 10% of the
            redo log file size. As a result of this limit,
            innodb_log_file_size should
            be set to a value greater than 10 times the largest
            BLOB data size found in the
            rows of your tables plus the length of other variable length
            fields (VARCHAR,
            VARBINARY, and
            TEXT type fields).
          
            In MySQL 5.6.22, the redo log
            BLOB write limit is relaxed
            to 10% of the total redo log size
            (innodb_log_file_size *
            innodb_log_files_in_group).
            (Bug #19498877)
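For example, an option-file sketch for a larger redo log (the sizes are illustrative; the variable is not dynamic, so changing it requires a clean shutdown and restart):
[mysqld]
innodb_log_file_size=512M
innodb_log_files_in_group=2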
| Command-Line Format | --innodb_log_files_in_group=# | ||
| System Variable | Name | innodb_log_files_in_group | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 2 | ||
| Min Value | 2 | ||
| Max Value | 100 | ||
          The number of log files
          in the log group.
          InnoDB writes to the files in a circular
          fashion. The default (and recommended) value is 2. The
          location of these files is specified by
          innodb_log_group_home_dir.
          The combined size of log files
          (innodb_log_file_size *
          innodb_log_files_in_group) can be up to
          512GB.
        
| Command-Line Format | --innodb_log_group_home_dir=path | ||
| System Variable | Name | innodb_log_group_home_dir | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | directory name | |
          The directory path to the InnoDB
          redo log files, whose
          number is specified by
          innodb_log_files_in_group. If
          you do not specify any InnoDB log
          variables, the default is to create two files named
          ib_logfile0 and
          ib_logfile1 in the MySQL data directory.
          Their size is given by the size of the
          innodb_log_file_size system
          variable.
        
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_lru_scan_depth=# | ||
| System Variable | Name | innodb_lru_scan_depth | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (32-bit platforms) | Type | integer | |
| Default | 1024 | ||
| Min Value | 100 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (64-bit platforms) | Type | integer | |
| Default | 1024 | ||
| Min Value | 100 | ||
| Max Value | 2**64-1 | ||
          A parameter that influences the algorithms and heuristics for
          the flush operation for the
          InnoDB
          buffer pool. Primarily
          of interest to performance experts tuning I/O-intensive
          workloads. It specifies, per buffer pool instance, how far
          down the buffer pool LRU list the
          page_cleaner thread scans looking for
          dirty pages to flush.
          This is a background operation performed once a second. If you
          have spare I/O capacity under a typical workload, increase the
          value. If a write-intensive workload saturates your I/O
          capacity, decrease the value, especially if you have a large
          buffer pool. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
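For example, on a server with spare I/O capacity under a typical workload, you might raise the value at runtime (the value 2048 here is only illustrative, not a recommendation):

SET GLOBAL innodb_lru_scan_depth = 2048;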
        
| Command-Line Format | --innodb_max_dirty_pages_pct=# | ||
| System Variable | Name | innodb_max_dirty_pages_pct | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | numeric | |
| Default | 75 | ||
| Min Value | 0 | ||
| Max Value | 99 | ||
          InnoDB tries to
          flush data from the
          buffer pool so that
          the percentage of dirty
          pages does not exceed this value. Specify an integer in
          the range from 0 to 99. The default value is 75.
        
          The
          innodb_max_dirty_pages_pct
          setting establishes a target for flushing activity. It does
          not affect the rate of flushing. For information about
          managing the rate of flushing, see
          Section 14.3.3.2, “Configuring the Rate of InnoDB Buffer Pool Flushing”.
        
For additional information about this variable, see Section 14.3.3.6, “Tuning InnoDB Buffer Pool Flushing”. For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
          
          innodb_max_dirty_pages_pct_lwm
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_max_dirty_pages_pct_lwm=# | ||
| System Variable | Name | innodb_max_dirty_pages_pct_lwm | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | numeric | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 99 | ||
Low water mark representing percentage of dirty pages where preflushing is enabled to control the dirty page ratio. The default of 0 disables the pre-flushing behavior entirely. For additional information about this variable, see Section 14.3.3.6, “Tuning InnoDB Buffer Pool Flushing”.
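For example, to target a lower dirty-page ratio and enable pre-flushing at half that target (the values shown are illustrative, not tuned recommendations):

SET GLOBAL innodb_max_dirty_pages_pct = 60;
SET GLOBAL innodb_max_dirty_pages_pct_lwm = 30;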
| Command-Line Format | --innodb_max_purge_lag=# | ||
| System Variable | Name | innodb_max_purge_lag | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 4294967295 | ||
          This variable controls how to delay
          INSERT,
          UPDATE, and
          DELETE operations when
          purge operations are lagging
          (see Section 14.2.3, “InnoDB Multi-Versioning”). The default
          value is 0 (no delays).
        
          The InnoDB transaction system maintains a
          list of transactions that have index records delete-marked by
          UPDATE or
          DELETE operations. The length
          of this list represents the
          purge_lag value. When
          purge_lag exceeds
          innodb_max_purge_lag, each
          INSERT,
          UPDATE, and
          DELETE operation is delayed.
        
          To prevent excessive delays in extreme situations where
          purge_lag becomes huge, you can put
          a cap on the amount of delay by setting the
          innodb_max_purge_lag_delay
          configuration option. The delay is computed at the beginning
          of a purge batch.
        
          A typical setting for a problematic workload might be 1
          million, assuming that transactions are small, only 100 bytes
          in size, and it is permissible to have 100MB of unpurged
          InnoDB table rows.
        
          The lag value is displayed as the history list length in the
          TRANSACTIONS section of InnoDB Monitor
          output. For example, if the output includes the following
          lines, the lag value is 20:
        
------------
TRANSACTIONS
------------
Trx id counter 0 290328385
Purge done for trx's n:o < 0 290315608 undo n:o < 0 17
History list length 20
For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
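The history list length can also be read from the INFORMATION_SCHEMA.INNODB_METRICS table, assuming the default-enabled trx_rseg_history_len counter. The settings below are an illustrative sketch of capping purge lag and the associated delay, not tuned recommendations:

SELECT NAME, COUNT
  FROM INFORMATION_SCHEMA.INNODB_METRICS
  WHERE NAME = 'trx_rseg_history_len';
SET GLOBAL innodb_max_purge_lag = 1000000;
SET GLOBAL innodb_max_purge_lag_delay = 300000;  -- upper limit on the computed delay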
| Introduced | 5.6.5 | ||
| Command-Line Format | --innodb_max_purge_lag_delay=# | ||
| System Variable | Name | innodb_max_purge_lag_delay | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
          Specifies the maximum delay in milliseconds for the delay
          imposed by the
          innodb_max_purge_lag
          configuration option. Any non-zero value represents an upper
          limit on the delay period computed from the formula based on
          the value of innodb_max_purge_lag. The
          default of zero means that there is no upper limit imposed on
          the delay interval.
        
For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
Has no effect. This variable is deprecated as of MySQL 5.6.11 and will be removed in a future MySQL release.
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_monitor_disable=[counter|module|pattern|all] | ||
| System Variable | Name | innodb_monitor_disable | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
          Turns off one or more
          counters in the
          INFORMATION_SCHEMA.INNODB_METRICS
          table. For usage information, see
          Section 21.29.19, “The INFORMATION_SCHEMA INNODB_METRICS Table”.
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_monitor_enable=[counter|module|pattern|all] | ||
| System Variable | Name | innodb_monitor_enable | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
          Turns on one or more
          counters in the
          INFORMATION_SCHEMA.INNODB_METRICS
          table. For usage information, see
          Section 21.29.19, “The INFORMATION_SCHEMA INNODB_METRICS Table”.
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_monitor_reset=[counter|module|pattern|all] | ||
| System Variable | Name | innodb_monitor_reset | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
          Resets to zero the count value for one or more
          counters in the
          INFORMATION_SCHEMA.INNODB_METRICS
          table. For usage information, see
          Section 21.29.19, “The INFORMATION_SCHEMA INNODB_METRICS Table”.
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_monitor_reset_all=[counter|module|pattern|all] | ||
| System Variable | Name | innodb_monitor_reset_all | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | string | |
          Resets all values (minimum, maximum, and so on) for one or
          more counters in the
          INFORMATION_SCHEMA.INNODB_METRICS
          table. For usage information, see
          Section 21.29.19, “The INFORMATION_SCHEMA INNODB_METRICS Table”.
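For illustration, the following sketch enables the buffer module counters, inspects them, resets their counts, and disables them again (module_buffer is one of the predefined counter modules):

SET GLOBAL innodb_monitor_enable = 'module_buffer';
SELECT NAME, SUBSYSTEM, COUNT
  FROM INFORMATION_SCHEMA.INNODB_METRICS
  WHERE STATUS = 'enabled';
SET GLOBAL innodb_monitor_reset = 'module_buffer';
SET GLOBAL innodb_monitor_disable = 'module_buffer';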
        
| Command-Line Format | --innodb_old_blocks_pct=# | ||
| System Variable | Name | innodb_old_blocks_pct | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 37 | ||
| Min Value | 5 | ||
| Max Value | 95 | ||
          Specifies the approximate percentage of the
          InnoDB
          buffer pool used for
          the old block sublist. The
          range of values is 5 to 95. The default value is 37 (that is,
          3/8 of the pool). Often used in combination with
          innodb_old_blocks_time. See
          Section 14.3.3.3, “Making the Buffer Pool Scan Resistant” for
          more information. See Section 8.10.1, “The InnoDB Buffer Pool” for
          information about buffer pool management, such as the
          LRU algorithm and
          eviction policies.
        
| Command-Line Format | --innodb_old_blocks_time=# | ||
| System Variable | Name | innodb_old_blocks_time | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 2**32-1 | ||
| Permitted Values (>= 5.6.6) | Type | integer | |
| Default | 1000 | ||
| Min Value | 0 | ||
| Max Value | 2**32-1 | ||
Non-zero values protect against the buffer pool being filled up by data that is referenced only for a brief period, such as during a full table scan. Increasing this value offers more protection against full table scans interfering with data cached in the buffer pool.
Specifies how long in milliseconds (ms) a block inserted into the old sublist must stay there after its first access before it can be moved to the new sublist. If the value is 0, a block inserted into the old sublist moves immediately to the new sublist the first time it is accessed, no matter how soon after insertion the access occurs. If the value is greater than 0, blocks remain in the old sublist until an access occurs at least that many ms after the first access. For example, a value of 1000 causes blocks to stay in the old sublist for 1 second after the first access before they become eligible to move to the new sublist.
The default value is 1000 as of MySQL 5.6.6, 0 before that.
          This variable is often used in combination with
          innodb_old_blocks_pct. See
          Section 14.3.3.3, “Making the Buffer Pool Scan Resistant” for
          more information. See Section 8.10.1, “The InnoDB Buffer Pool” for
          information about buffer pool management, such as the
          LRU algorithm and
          eviction policies.
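For example, before running a one-off report that scans large tables, you might temporarily strengthen the scan resistance and restore the defaults afterwards (the values shown are illustrative only):

SET GLOBAL innodb_old_blocks_time = 2000;
SET GLOBAL innodb_old_blocks_pct = 20;
-- run the reporting queries or logical backup here
SET GLOBAL innodb_old_blocks_time = 1000;
SET GLOBAL innodb_old_blocks_pct = 37;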
        
          
          innodb_online_alter_log_max_size
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_online_alter_log_max_size=# | ||
| System Variable | Name | innodb_online_alter_log_max_size | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 134217728 | ||
| Min Value | 65536 | ||
| Max Value | 2**64-1 | ||
          Specifies an upper limit on the size of the temporary log
          files used during online
          DDL operations for InnoDB tables.
          There is one such log file for each index being created or
          table being altered. This log file stores data inserted,
          updated, or deleted in the table during the DDL operation. The
          temporary log file is extended when needed by the value of
          innodb_sort_buffer_size, up
          to the maximum specified by
          innodb_online_alter_log_max_size. If any
          temporary log file exceeds the upper size limit, the
          ALTER TABLE operation fails and
          all uncommitted concurrent DML operations are rolled back.
          Thus, a large value for this option allows more DML to happen
          during an online DDL operation, but also causes a longer
          period at the end of the DDL operation when the table is
          locked to apply the data from the log.
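As a sketch, before a long online DDL operation on a heavily written table you might raise the limit first (the 256MB value, the table name big_table, and the column created_at are hypothetical):

SET GLOBAL innodb_online_alter_log_max_size = 268435456;  -- 256MB
ALTER TABLE big_table ADD INDEX idx_created (created_at),
  ALGORITHM=INPLACE, LOCK=NONE;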
        
| Command-Line Format | --innodb_open_files=# | ||
| System Variable | Name | innodb_open_files | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (<= 5.6.5) | Type | integer | |
| Default | 300 | ||
| Min Value | 10 | ||
| Max Value | 4294967295 | ||
| Permitted Values (>= 5.6.6) | Type | integer | |
| Default | -1 (autosized) | ||
| Min Value | 10 | ||
| Max Value | 4294967295 | ||
          This variable is relevant only if you use multiple
          InnoDB
          tablespaces. It
          specifies the maximum number of
          .ibd
          files that MySQL can keep open at one time. The minimum
          value is 10. As of MySQL 5.6.6, the default value is 300 if
          innodb_file_per_table is not
          enabled, and the higher of 300 and
          table_open_cache otherwise.
          Before 5.6.6, the default value is 300.
        
          The file descriptors used for .ibd files
          are for InnoDB tables only. They are
          independent of those specified by the
          --open-files-limit server
          option, and do not affect the operation of the table cache.
          For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_optimize_fulltext_only=# | ||
| System Variable | Name | innodb_optimize_fulltext_only | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Changes the way the OPTIMIZE
          TABLE statement operates on
          InnoDB tables. Intended to be enabled
          temporarily, during maintenance operations for
          InnoDB tables with
          FULLTEXT indexes.
        
          By default, OPTIMIZE TABLE reorganizes the
          data in the clustered
          index of the table. When this option is enabled,
          OPTIMIZE TABLE skips this reorganization of
          the table data, and instead processes the newly added,
          deleted, and updated token data for a
            FULLTEXT index. See
          Section 14.2.6.3, “InnoDB FULLTEXT Indexes” for more information
          about FULLTEXT indexes for
          InnoDB tables.
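A typical maintenance sequence might look like the following sketch (the table name articles is hypothetical):

SET GLOBAL innodb_optimize_fulltext_only = ON;
OPTIMIZE TABLE articles;  -- processes only the FULLTEXT token data
SET GLOBAL innodb_optimize_fulltext_only = OFF;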
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_page_size=#k | ||
| System Variable | Name | innodb_page_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (>= 5.6.4) | Type | enumeration | |
| Default | 16384 | ||
| Valid Values | 4k | ||
| 8k | |||
| 16k | |||
| 4096 | |||
| 8192 | |||
| 16384 | |||
          Specifies the page size
          for all InnoDB
          tablespaces in a MySQL
          instance. This value is
          set when the instance is created and remains constant
          afterwards. You can specify page size using the values
          16k (the default), 8k,
          or 4k. Alternatively, you can specify page
          size in bytes (4096, 8192, 16384).
        
          The default, with the largest page size, is appropriate for a
          wide range of workloads,
          particularly for queries involving table scans and DML
          operations involving bulk updates. Smaller page sizes might be
          more efficient for OLTP
          workloads involving many small writes, where contention can be
          an issue when a single page contains many rows. Smaller pages
          might also be efficient with
          SSD storage devices, which
          typically use small block sizes. Keeping the
          InnoDB page size close to the storage
          device block size minimizes the amount of unchanged data that
          is rewritten to disk. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_print_all_deadlocks=# | ||
| System Variable | Name | innodb_print_all_deadlocks | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          When this option is enabled, information about all
          deadlocks in
          InnoDB user transactions is recorded in the
          mysqld error
          log. Otherwise, you see information about only the last
          deadlock, using the SHOW ENGINE INNODB
          STATUS command. An occasional
          InnoDB deadlock is not necessarily an
          issue, because InnoDB detects the condition
          immediately, and rolls back one of the transactions
          automatically. You might use this option to troubleshoot why
          deadlocks are happening if an application does not have
          appropriate error-handling logic to detect the rollback and
          retry its operation. A large number of deadlocks might
          indicate the need to restructure transactions that issue
          DML or SELECT ... FOR
          UPDATE statements for multiple tables, so that each
          transaction accesses the tables in the same order, thus
          avoiding the deadlock condition.
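For example, while troubleshooting you might record every deadlock in the error log and review the most recent one interactively (a sketch; disable the option again once the extra logging is no longer needed):

SET GLOBAL innodb_print_all_deadlocks = ON;
SHOW ENGINE INNODB STATUS\G  -- the LATEST DETECTED DEADLOCK section shows the most recent deadlock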
        
| Command-Line Format | --innodb_purge_batch_size=# | ||
| System Variable | Name | innodb_purge_batch_size | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.2) | Type | integer | |
| Default | 20 | ||
| Min Value | 1 | ||
| Max Value | 5000 | ||
| Permitted Values (>= 5.6.3) | Type | integer | |
| Default | 300 | ||
| Min Value | 1 | ||
| Max Value | 5000 | ||
          The granularity of changes, expressed in units of
          redo log records, that
          trigger a purge operation,
          flushing the changed buffer
          pool blocks to disk. This option is intended for tuning
          performance in combination with the setting
           innodb_purge_threads=n,
           and typical users do not need to modify it.
| Command-Line Format | --innodb_purge_threads=# | ||
| System Variable | Name | innodb_purge_threads | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (<= 5.6.1) | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 1 | ||
| Permitted Values (>= 5.6.2, <= 5.6.4) | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 32 | ||
| Permitted Values (>= 5.6.5) | Type | integer | |
| Default | 1 | ||
| Min Value | 1 | ||
| Max Value | 32 | ||
The number of background threads devoted to the InnoDB purge operation. The new default and minimum value of 1 in MySQL 5.6.5 signifies that the purge operation is always performed by background threads, never as part of the master thread. A non-zero value runs the purge operation in one or more background threads, which can reduce internal contention within InnoDB, improving scalability. Increasing the value to greater than 1 creates that many separate purge threads, which can improve efficiency on systems where DML operations are performed on multiple tables. The maximum is 32.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_random_read_ahead=# | ||
| System Variable | Name | innodb_random_read_ahead | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Enables the random
          read-ahead technique
          for optimizing InnoDB I/O. Random
          read-ahead functionality was removed from the InnoDB
          Plugin (version 1.0.4) and was therefore not
          included in MySQL 5.5.0 when InnoDB Plugin
          became the “built-in” version of
          InnoDB. Random read-ahead was reintroduced
          in MySQL 5.1.59 and 5.5.16 and higher along with the
          innodb_random_read_ahead configuration
          option, which is disabled by default.
        
See Section 14.3.3.1, “Configuring InnoDB Buffer Pool Prefetching (Read-Ahead)” for details about the performance considerations for the different types of read-ahead requests. For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
| Command-Line Format | --innodb_read_ahead_threshold=# | ||
| System Variable | Name | innodb_read_ahead_threshold | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 56 | ||
| Min Value | 0 | ||
| Max Value | 64 | ||
          Controls the sensitivity of linear
          read-ahead that
          InnoDB uses to prefetch pages into the
          buffer pool. If
          InnoDB reads at least
          innodb_read_ahead_threshold pages
          sequentially from an extent
          (64 pages), it initiates an asynchronous read for the entire
          following extent. The permissible range of values is 0 to 64.
          As of MySQL 5.6.1, a value of 0 disables read-ahead. Prior to
          5.6.1, a value of 0 would trigger a read-ahead upon reading
          the boundary page of a 64 page extent. For the default of 56,
          InnoDB must read at least 56 pages
          sequentially from an extent to initiate an asynchronous read
          for the following extent.
        
          Knowing how many pages are read through this read-ahead
          mechanism, and how many of them are evicted from the buffer
          pool without ever being accessed, can be useful to help
          fine-tune the
          innodb_read_ahead_threshold
          parameter. As of MySQL 5.5,
          SHOW ENGINE
          INNODB STATUS output displays counter information
          from the
          Innodb_buffer_pool_read_ahead
          and
          Innodb_buffer_pool_read_ahead_evicted
          global status variables. These variables indicate the number
          of pages brought into the
          buffer pool by
          read-ahead requests, and the number of such pages
          evicted from the buffer
          pool without ever being accessed respectively. These counters
          provide global values since the last server restart.
        
          SHOW ENGINE INNODB STATUS also shows the
          rate at which the read-ahead pages are read in and the rate at
          which such pages are evicted without being accessed. The
          per-second averages are based on the statistics collected
          since the last invocation of SHOW ENGINE INNODB
          STATUS and are displayed in the BUFFER POOL
          AND MEMORY section of the output.
        
See Section 14.3.3.1, “Configuring InnoDB Buffer Pool Prefetching (Read-Ahead)” for more information. For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
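For illustration, the following sketch checks the read-ahead counters and then lowers the threshold so that linear read-ahead triggers earlier (the value 32 is illustrative only):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';
SET GLOBAL innodb_read_ahead_threshold = 32;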
| Command-Line Format | --innodb_read_io_threads=# | ||
| System Variable | Name | innodb_read_io_threads | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 4 | ||
| Min Value | 1 | ||
| Max Value | 64 | ||
          The number of I/O threads for read operations in
          InnoDB. The default value is 4. Its
          counterpart for write threads is
          innodb_write_io_threads. See
          Section 14.3.7, “Configuring the Number of Background InnoDB I/O Threads” for
          more information. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
            On Linux systems, running multiple MySQL servers (typically
            more than 12) with default settings for
            innodb_read_io_threads,
            innodb_write_io_threads,
            and the Linux aio-max-nr setting can
            exceed system limits. Ideally, increase the
            aio-max-nr setting; as a workaround, you
            might reduce the settings for one or both of the MySQL
            configuration options.
| Introduced | 5.6.7 | ||
| Command-Line Format | --innodb_read_only=# | ||
| System Variable | Name | innodb_read_only | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
Starts the server in read-only mode. For distributing database applications or data sets on read-only media. Can also be used in data warehouses to share the same data directory between multiple instances. See Section 14.3.2, “Configuring InnoDB for Read-Only Operation” for usage instructions.
| Command-Line Format | --innodb_replication_delay=# | ||
| System Variable | Name | innodb_replication_delay | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 4294967295 | ||
          The replication thread delay (in ms) on a slave server if
          innodb_thread_concurrency is
          reached.
        
| Command-Line Format | --innodb_rollback_on_timeout | ||
| System Variable | Name | innodb_rollback_on_timeout | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          In MySQL 5.6, InnoDB
          rolls back only the last
          statement on a transaction timeout by default. If
          --innodb_rollback_on_timeout is
          specified, a transaction timeout causes
          InnoDB to abort and roll back the entire
          transaction (the same behavior as in MySQL 4.1).
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_rollback_segments=# | ||
| System Variable | Name | innodb_rollback_segments | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 128 | ||
| Min Value | 1 | ||
| Max Value | 128 | ||
          Defines how many of the
          rollback segments
          in the system
          tablespace are used for InnoDB
          transactions. This
          setting, while still valid, is replaced by
          innodb_undo_logs.
        
| Introduced | 5.6.4 | ||
| Command-Line Format | --innodb_sort_buffer_size=# | ||
| System Variable | Name | innodb_sort_buffer_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values (5.6.4) | Type | integer | |
| Default | 1048576 | ||
| Min Value | 524288 | ||
| Max Value | 67108864 | ||
| Permitted Values (>= 5.6.5) | Type | integer | |
| Default | 1048576 | ||
| Min Value | 65536 | ||
| Max Value | 67108864 | ||
          Specifies the size of sort buffers used for sorting data
          during creation of an InnoDB index. The
          size specified defines the amount of data filled in memory for
          an internal sort and written out to disk, which can be
          referred to as a “run”. During the merge phase,
          pairs of buffers of the specified size are “read
          in” and merged. The larger the setting, the fewer
          “runs” and merges there are, which is important
          to understand from a tuning perspective.
        
This sort area is only used for merge sorts during index creation, not during later index maintenance operations. Buffers are deallocated when index creation completes.
The value of this option also controls the amount by which the temporary log file is extended, to record concurrent DML during online DDL operations.
Before this setting was made configurable, the size was hardcoded to 1048576 bytes (1MB), and that value remains the default.
          During an ALTER TABLE or
          CREATE TABLE statement that
          creates an index, 3 buffers are allocated, each with a size
          defined by this option. Additionally, auxiliary pointers are
          allocated to rows in the sort buffer so that the sort can run
          on pointers (as opposed to moving rows during the sort
          operation).
        
For a typical sort operation, a formula such as this can be used to estimate memory consumption:
(6 /*FTS_NUM_AUX_INDEX*/ * (3 * @@global.innodb_sort_buffer_size)) + (2 * (@@global.innodb_sort_buffer_size / dict_index_get_min_size(index)) * 8 /*64-bit sizeof *buf->tuples*/)
          @@global.innodb_sort_buffer_size / dict_index_get_min_size(index)
          indicates the maximum number of tuples held.
          2 * (@@global.innodb_sort_buffer_size / dict_index_get_min_size(index))
          * 8 /*64-bit sizeof *buf->tuples*/ indicates the
          auxiliary pointers allocated.
For 32-bit, multiply by 4 instead of 8.
          For parallel sorts on a full-text index, multiply by the
          innodb_ft_sort_pll_degree
          setting:
        
(6 /*FTS_NUM_AUX_INDEX*/ * @@global.innodb_ft_sort_pll_degree)
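As a rough worked example of the estimate above (a sketch only: it assumes a 64-bit server and substitutes a hypothetical minimum index record size of 20 bytes for the internal dict_index_get_min_size(index) value):

SELECT (6 * (3 * @@global.innodb_sort_buffer_size))
     + (2 * (@@global.innodb_sort_buffer_size / 20) * 8)
       AS estimated_sort_memory_bytes;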
| Command-Line Format | --innodb_spin_wait_delay=# | ||
| System Variable | Name | innodb_spin_wait_delay | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 6 | ||
| Min Value | 0 | ||
| Max Value | 4294967295 | ||
The maximum delay between polls for a spin lock. The low-level implementation of this mechanism varies depending on the combination of hardware and operating system, so the delay does not correspond to a fixed time interval. The default value is 6. See Section 14.3.9, “Configuring Spin Lock Polling” for more information.
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_stats_auto_recalc=# | ||
| System Variable | Name | innodb_stats_auto_recalc | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Causes InnoDB to automatically recalculate
          persistent
          statistics after the data in a table is changed
          substantially. The threshold value is currently 10% of the
          rows in the table. This setting applies to tables created when
          the innodb_stats_persistent
          option is enabled, or where the clause
          STATS_PERSISTENT=1 is enabled by a
          CREATE TABLE or
          ALTER TABLE statement. The
          amount of data sampled to produce the statistics is controlled
          by the
          innodb_stats_persistent_sample_pages
          configuration option.
        
          For additional information about
          innodb_stats_auto_recalc, see
          Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
        
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_stats_method=name | ||
| System Variable | Name | innodb_stats_method | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | enumeration | |
| Default | nulls_equal | ||
| Valid Values | nulls_equal | ||
| nulls_unequal | |||
| nulls_ignored | |||
          How the server treats NULL values when
          collecting statistics
          about the distribution of index values for
          InnoDB tables. This variable has three
          possible values, nulls_equal,
          nulls_unequal, and
          nulls_ignored. For
          nulls_equal, all NULL
          index values are considered equal and form a single value
          group that has a size equal to the number of
          NULL values. For
          nulls_unequal, NULL
          values are considered unequal, and each
          NULL forms a distinct value group of size
          1. For nulls_ignored,
          NULL values are ignored.
        
The method that is used for generating table statistics influences how the optimizer chooses indexes for query execution, as described in Section 8.3.7, “InnoDB and MyISAM Index Statistics Collection”.
| Command-Line Format | --innodb_stats_on_metadata | ||
| System Variable | Name | innodb_stats_on_metadata | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (<= 5.6.5) | Type | boolean | |
| Default | ON | ||
| Permitted Values (>= 5.6.6) | Type | boolean | |
| Default | OFF | ||
          When this variable is enabled, InnoDB
          updates statistics when
          metadata statements such as SHOW TABLE
          STATUS or SHOW INDEX
          are run, or when accessing the
          INFORMATION_SCHEMA tables
          TABLES or
          STATISTICS. (These updates are
          similar to what happens for ANALYZE
          TABLE.) When disabled, InnoDB
          does not update statistics during these operations. Leaving
          this setting disabled can improve access speed for schemas
          that have a large number of tables or indexes. It can also
          improve the stability of
          execution
          plans for queries that involve
          InnoDB tables.
        
          To change the setting, issue the statement SET GLOBAL
          innodb_stats_on_metadata=mode,
          where mode is either ON or OFF (or
          1 or 0). Changing this
          setting requires the SUPER privilege and
          immediately affects the operation of all connections.
        
This variable is disabled by default as of MySQL 5.6.6, enabled before that.
| Introduced | 5.6.6 | ||
| Command-Line Format | --innodb_stats_persistent=setting | ||
| System Variable | Name | innodb_stats_persistent | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
| Valid Values | OFF | ||
| ON | |||
| 0 | |||
| 1 | |||
          Specifies whether the InnoDB index
          statistics are persisted to disk. Otherwise, statistics may be
          recalculated frequently which can lead to variations in
          query execution
          plans. This setting is stored with each table when the
          table is created. You can set
          innodb_stats_persistent at the global level
          before creating a table, or use the
          STATS_PERSISTENT clause of the
          CREATE TABLE and
          ALTER TABLE statements to
          override the system-wide setting and configure persistent
          statistics for individual tables.
        
For more information about this option, see Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
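For illustration, the following sketch creates a table that uses persistent statistics regardless of the global setting and then refreshes its statistics (the table name t_stats and its columns are hypothetical):

CREATE TABLE t_stats (id INT PRIMARY KEY, val INT)
  ENGINE=InnoDB STATS_PERSISTENT=1, STATS_AUTO_RECALC=1;
ANALYZE TABLE t_stats;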
          
          innodb_stats_persistent_sample_pages
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_stats_persistent_sample_pages=# | ||
| System Variable | Name | innodb_stats_persistent_sample_pages | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 20 | ||
          The number of index pages to
          sample when estimating
          cardinality and other
          statistics for an
          indexed column, such as those calculated by
          ANALYZE TABLE. Increasing the
          value improves the accuracy of index statistics, which can
          improve the query
          execution plan, at the expense of increased I/O during
          the execution of ANALYZE TABLE
          for an InnoDB table. For additional
          information, see Section 14.3.11.1, “Configuring Persistent Optimizer Statistics Parameters”.
            Setting a high value for
            innodb_stats_persistent_sample_pages
            could result in lengthy ANALYZE
            TABLE execution time. To estimate the number of
            database pages that will be accessed, see
            Section 14.3.11.3, “Estimating ANALYZE TABLE Complexity for InnoDB Tables”.
          This option only applies when the
          innodb_stats_persistent
          setting is turned on for a table; when that option is turned
          off for a table, the
          innodb_stats_transient_sample_pages
          setting applies instead.
        
| Deprecated | 5.6.3 | ||
| Command-Line Format | --innodb_stats_sample_pages=# | ||
| System Variable | Name | innodb_stats_sample_pages | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 8 | ||
| Min Value | 1 | ||
| Max Value | 2**64-1 | ||
          Deprecated, use
          innodb_stats_transient_sample_pages
          instead.
        
          
          innodb_stats_transient_sample_pages
| Introduced | 5.6.2 | ||
| Command-Line Format | --innodb_stats_transient_sample_pages=# | ||
| System Variable | Name | innodb_stats_transient_sample_pages | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 8 | ||
          The number of index pages to
          sample when estimating
          cardinality and other
          statistics for an
          indexed column, such as those calculated by
          ANALYZE TABLE. The default
          value is 8. Increasing the value improves the accuracy of
          index statistics, which can improve the
          query execution
          plan, at the expense of increased I/O when opening an
          InnoDB table or recalculating statistics.
          For additional information, see
          Section 14.3.11.2, “Configuring Non-Persistent Optimizer Statistics Parameters”.
            Setting a high value for
            innodb_stats_transient_sample_pages could
            result in lengthy ANALYZE
            TABLE execution time. To estimate the number of
            database pages that will be accessed, see
            Section 14.3.11.3, “Estimating ANALYZE TABLE Complexity for InnoDB Tables”.
          This option only applies when the
          innodb_stats_persistent
          setting is turned off for a table; when this option is turned
          on for a table, the
          innodb_stats_persistent_sample_pages
          setting applies instead. Takes the place of the
          innodb_stats_sample_pages option.
        
| Introduced | 5.6.16 | ||
| Command-Line Format | --innodb_status_output | ||
| System Variable | Name | innodb_status_output | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Enables or disables periodic output for the standard
          InnoDB Monitor. Also used in combination
          with
          innodb_status_output_locks to
          enable or disable periodic output for the
          InnoDB Lock Monitor. See
          Section 14.14, “InnoDB Monitors” for additional information.
        
| Introduced | 5.6.16 | ||
| Command-Line Format | --innodb_status_output_locks | ||
| System Variable | Name | innodb_status_output_locks | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          Enables or disables the InnoDB Lock
          Monitor. When enabled, the InnoDB Lock
          Monitor prints additional information about locks in
          SHOW ENGINE INNODB STATUS output and in
          periodic output printed to the MySQL error log. Periodic
          output for the InnoDB Lock Monitor is
          printed as part of the standard InnoDB
          Monitor output. The standard InnoDB Monitor
          must therefore be enabled for the InnoDB
          Lock Monitor to print data to the MySQL error log
          periodically. See Section 14.14, “InnoDB Monitors” for more
          information.
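For example, to obtain periodic Lock Monitor output in the error log and inspect the current output interactively (a sketch; both options are needed for the periodic lock output):

SET GLOBAL innodb_status_output_locks = ON;
SET GLOBAL innodb_status_output = ON;  -- standard Monitor must also be enabled for periodic output
SHOW ENGINE INNODB STATUS\G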
        
| Command-Line Format | --innodb_strict_mode=# | ||
| System Variable | Name | innodb_strict_mode | |
| Variable Scope | Global, Session | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | OFF | ||
          When innodb_strict_mode is
          ON, InnoDB returns
          errors rather than warnings for certain conditions. The
          default value is OFF.
        
          Strict mode helps
          guard against ignored typos and syntax errors in SQL, or other
          unintended consequences of various combinations of operational
          modes and SQL statements. When
          innodb_strict_mode is
          ON, InnoDB raises error
          conditions in certain cases, rather than issuing a warning and
          processing the specified statement (perhaps with unintended
          behavior). This is analogous to
          sql_mode in
          MySQL, which controls what SQL syntax MySQL accepts, and
          determines whether it silently ignores errors, or validates
          input syntax and data values.
        
          The innodb_strict_mode setting affects the
          handling of syntax errors for CREATE
          TABLE, ALTER TABLE
          and CREATE INDEX statements.
          innodb_strict_mode also enables a record
          size check, so that an INSERT or
          UPDATE never fails due to the record being
          too large for the selected page size.
        
          Oracle recommends enabling
          innodb_strict_mode when using
          ROW_FORMAT and
          KEY_BLOCK_SIZE clauses on
          CREATE TABLE,
          ALTER TABLE, and
          CREATE INDEX statements. When
          innodb_strict_mode is
          OFF, InnoDB ignores
          conflicting clauses and creates the table or index, with only
          a warning in the message log. The resulting table might have
          different behavior than you intended, such as having no
          compression when you tried to create a compressed table. When
          innodb_strict_mode is
          ON, such problems generate an immediate
          error and the table or index is not created, avoiding a
          troubleshooting session later.
        
          You can turn innodb_strict_mode
          ON or OFF on the command
          line when you start mysqld, or in the
          configuration
          file my.cnf or
          my.ini. You can also enable or disable
          innodb_strict_mode at runtime with the
          statement SET [GLOBAL|SESSION]
          innodb_strict_mode=mode,
          where mode is either ON or OFF.
          Changing the GLOBAL setting requires the
          SUPER privilege and affects the operation
          of all clients that subsequently connect. Any client can
          change the SESSION setting for
          innodb_strict_mode, and the setting affects
          only that client.
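For illustration, the following sketch shows the recommended usage when creating a compressed table (it assumes innodb_file_per_table is enabled and the Barracuda file format is in use; the table name t_comp is hypothetical). With strict mode ON, a conflicting or unsupported combination raises an error instead of silently creating a table with different characteristics:

SET SESSION innodb_strict_mode = ON;
CREATE TABLE t_comp (id INT PRIMARY KEY, doc VARCHAR(200))
  ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;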
        
| Command-Line Format | --innodb_support_xa | ||
| System Variable | Name | innodb_support_xa | |
| Variable Scope | Global, Session | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | TRUE | ||
          Enables InnoDB support for two-phase commit
          in XA transactions, causing an
          extra disk flush for transaction preparation. This setting is
          the default. The XA mechanism is used internally and is
          essential for any server that has its binary log turned on and
          is accepting changes to its data from more than one thread. If
          you turn it off, transactions can be written to the binary log
          in a different order from the one in which the live database
          is committing them. This can produce different data when the
          binary log is replayed in disaster recovery or on a
          replication slave. Do not turn it off on a replication master
          server unless you have an unusual setup where only one thread
          is able to change data.
        
          For a server that is accepting data changes from only one
          thread, it is safe and recommended to turn off this option to
          improve performance for InnoDB tables. For
          example, you can turn it off on replication slaves where only
          the replication SQL thread is changing data.
        
You can also turn off this option if you do not need it for safe binary logging or replication, and you also do not use an external XA transaction manager.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_sync_array_size=# | ||
| System Variable | Name | innodb_sync_array_size | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 1 | ||
| Min Value | 1 | ||
| Max Value | 1024 | ||
Splits an internal data structure used to coordinate threads, for higher concurrency in workloads with large numbers of waiting threads. This setting must be configured when the MySQL instance is starting up, and cannot be changed afterward. Increasing this option value is recommended for workloads that frequently produce a large number of waiting threads, typically greater than 768.
| Command-Line Format | --innodb_sync_spin_loops=# | ||
| System Variable | Name | innodb_sync_spin_loops | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 30 | ||
| Min Value | 0 | ||
| Max Value | 4294967295 | ||
          The number of times a thread waits for an
          InnoDB mutex to be freed before the thread
          is suspended. The default value is 30.
        
| Command-Line Format | --innodb_table_locks | ||
| System Variable | Name | innodb_table_locks | |
| Variable Scope | Global, Session | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | boolean | |
| Default | TRUE | ||
          If autocommit = 0,
          InnoDB honors LOCK
          TABLES; MySQL does not return from LOCK
          TABLES ... WRITE until all other threads have
          released all their locks to the table. The default value of
          innodb_table_locks is 1,
          which means that LOCK TABLES
          causes InnoDB to lock a table internally if
          autocommit = 0.
        
          In MySQL 5.6,
          innodb_table_locks = 0 has no
          effect for tables locked explicitly with
          LOCK TABLES ...
          WRITE. It does have an effect for tables locked for
          read or write by
          LOCK TABLES ...
          WRITE implicitly (for example, through triggers) or
          by LOCK TABLES
          ... READ.
        
| Command-Line Format | --innodb_thread_concurrency=# | ||
| System Variable | Name | innodb_thread_concurrency | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 1000 | ||
          InnoDB tries to keep the number of
          operating system threads concurrently inside
          InnoDB less than or equal to the limit
          given by this variable (InnoDB uses
          operating system threads to process user transactions). Once
          the number of threads reaches this limit, additional threads
          are placed into a wait state within a “First In, First
          Out” (FIFO) queue for execution. Threads waiting for
          locks are not counted in the number of concurrently executing
          threads.
        
          The range of this variable is 0 to 1000. A value of 0 (the
          default) is interpreted as infinite concurrency (no
          concurrency checking). Disabling thread concurrency checking
          enables InnoDB to create as many threads as
          it needs. A value of 0 also disables the queries
          inside InnoDB and queries in queue
          counters in the ROW OPERATIONS
          section of SHOW ENGINE INNODB STATUS
          output.
        
          Consider setting this variable if your MySQL instance shares
          CPU resources with other applications, or if your workload or
          number of concurrent users is growing. The correct setting
          depends on workload, computing environment, and the version of
          MySQL that you are running. You will need to test a range of
          values to determine the setting that provides the best
          performance. innodb_thread_concurrency is a
          dynamic variable, which allows you to experiment with
          different settings on a live test system. If a particular
          setting performs poorly, you can quickly set
          innodb_thread_concurrency back to 0.
        
Use the following guidelines to help find and maintain an appropriate setting:
              If the number of concurrent user threads for a workload is
              less than 64, set
              innodb_thread_concurrency=0.
            
              If your workload is consistently heavy or occasionally
              spikes, start by setting
              innodb_thread_concurrency=128, and
              lowering the value to 96, 80, 64, and so on, until you
              find the number of threads that provides the best
              performance. For example, suppose your system typically
              has 40 to 50 users, but periodically the number increases
              to 60, 70, or even 200. You find that performance is
              stable at 80 concurrent users but starts to show a
              regression above this number. In this case, you would set
              innodb_thread_concurrency=80 to avoid
              impacting performance.
            
              If you do not want InnoDB to use more
              than a certain number of vCPUs for user threads (20 vCPUs
              for example), set
              innodb_thread_concurrency to this
              number (or possibly lower, depending on performance
              results). If your goal is to isolate MySQL from other
              applications, you may consider binding the
              mysqld process exclusively to the
              vCPUs. Be aware, however, that exclusive binding could
              result in non-optimal hardware usage if the
              mysqld process is not consistently
              busy. In this case, you might bind the
              mysqld process to the vCPUs but also
              allow other applications to use some or all of the vCPUs.
                From an operating system perspective, using a resource
                management solution (if available) to manage how CPU
                time is shared among applications may be preferable to
                binding the mysqld process. For
                example, you could assign 90% of vCPU time to a given
                application while other critical processes are
                not running, and scale that value back to 40%
                when other critical processes are
                running.
              innodb_thread_concurrency values that
              are too high can cause performance regression due to
              increased contention on system internals and resources.
            
              In some cases, the optimal
              innodb_thread_concurrency setting can
              be smaller than the number of vCPUs.
            
              Monitor and analyze your system regularly. Changes to
              workload, number of users, or computing environment may
              require that you adjust the
              innodb_thread_concurrency setting.
For related information, see Section 14.3.6, “Configuring Thread Concurrency for InnoDB”.
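Following the guidelines above, a tuning session might look like the following sketch; each value is held long enough to observe throughput before stepping down, and the setting can be reverted to 0 at any time:

SET GLOBAL innodb_thread_concurrency = 128;
-- observe performance, then step down through 96, 80, 64, ...
SET GLOBAL innodb_thread_concurrency = 96;
-- revert to unlimited concurrency if a setting performs poorly
SET GLOBAL innodb_thread_concurrency = 0;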
| Command-Line Format | --innodb_thread_sleep_delay=# | ||
| System Variable | Name | innodb_thread_sleep_delay | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values (32-bit platforms, <= 5.6.16) | Type | integer | |
| Default | 10000 | ||
| Min Value | 0 | ||
| Max Value | 4294967295 | ||
| Permitted Values (64-bit platforms, <= 5.6.16) | Type | integer | |
| Default | 10000 | ||
| Min Value | 0 | ||
| Max Value | 18446744073709551615 | ||
| Permitted Values (>= 5.6.17) | Type | integer | |
| Default | 10000 | ||
| Min Value | 0 | ||
| Max Value | 1000000 | ||
          How long InnoDB threads sleep before
          joining the InnoDB queue, in microseconds.
          The default value is 10000. A value of 0 disables sleep. In
          MySQL 5.6.3 and higher, you can set the configuration option
          innodb_adaptive_max_sleep_delay
          to the highest value you would allow for
          innodb_thread_sleep_delay, and InnoDB
          automatically adjusts
          innodb_thread_sleep_delay up or down
          depending on the current thread-scheduling activity. This
          dynamic adjustment helps the thread scheduling mechanism to
          work smoothly during times when the system is lightly loaded
          and when it is operating near full capacity.
        
For more information, see Section 14.3.6, “Configuring Thread Concurrency for InnoDB”.
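For example, to let InnoDB adjust the sleep delay automatically up to a chosen ceiling (the values shown are illustrative only):

SET GLOBAL innodb_adaptive_max_sleep_delay = 150000;  -- upper bound for automatic adjustment
SET GLOBAL innodb_thread_sleep_delay = 10000;         -- starting value, adjusted up or down automatically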
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_undo_directory=dir_name | ||
| System Variable | Name | innodb_undo_directory | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | directory name | |
| Default | . | ||
          The relative or absolute directory path where
          InnoDB creates separate tablespaces for the
          undo logs. Typically used to place those logs on a different
          storage device. Used in conjunction with
          innodb_undo_logs and
          innodb_undo_tablespaces,
          which determine the disk layout of the undo logs outside the
          system
          tablespace. The default value of
          “.” represents the same
          directory where InnoDB creates its other
          log files by default.
An absolute directory path must be set for embedded MySQL installations. Otherwise, the server may not be able to locate undo tablespaces that are created when the MySQL instance is initialized.
For more information about configuring separate tablespaces for undo logs, see Section 14.4.7, “Storing InnoDB Undo Logs in Separate Tablespaces”.
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_undo_logs=# | ||
| System Variable | Name | innodb_undo_logs | |
| Variable Scope | Global | ||
| Dynamic Variable | Yes | ||
| Permitted Values | Type | integer | |
| Default | 128 | ||
| Min Value | 0 | ||
| Max Value | 128 | ||
          Defines the number of undo logs (otherwise referred to as
          rollback
          segments) used by InnoDB for
          data-modifying transactions that generate undo records. Each
          undo log can host up to a maximum of 1024 transactions.
        
          This setting is appropriate for tuning performance if you
          observe mutex contention related to the undo logs. The
          innodb_undo_logs option
          replaces
          innodb_rollback_segments. For
          the total number of available undo logs, rather than the
          number of active ones, see the
          Innodb_available_undo_logs
          status variable.
        
          Although you can increase or decrease the number of undo logs
          used by InnoDB, the number of undo logs
          physically present in the system never decreases. Thus you
          might start with a low value for this parameter and gradually
          increase it, to avoid allocating undo logs that are not
          required. If innodb_undo_logs
          is not set, it defaults to the maximum value of 128.
        
For more information about undo logs, see Section 14.2.3, “InnoDB Multi-Versioning”. For information about configuring separate tablespaces for undo logs, see Section 14.4.7, “Storing InnoDB Undo Logs in Separate Tablespaces”.
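For illustration, the following sketch checks how many undo logs are physically available and then raises the number of active ones (the value 32 is illustrative only):

SHOW GLOBAL STATUS LIKE 'Innodb_available_undo_logs';
SET GLOBAL innodb_undo_logs = 32;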
| Introduced | 5.6.3 | ||
| Command-Line Format | --innodb_undo_tablespaces=# | ||
| System Variable | Name | innodb_undo_tablespaces | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 0 | ||
| Min Value | 0 | ||
| Max Value | 126 | ||
          The number of
          tablespace files that
          undo logs are divided
          between when you use a non-zero
          innodb_undo_logs setting. By
          default, all undo logs are part of the
          system
          tablespace, and the system tablespace always contains
          one undo tablespace in addition to those configured by
          innodb_undo_tablespaces.
        
          Because undo logs can become large during long-running
          transactions, having undo logs in multiple tablespaces reduces
          the maximum size of any one tablespace. The undo tablespace
          files are created in the location defined by
          innodb_undo_directory, with
          names in the form of
          undoN, where
          N is a sequential series of
          integers (including leading zeros). The default size of an
          undo tablespace file is 10M. The number of
          innodb_undo_tablespaces must
          be set prior to initializing InnoDB.
          Attempting to restart InnoDB with a greater
          number of undo tablespaces than you specified when you first
          created the database will result in a failed start and an
          error stating that InnoDB did not find the
          expected number of undo tablespaces.
        
For more information about configuring separate tablespaces for undo logs, see Section 14.4.7, “Storing InnoDB Undo Logs in Separate Tablespaces”.
| Command-Line Format | --innodb_use_native_aio=# | ||
| System Variable | Name | innodb_use_native_aio | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
Specifies whether to use the Linux asynchronous I/O subsystem. This variable applies to Linux systems only, and cannot be changed while the server is running. Normally, you do not need to touch this option, because it is enabled by default.
          As of MySQL 5.5, the
          asynchronous I/O
          capability that InnoDB has on Windows
          systems is available on Linux systems. (Other Unix-like
          systems continue to use synchronous I/O calls.) This feature
          improves the scalability of heavily I/O-bound systems, which
          typically show many pending reads/writes in the output of the
          command SHOW ENGINE INNODB STATUS\G.
        
          Running with a large number of InnoDB I/O
          threads, and especially running multiple such instances on the
          same server machine, can exceed capacity limits on Linux
          systems. In this case, you may receive the following error:
        
EAGAIN: The specified maxevents exceeds the user's limit of available events.
          You can typically address this error by writing a higher limit
          to /proc/sys/fs/aio-max-nr.
        
          However, if a problem with the asynchronous I/O subsystem in
          the OS prevents InnoDB from starting, you
          can start the server with this variable disabled (use
          innodb_use_native_aio=0 in
          the option file). This option may also be turned off
          automatically during startup if InnoDB
          detects a potential problem such as a combination of
          tmpdir location, tmpfs
          filesystem, and Linux kernel that does not support AIO on
          tmpfs.
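          If the startup problem described above occurs, a minimal
          option-file sketch for disabling native AIO is shown below.
          Raising the limit in /proc/sys/fs/aio-max-nr, by contrast, is an
          operating-system-level change rather than a MySQL setting, and
          is the preferred remedy when the error is only a capacity limit:

[mysqld]
innodb_use_native_aio = 0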
        
| Deprecated | 5.6.3 | ||
| Command-Line Format | --innodb_use_sys_malloc=# | ||
| System Variable | Name | innodb_use_sys_malloc | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | boolean | |
| Default | ON | ||
          Whether InnoDB uses the operating system
          memory allocator (ON) or its own
          (OFF). The default value is
          ON. See
          Section 14.3.4, “Configuring the Memory Allocator for InnoDB” for more
          information.
        
          As of MySQL 5.6.3,
          innodb_use_sys_malloc is
          deprecated and will be removed in a future MySQL release.
        
          The InnoDB version number. Starting in
          5.6.11, the separate numbering for InnoDB
          is discontinued and this value is the same as for the
          version variable.
        
| Command-Line Format | --innodb_write_io_threads=# | ||
| System Variable | Name | innodb_write_io_threads | |
| Variable Scope | Global | ||
| Dynamic Variable | No | ||
| Permitted Values | Type | integer | |
| Default | 4 | ||
| Min Value | 1 | ||
| Max Value | 64 | ||
          The number of I/O threads for write operations in
          InnoDB. The default value is 4. Its
          counterpart for read threads is
          innodb_read_io_threads. See
          Section 14.3.7, “Configuring the Number of Background InnoDB I/O Threads” for
          more information. For general I/O tuning advice, see
          Section 8.5.8, “Optimizing InnoDB Disk I/O”.
            On Linux systems, running multiple MySQL servers (typically
            more than 12) with default settings for
            innodb_read_io_threads,
            innodb_write_io_threads, and the Linux
            aio-max-nr setting can exceed system
            limits. Ideally, increase the aio-max-nr
            setting; as a workaround, you might reduce the settings for
            one or both of the MySQL configuration options.
      You should also take into consideration the value of
      sync_binlog, which controls
      synchronization of the binary log to disk.
    
For general I/O tuning advice, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
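      As a sketch only (the values of 8 are illustrative and depend on
      the storage subsystem), both background I/O thread settings can be
      raised together in the option file; neither variable is dynamic, so
      a server restart is required:

[mysqld]
innodb_read_io_threads = 8
innodb_write_io_threads = 8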
    This section provides information and usage examples for
    InnoDB
    INFORMATION_SCHEMA
    tables.
  
    InnoDB INFORMATION_SCHEMA
    tables provide metadata, status information, and statistics about
    various aspects of the InnoDB storage engine. You
    can view a list of InnoDB
    INFORMATION_SCHEMA tables by issuing a
    SHOW TABLES statement on the
    INFORMATION_SCHEMA database:
  
mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB%';
    For table definitions, see Section 21.29, “INFORMATION_SCHEMA Tables for InnoDB”. For
    general information regarding the MySQL
    INFORMATION_SCHEMA database, see
    Chapter 21, INFORMATION_SCHEMA Tables.
      There are two pairs of InnoDB
      INFORMATION_SCHEMA tables about compression
      that can provide insight into how well compression is working
      overall:
          INNODB_CMP and
          INNODB_CMP_RESET
          contain information about the number of compression operations
          and the amount of time spent performing compression.
        
          INNODB_CMPMEM and
          INNODB_CMPMEM_RESET
          contain information about the way memory is allocated for
          compression.
        The INNODB_CMP and
        INNODB_CMP_RESET
        tables contain status information about operations related to
        compressed tables, which are described in
        Section 14.6, “InnoDB Table Compression”. The
        PAGE_SIZE column reports the compressed
        page size.
      
        These two tables have identical contents, but reading from
        INNODB_CMP_RESET
        resets the statistics on compression and uncompression
        operations. For example, if you archive the output of
        INNODB_CMP_RESET
        every 60 minutes, you see the statistics for each hourly period.
        If you monitor the output of
        INNODB_CMP (making sure never to
        read
        INNODB_CMP_RESET),
        you see the cumulative statistics since InnoDB was started.
      
For the table definition, see Section 21.29.1, “The INFORMATION_SCHEMA INNODB_CMP and INNODB_CMP_RESET Tables”.
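        For example, the difference between the two tables can be seen
        directly from the mysql client: the first statement below returns
        cumulative statistics without side effects, while the second
        returns the statistics accumulated since the last read of the
        _RESET table and resets them as a side effect:

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_CMP;
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_CMP_RESET;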
        The INNODB_CMPMEM and
        INNODB_CMPMEM_RESET
        tables contain status information about compressed pages that
        reside in the buffer pool. Please consult
        Section 14.6, “InnoDB Table Compression” for further information on
        compressed tables and the use of the buffer pool. The
        INNODB_CMP and
        INNODB_CMP_RESET
        tables should provide more useful statistics on compression.
        InnoDB uses a
        buddy allocator
        system to manage memory allocated to
        pages of various sizes,
        from 1KB to 16KB. Each row of the two tables described here
        corresponds to a single page size.
      
        The INNODB_CMPMEM and
        INNODB_CMPMEM_RESET
        tables have identical contents, but reading from
        INNODB_CMPMEM_RESET
        resets the statistics on relocation operations. For example, if
        every 60 minutes you archived the output of
        INNODB_CMPMEM_RESET,
        it would show the hourly statistics. If you never read
        INNODB_CMPMEM_RESET
        and monitored the output of
        INNODB_CMPMEM instead, it would
        show the cumulative statistics since InnoDB
        was started.
      
For the table definition, see Section 21.29.3, “The INFORMATION_SCHEMA INNODB_CMPMEM and INNODB_CMPMEM_RESET Tables”.
Example 14.10 Using the Compression Information Schema Tables
          The following is sample output from a database that contains
          compressed tables (see Section 14.6, “InnoDB Table Compression”,
          INNODB_CMP,
          INNODB_CMP_PER_INDEX, and
          INNODB_CMPMEM).
        
          The following table shows the contents of
          INFORMATION_SCHEMA.INNODB_CMP
          under a light workload.
          The only compressed page size that the buffer pool contains is
          8K. Compressing or uncompressing pages has consumed less than
          a second since the time the statistics were reset, because the
          columns COMPRESS_TIME and
          UNCOMPRESS_TIME are zero.
| page size | compress ops | compress ops ok | compress time | uncompress ops | uncompress time | 
|---|---|---|---|---|---|
| 1024 | 0 | 0 | 0 | 0 | 0 | 
| 2048 | 0 | 0 | 0 | 0 | 0 | 
| 4096 | 0 | 0 | 0 | 0 | 0 | 
| 8192 | 1048 | 921 | 0 | 61 | 0 | 
| 16384 | 0 | 0 | 0 | 0 | 0 | 
          According to INNODB_CMPMEM, there
          are 6169 compressed 8KB pages in the
          buffer pool. The only
          other allocated block size is 64 bytes. The smallest
          PAGE_SIZE in
          INNODB_CMPMEM is used for block
          descriptors of those compressed pages for which no
          uncompressed page exists in the buffer pool. We see that there
          are 5910 such pages. Indirectly, we see that 259 (6169-5910)
          compressed pages also exist in the buffer pool in uncompressed
          form.
        
          The following table shows the contents of
          INFORMATION_SCHEMA.INNODB_CMPMEM
          under a light workload.
          Some memory is unusable due to fragmentation of the memory
          allocator for compressed pages:
          SUM(PAGE_SIZE*PAGES_FREE)=6784. This is
          because small memory allocation requests are fulfilled by
          splitting bigger blocks, starting from the 16K blocks that are
          allocated from the main buffer pool, using the buddy
          allocation system. The fragmentation is this low because some
          allocated blocks have been relocated (copied) to form bigger
          adjacent free blocks. This copying of
          SUM(PAGE_SIZE*RELOCATION_OPS) bytes has
          consumed less than a second
          (SUM(RELOCATION_TIME)=0).
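          The aggregate values discussed above can be computed with a
          query along these lines (the column names are the ones
          referenced in this example):

mysql> SELECT SUM(PAGE_SIZE*PAGES_FREE) AS fragmented_bytes,
    ->        SUM(PAGE_SIZE*RELOCATION_OPS) AS relocated_bytes,
    ->        SUM(RELOCATION_TIME) AS relocation_seconds
    -> FROM INFORMATION_SCHEMA.INNODB_CMPMEM;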
      Three InnoDB
      INFORMATION_SCHEMA tables make it easy to
      monitor transactions and diagnose possible locking problems. The
      three tables are INNODB_TRX,
      INNODB_LOCKS, and
      INNODB_LOCK_WAITS.
          
          INNODB_TRX: Contains information
          about every transaction currently executing inside
          InnoDB, including whether the transaction
          is waiting for a lock, when the transaction started, and the
          particular SQL statement the transaction is executing.
        
          
          INNODB_LOCKS: Each transaction in
          InnoDB that is waiting for another transaction to release a
          lock (INNODB_TRX.TRX_STATE='LOCK WAIT') is
          blocked by exactly one “blocking lock request”.
          That blocking lock request is for a row or table lock held by
          another transaction in an incompatible mode. The waiting or
          blocked transaction cannot proceed until the other transaction
          commits or rolls back, thereby releasing the requested lock.
          For every blocked transaction,
          INNODB_LOCKS contains one row
          that describes each lock the transaction has requested, and
          for which it is waiting.
          INNODB_LOCKS also contains one
          row for each lock that is blocking another transaction,
          whatever the state of the transaction that holds the lock
          ('RUNNING', 'LOCK WAIT',
          'ROLLING BACK' or
          'COMMITTING'). The lock that is blocking a
          transaction is always held in a mode (read vs. write, shared
          vs. exclusive) incompatible with the mode of the requested lock.
        
          
          INNODB_LOCK_WAITS: Using this
          table, you can tell which transactions are waiting for a given
          lock, or for which lock a given transaction is waiting. This
          table contains one or more rows for each
          blocked transaction, indicating the lock
          it has requested and any locks that are blocking that request.
          The REQUESTED_LOCK_ID refers to the lock
          that a transaction is requesting, and the
          BLOCKING_LOCK_ID refers to the lock (held
          by another transaction) that is preventing the first
          transaction from proceeding. For any given blocked
          transaction, all rows in
          INNODB_LOCK_WAITS have the same
          value for REQUESTED_LOCK_ID and different
          values for BLOCKING_LOCK_ID.
Example 14.11 Identifying Blocking Transactions
          It is sometimes helpful to be able to identify which
          transaction is blocking another. You can use the
          INFORMATION_SCHEMA tables to find out which
          transaction is waiting for another, and which resource is
          being requested.
        
Suppose you have the following scenario, with three users running concurrently. Each user (or session) corresponds to a MySQL thread, and executes one transaction after another. Consider the state of the system when these users have issued the following commands, but none has yet committed its transaction:
               User A: 
            
BEGIN; SELECT a FROM t FOR UPDATE; SELECT SLEEP(100);
               User B: 
            
SELECT b FROM t FOR UPDATE;
               User C: 
            
SELECT c FROM t FOR UPDATE;
In this scenario, you can use this query to see who is waiting for whom:
SELECT r.trx_id waiting_trx_id,
       r.trx_mysql_thread_id waiting_thread,
       r.trx_query waiting_query,
       b.trx_id blocking_trx_id,
       b.trx_mysql_thread_id blocking_thread,
       b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
| waiting trx id | waiting thread | waiting query | blocking trx id | blocking thread | blocking query | 
|---|---|---|---|---|---|
| A4 | 6 | SELECT b FROM t FOR UPDATE | A3 | 5 | SELECT SLEEP(100) | 
| A5 | 7 | SELECT c FROM t FOR UPDATE | A3 | 5 | SELECT SLEEP(100) | 
| A5 | 7 | SELECT c FROM t FOR UPDATE | A4 | 6 | SELECT b FROM t FOR UPDATE | 
In the above result, you can identify users by the “waiting query” or “blocking query”. As you can see:
              User B (trx id 'A4', thread
              6) and User C (trx id
              'A5', thread 7) are
              both waiting for User A (trx id 'A3',
              thread 5).
            
User C is waiting for User B as well as User A.
          You can see the underlying data in the tables
          INNODB_TRX,
          INNODB_LOCKS,
          and
          INNODB_LOCK_WAITS.
        
          The following table shows some sample contents of
          INFORMATION_SCHEMA.INNODB_TRX.
| trx id | trx state | trx started | trx requested lock id | trx wait started | trx weight | trx mysql thread id | trx query | 
|---|---|---|---|---|---|---|---|
| A3 | RUNNING | 2008-01-15 16:44:54 | NULL | NULL | 2 | 5 | SELECT SLEEP(100) | 
| A4 | LOCK WAIT | 2008-01-15 16:45:09 | A4:1:3:2 | 2008-01-15 16:45:09 | 2 | 6 | SELECT b FROM t FOR UPDATE | 
| A5 | LOCK WAIT | 2008-01-15 16:45:14 | A5:1:3:2 | 2008-01-15 16:45:14 | 2 | 7 | SELECT c FROM t FOR UPDATE | 
          The following table shows some sample contents of
          INFORMATION_SCHEMA.INNODB_LOCKS.
| lock id | lock trx id | lock mode | lock type | lock table | lock index | lock space | lock page | lock rec | lock data | 
|---|---|---|---|---|---|---|---|---|---|
| A3:1:3:2 | A3 | X | RECORD | `test`.`t` | `PRIMARY` | 1 | 3 | 2 | 0x0200 | 
| A4:1:3:2 | A4 | X | RECORD | `test`.`t` | `PRIMARY` | 1 | 3 | 2 | 0x0200 | 
| A5:1:3:2 | A5 | X | RECORD | `test`.`t` | `PRIMARY` | 1 | 3 | 2 | 0x0200 | 
          The following table shows some sample contents of
          INFORMATION_SCHEMA.INNODB_LOCK_WAITS.
Example 14.12 More Complex Example of Transaction Data in Information Schema Tables
          Sometimes you would like to correlate the internal
          InnoDB locking information with
          session-level information maintained by MySQL. For example,
          you might like to know, for a given InnoDB
          transaction ID, the corresponding MySQL session ID and name of
          the user that may be holding a lock, and thus blocking another
          transaction.
        
          The following output from the
          INFORMATION_SCHEMA tables is taken from a
          somewhat loaded system.
        
As can be seen in the following tables, there are several transactions running.
          The following INNODB_LOCKS and
          INNODB_LOCK_WAITS tables show that:
              Transaction
              77F
              (executing an INSERT) is waiting for
              transactions
              77E,
              77D
              and
              77B
              to commit.
            
              Transaction
              77E
              (executing an INSERT) is waiting for transactions
              77D
              and
              77B
              to commit.
            
              Transaction
              77D
              (executing an INSERT) is waiting for transaction
              77B
              to commit.
            
              Transaction
              77B
              (executing an INSERT) is waiting for transaction
              77A
              to commit.
            
              Transaction
              77A
              is running, currently executing SELECT.
            
              Transaction
              E56
              (executing an INSERT) is waiting for
              transaction
              E55
              to commit.
            
              Transaction
              E55
              (executing an INSERT) is waiting for
              transaction
              19C
              to commit.
            
              Transaction
              19C
              is running, currently executing an
              INSERT.
          There may be an inconsistency between queries shown in the two
          tables INNODB_TRX.TRX_QUERY and
          PROCESSLIST.INFO. The current transaction
          ID for a thread, and the query being executed in that
          transaction, may be different in these two tables for any
          given thread. See
          Section 14.12.2.3.1, “Potential Inconsistency with PROCESSLIST Data”
          for an explanation.
        
          The following table shows the contents of
          INFORMATION_SCHEMA.PROCESSLIST in
          a system running a heavy
          workload.
| ID | USER | HOST | DB | COMMAND | TIME | STATE | INFO | 
|---|---|---|---|---|---|---|---|
| 384 | root | localhost | test | Query | 10 | update | insert into t2 values … | 
| 257 | root | localhost | test | Query | 3 | update | insert into t2 values … | 
| 130 | root | localhost | test | Query | 0 | update | insert into t2 values … | 
| 61 | root | localhost | test | Query | 1 | update | insert into t2 values … | 
| 8 | root | localhost | test | Query | 1 | update | insert into t2 values … | 
| 4 | root | localhost | test | Query | 0 | preparing | SELECT * FROM processlist | 
| 2 | root | localhost | test | Sleep | 566 |  | NULL | 
          The following table shows the contents of
          INFORMATION_SCHEMA.INNODB_TRX in
          a system running a heavy
          workload.
| trx id | trx state | trx started | trx requested lock id | trx wait started | trx weight | trx mysql thread id | trx query | 
|---|---|---|---|---|---|---|---|
| 77F | LOCK WAIT | 2008-01-15 13:10:16 | 77F:806 | 2008-01-15 13:10:16 | 1 | 876 | insert into t09 (D, B, C) values … | 
| 77E | LOCK WAIT | 2008-01-15 13:10:16 | 77E:806 | 2008-01-15 13:10:16 | 1 | 875 | insert into t09 (D, B, C) values … | 
| 77D | LOCK WAIT | 2008-01-15 13:10:16 | 77D:806 | 2008-01-15 13:10:16 | 1 | 874 | insert into t09 (D, B, C) values … | 
| 77B | LOCK WAIT | 2008-01-15 13:10:16 | 77B:733:12:1 | 2008-01-15 13:10:16 | 4 | 873 | insert into t09 (D, B, C) values … | 
| 77A | RUNNING | 2008-01-15 13:10:16 | NULL | NULL | 4 | 872 | select b, c from t09 where … | 
| E56 | LOCK WAIT | 2008-01-15 13:10:06 | E56:743:6:2 | 2008-01-15 13:10:06 | 5 | 384 | insert into t2 values … | 
| E55 | LOCK WAIT | 2008-01-15 13:10:06 | E55:743:38:2 | 2008-01-15 13:10:13 | 965 | 257 | insert into t2 values … | 
| 19C | RUNNING | 2008-01-15 13:09:10 | NULL | NULL | 2900 | 130 | insert into t2 values … | 
| E15 | RUNNING | 2008-01-15 13:08:59 | NULL | NULL | 5395 | 61 | insert into t2 values … | 
| 51D | RUNNING | 2008-01-15 13:08:47 | NULL | NULL | 9807 | 8 | insert into t2 values … | 
          The following table shows the contents of
          INFORMATION_SCHEMA.INNODB_LOCK_WAITS
          in a system running a heavy
          workload.
| requesting trx id | requested lock id | blocking trx id | blocking lock id | 
|---|---|---|---|
| 77F | 77F:806 | 77E | 77E:806 | 
| 77F | 77F:806 | 77D | 77D:806 | 
| 77F | 77F:806 | 77B | 77B:806 | 
| 77E | 77E:806 | 77D | 77D:806 | 
| 77E | 77E:806 | 77B | 77B:806 | 
| 77D | 77D:806 | 77B | 77B:806 | 
| 77B | 77B:733:12:1 | 77A | 77A:733:12:1 | 
| E56 | E56:743:6:2 | E55 | E55:743:6:2 | 
| E55 | E55:743:38:2 | 19C | 19C:743:38:2 | 
          The following table shows the contents of
          INFORMATION_SCHEMA.INNODB_LOCKS
          in a system running a heavy
          workload.
| lock id | lock trx id | lock mode | lock type | lock table | lock index | lock space | lock page | lock rec | lock data | 
|---|---|---|---|---|---|---|---|---|---|
| 77F:806 | 77F | AUTO_INC | TABLE | `test`.`t09` | NULL | NULL | NULL | NULL | NULL | 
| 77E:806 | 77E | AUTO_INC | TABLE | `test`.`t09` | NULL | NULL | NULL | NULL | NULL | 
| 77D:806 | 77D | AUTO_INC | TABLE | `test`.`t09` | NULL | NULL | NULL | NULL | NULL | 
| 77B:806 | 77B | AUTO_INC | TABLE | `test`.`t09` | NULL | NULL | NULL | NULL | NULL | 
| 77B:733:12:1 | 77B | X | RECORD | `test`.`t09` | `PRIMARY` | 733 | 12 | 1 | supremum pseudo-record | 
| 77A:733:12:1 | 77A | X | RECORD | `test`.`t09` | `PRIMARY` | 733 | 12 | 1 | supremum pseudo-record | 
| E56:743:6:2 | E56 | S | RECORD | `test`.`t2` | `PRIMARY` | 743 | 6 | 2 | 0, 0 | 
| E55:743:6:2 | E55 | X | RECORD | `test`.`t2` | `PRIMARY` | 743 | 6 | 2 | 0, 0 | 
| E55:743:38:2 | E55 | S | RECORD | `test`.`t2` | `PRIMARY` | 743 | 38 | 2 | 1922, 1922 | 
| 19C:743:38:2 | 19C | X | RECORD | `test`.`t2` | `PRIMARY` | 743 | 38 | 2 | 1922, 1922 | 
        When a transaction updates a row in a table, or locks it with
        SELECT FOR UPDATE, InnoDB
        establishes a list or queue of locks on that row. Similarly,
        InnoDB maintains a list of locks on a table
        for table-level locks. If a second transaction wants to update a
        row or lock a table already locked by a prior transaction in an
        incompatible mode, InnoDB adds a lock request
        for the row to the corresponding queue. For a lock to be
        acquired by a transaction, all incompatible lock requests
        previously entered into the lock queue for that row or table
        must be removed (the transactions holding or requesting those
        locks either commit or roll back).
      
        A transaction may have any number of lock requests for different
        rows or tables. At any given time, a transaction may request a
        lock that is held by another transaction, in which case it is
        blocked by that other transaction. The requesting transaction
        must wait for the transaction that holds the blocking lock to
        commit or roll back. If a transaction is not waiting for a lock,
        it is in a 'RUNNING' state. If a transaction
        is waiting for a lock, it is in a 'LOCK WAIT'
        state.
      
        The INNODB_LOCKS table holds one or
        more rows for each 'LOCK WAIT' transaction,
        indicating any lock requests that are preventing its progress.
        This table also contains one row describing each lock in a queue
        of locks pending for a given row or table. The
        INNODB_LOCK_WAITS table shows which
        locks already held by a transaction are blocking locks requested
        by other transactions.
        The data exposed by the transaction and locking tables
        (INNODB_TRX,
        INNODB_LOCKS, and
        INNODB_LOCK_WAITS) represent a
        glimpse into fast-changing data. This is not like other (user)
        tables, where the data changes only when application-initiated
        updates occur. The underlying data is internal system-managed
        data, and can change very quickly.
      
        For performance reasons, and to minimize the chance of
        misleading JOINs between the
        InnoDB transaction and locking
        INFORMATION_SCHEMA tables,
        InnoDB collects the required transaction and
        locking information into an intermediate buffer whenever a
        SELECT on any of the tables is issued. This
        buffer is refreshed only if more than 0.1 seconds has elapsed
        since the last time the buffer was read. The data needed to fill
        the three tables is fetched atomically and consistently and is
        saved in this global internal buffer, forming a point-in-time
        “snapshot”. If multiple table accesses occur within
        0.1 seconds (as they almost certainly do when MySQL processes a
        join among these tables), then the same snapshot is used to
        satisfy the query.
      
        A correct result is returned when you JOIN
        any of these tables together in a single query, because the data
        for the three tables comes from the same snapshot. Because the
        buffer is not refreshed with every query of any of these tables,
        if you issue separate queries against these tables within a
        tenth of a second, the results are the same from query to query.
        On the other hand, two separate queries of the same or different
        tables issued more than a tenth of a second apart may see
        different results, since the data come from different snapshots.
      
        Because InnoDB must temporarily stall while
        the transaction and locking data is collected, too frequent
        queries of these tables can negatively impact performance as
        seen by other users.
      
        As these tables contain sensitive information (at least
        INNODB_LOCKS.LOCK_DATA and
        INNODB_TRX.TRX_QUERY), for security reasons,
        only users with the PROCESS privilege are
        allowed to SELECT from them.
          As described in
          Section 14.12.2.3, “Data Persistence and Consistency for InnoDB Transaction and Locking
        Tables”,
          the data that fills the InnoDB transaction
          and locking tables (INNODB_TRX,
          INNODB_LOCKS and
          INNODB_LOCK_WAITS) is fetched
          automatically and saved to an intermediate buffer that
          provides a “point-in-time” snapshot. The data
          across all three tables is consistent when queried from the
          same snapshot. However, the underlying data changes so fast
          that similar glimpses at other, similarly fast-changing data,
          may not be in synchrony. Thus, you should be careful when
          comparing data in the InnoDB transaction
          and locking tables with data in the
          PROCESSLIST table. The data from
          the PROCESSLIST table does not
          come from the same snapshot as the data about locking and
          transactions. Even if you issue a single
          SELECT (joining
          INNODB_TRX and
          PROCESSLIST, for example), the
          content of those tables is generally not consistent.
          INNODB_TRX may reference rows
          that are not present in
          PROCESSLIST, or the currently
          executing SQL query of a transaction, shown in
          INNODB_TRX.TRX_QUERY, may differ from the
          one in PROCESSLIST.INFO.
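          As a sketch only (not a snapshot-consistent view, for the
          reasons just described), the session-level and
          transaction-level data can be correlated through the thread ID:

mysql> SELECT p.ID AS mysql_thread_id, p.USER, p.INFO,
    ->        t.trx_id, t.trx_state, t.trx_query
    -> FROM INFORMATION_SCHEMA.INNODB_TRX t
    -> JOIN INFORMATION_SCHEMA.PROCESSLIST p
    ->   ON p.ID = t.trx_mysql_thread_id;

          Treat the result as approximate, because the two tables are
          populated from different snapshots.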
      As of MySQL 5.6, you can extract metadata about schema objects
      managed by InnoDB using
      InnoDB INFORMATION_SCHEMA
      system tables. This information comes from the
      InnoDB internal system tables (also referred to
      as the InnoDB data dictionary), which cannot be
      queried directly like regular InnoDB tables.
      Traditionally, you would get this type of information using the
      techniques from Section 14.14, “InnoDB Monitors”, setting up
      InnoDB monitors and parsing the output from the
      SHOW ENGINE INNODB
      STATUS command. The InnoDB
      INFORMATION_SCHEMA table interface allows you
      to query this data using SQL.
    
      With the exception of
      INNODB_SYS_TABLESTATS, for which
      there is no corresponding internal system table,
      InnoDB INFORMATION_SCHEMA
      system tables are populated with data read directly from internal
      InnoDB system tables rather than from metadata
      that is cached in memory.
    
      InnoDB INFORMATION_SCHEMA
      system tables include the tables listed below.
      INNODB_SYS_DATAFILES and
      INNODB_SYS_TABLESPACES were added in
      MySQL 5.6.6 with the introduction of support for the
      DATA DIRECTORY='directory' clause
      of the CREATE TABLE statement,
      which allows InnoDB
      file-per-table
      tablespaces (.ibd files) to be created in a
      location outside the MySQL data directory.
    
mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB_SYS%';
+--------------------------------------------+
| Tables_in_information_schema (INNODB_SYS%) |
+--------------------------------------------+
| INNODB_SYS_DATAFILES                       |
| INNODB_SYS_TABLESTATS                      |
| INNODB_SYS_FOREIGN                         |
| INNODB_SYS_COLUMNS                         |
| INNODB_SYS_INDEXES                         |
| INNODB_SYS_FIELDS                          |
| INNODB_SYS_TABLESPACES                     |
| INNODB_SYS_FOREIGN_COLS                    |
| INNODB_SYS_TABLES                          |
+--------------------------------------------+
9 rows in set (0.00 sec)
The table names are indicative of the type of data provided:
          INNODB_SYS_TABLES provides
          metadata about InnoDB tables, equivalent to
          the information in the SYS_TABLES table in
          the InnoDB data dictionary.
        
          INNODB_SYS_COLUMNS provides
          metadata about InnoDB table columns,
          equivalent to the information in the
          SYS_COLUMNS table in the
          InnoDB data dictionary.
        
          INNODB_SYS_INDEXES provides
          metadata about InnoDB indexes, equivalent
          to the information in the SYS_INDEXES table
          in the InnoDB data dictionary.
        
          INNODB_SYS_FIELDS provides
          metadata about the key columns (fields) of
          InnoDB indexes, equivalent to the
          information in the SYS_FIELDS table in the
          InnoDB data dictionary.
        
          INNODB_SYS_TABLESTATS provides a
          view of low-level status information about
          InnoDB tables that is derived from
          in-memory data structures. There is no corresponding internal
          InnoDB system table.
        
          INNODB_SYS_DATAFILES provides
          data file path information for InnoDB
          file-per-table tablespaces, equivalent to information in the
          SYS_DATAFILES table in the
          InnoDB data dictionary.
        
          INNODB_SYS_TABLESPACES provides
          metadata about InnoDB file-per-table
          tablespaces, equivalent to the information in the
          SYS_TABLESPACES table in the
          InnoDB data dictionary.
        
          INNODB_SYS_FOREIGN provides
          metadata about foreign keys defined on
          InnoDB tables, equivalent to the
          information in the SYS_FOREIGN table in the
          InnoDB data dictionary.
        
          INNODB_SYS_FOREIGN_COLS provides
          metadata about the columns of foreign keys that are defined on
          InnoDB tables, equivalent to the
          information in the SYS_FOREIGN_COLS table
          in the InnoDB data dictionary.
      InnoDB INFORMATION_SCHEMA
      system tables can be joined together through fields such as
      TABLE_ID, INDEX_ID, and
      SPACE, allowing you to easily retrieve all
      available data for an object you want to study or monitor.
    
      Refer to the InnoDB
      INFORMATION_SCHEMA
      documentation for information about the columns of each table.
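      As a brief sketch of such a join (Example 14.15 shows a larger
      one), the indexes defined on a given table can be listed by
      joining on TABLE_ID:

mysql> SELECT t.NAME AS table_name, i.NAME AS index_name, i.N_FIELDS
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES t
    -> JOIN INFORMATION_SCHEMA.INNODB_SYS_INDEXES i
    ->   ON i.TABLE_ID = t.TABLE_ID
    -> WHERE t.NAME = 'test/t1';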
Example 14.13 InnoDB INFORMATION_SCHEMA System Tables
        This example uses a simple table (t1) with a
        single index (i1) to demonstrate the type of
        metadata found in the InnoDB
        INFORMATION_SCHEMA system tables.
            Create a test database and table t1:
          
mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE t1 (col1 INT, col2 CHAR(10), col3 VARCHAR(10)) ENGINE = InnoDB;
mysql> CREATE INDEX i1 ON t1(col1);
            After creating the table t1, query
            INNODB_SYS_TABLES to locate the
            metadata for test/t1:
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME='test/t1' \G
*************************** 1. row ***************************
     TABLE_ID: 71
         NAME: test/t1
         FLAG: 1
       N_COLS: 6
        SPACE: 57
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact
ZIP_PAGE_SIZE: 0
1 row in set (0.00 sec)
            Table t1 has a
            TABLE_ID of 71. The
            FLAG field provides bit level information
            about table format and storage characteristics. There are
            six columns, three of which are hidden columns created by
            InnoDB (DB_ROW_ID,
            DB_TRX_ID, and
            DB_ROLL_PTR). The ID of the table's
            SPACE is 57 (a value of 0 would indicate
            that the table resides in the system tablespace). The
            FILE_FORMAT is Antelope, and the
            ROW_FORMAT is Compact.
            ZIP_PAGE_SIZE only applies to tables with
            a Compressed row format.
          
            Using the TABLE_ID information from
            INNODB_SYS_TABLES, query the
            INNODB_SYS_COLUMNS table for
            information about the table's columns.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_COLUMNS where TABLE_ID = 71 \G 
*************************** 1. row ***************************
TABLE_ID: 71
    NAME: col1
     POS: 0
   MTYPE: 6
  PRTYPE: 1027
     LEN: 4
*************************** 2. row ***************************
TABLE_ID: 71
    NAME: col2
     POS: 1
   MTYPE: 2
  PRTYPE: 524542
     LEN: 10
*************************** 3. row ***************************
TABLE_ID: 71
    NAME: col3
     POS: 2
   MTYPE: 1
  PRTYPE: 524303
     LEN: 10
3 rows in set (0.00 sec)
            In addition to the TABLE_ID and column
            NAME,
            INNODB_SYS_COLUMNS provides the
            ordinal position (POS) of each column
            (starting from 0 and incrementing sequentially), the column
            MTYPE or “main type” (6 =
            INT, 2 = CHAR, 1 = VARCHAR), the PRTYPE
            or “precise type” (a binary value with bits
            that represent the MySQL data type, character set code, and
            nullability), and the column length
            (LEN).
          
            Using the TABLE_ID information from
            INNODB_SYS_TABLES once again,
            query INNODB_SYS_INDEXES for
            information about the indexes associated with table
            t1.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES WHERE TABLE_ID = 71 \G
*************************** 1. row ***************************
INDEX_ID: 111
    NAME: GEN_CLUST_INDEX
TABLE_ID: 71
    TYPE: 1
N_FIELDS: 0
 PAGE_NO: 3
   SPACE: 57
*************************** 2. row ***************************
INDEX_ID: 112
    NAME: i1
TABLE_ID: 71
    TYPE: 0
N_FIELDS: 1
 PAGE_NO: 4
   SPACE: 57
2 rows in set (0.00 sec)
            INNODB_SYS_INDEXES returns data
            for two indexes. The first index is
            GEN_CLUST_INDEX, which is a clustered
            index created by InnoDB if the table does
            not have a user-defined clustered index. The second index
            (i1) is the user-defined secondary index.
          
            The INDEX_ID is an identifier for the
            index that is unique across all databases in an instance.
            The TABLE_ID identifies the table that
            the index is associated with. The index
            TYPE value indicates the type of index (1
            = Clustered Index, 0 = Secondary index). The
            N_FIELDS value is the number of fields
            that comprise the index. PAGE_NO is the
            root page number of the index B-tree, and
            SPACE is the ID of the tablespace where
            the index resides. A non-zero value indicates that the index
            does not reside in the system tablespace.
          
            Using the INDEX_ID information from
            INNODB_SYS_INDEXES, query
            INNODB_SYS_FIELDS for
            information about the fields of index i1.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FIELDS where INDEX_ID = 112 \G
*************************** 1. row ***************************
INDEX_ID: 112
    NAME: col1
     POS: 0
1 row in set (0.00 sec)
            INNODB_SYS_FIELDS provides the
            NAME of the indexed field and its ordinal
            position within the index. If the index (i1) had been
            defined on multiple fields,
            INNODB_SYS_FIELDS would provide
            metadata for each of the indexed fields.
          
            Using the SPACE information from
            INNODB_SYS_TABLES, query
            INNODB_SYS_TABLESPACES table
            for information about the table's tablespace.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES WHERE SPACE = 57 \G
*************************** 1. row ***************************
        SPACE: 57
         NAME: test/t1
         FLAG: 0
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact or Redundant
    PAGE_SIZE: 16384
ZIP_PAGE_SIZE: 0
1 row in set (0.00 sec)
            In addition to the SPACE ID of the
            tablespace and the NAME of the associated
            table, INNODB_SYS_TABLESPACES
            provides tablespace FLAG data, which is
            bit level information about tablespace format and storage
            characteristics. Also provided are tablespace
            FILE_FORMAT,
            ROW_FORMAT, PAGE_SIZE,
            and ZIP_PAGE_SIZE data
            (ZIP_PAGE_SIZE is applicable to
            tablespaces with a Compressed row
            format).
          
            Using the SPACE information from
            INNODB_SYS_TABLES once again,
            query INNODB_SYS_DATAFILES for
            the location of the tablespace data file.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES WHERE SPACE = 57 \G
*************************** 1. row ***************************
SPACE: 57
 PATH: ./test/t1.ibd
1 row in set (0.01 sec)
            The datafile is located in the test
            directory under MySQL's data directory.
            If a
            file-per-table
            tablespace were created in a location outside the MySQL data
            directory using the DATA DIRECTORY clause
            of the CREATE TABLE
            statement, the tablespace PATH would be a
            fully qualified directory path.
          
            As a final step, insert a row into table
            t1 (TABLE_ID = 71) and
            view the data in the
            INNODB_SYS_TABLESTATS table.
            The data in this table is used by the MySQL optimizer to
            calculate which index to use when querying an
            InnoDB table. This information is derived
            from in-memory data structures. There is no corresponding
            internal InnoDB system table.
          
mysql> INSERT INTO t1 VALUES(5, 'abc', 'def');
Query OK, 1 row affected (0.06 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS where TABLE_ID = 71 \G
*************************** 1. row ***************************
         TABLE_ID: 71
             NAME: test/t1
STATS_INITIALIZED: Initialized
         NUM_ROWS: 1
 CLUST_INDEX_SIZE: 1
 OTHER_INDEX_SIZE: 0
 MODIFIED_COUNTER: 1
          AUTOINC: 0
        REF_COUNT: 1
1 row in set (0.00 sec)
            The STATS_INITIALIZED field indicates
            whether or not statistics have been collected for the table.
            NUM_ROWS is the current estimated number
            of rows in the table. The
            CLUST_INDEX_SIZE and
            OTHER_INDEX_SIZE fields report the number
            of pages on disk that store clustered and secondary indexes
            for the table, respectively. The
            MODIFIED_COUNTER value shows the number
            of rows modified by DML operations and cascade operations
            from foreign keys. The AUTOINC value is
            the next number to be issued for any autoincrement-based
            operation. There are no autoincrement columns defined on
            table t1, so the value is 0. The
            REF_COUNT value is a counter. When the
            counter reaches 0, it signifies that the table metadata can
            be evicted from the table cache.
Example 14.14 Foreign Key INFORMATION_SCHEMA System Tables
        The INNODB_SYS_FOREIGN and
        INNODB_SYS_FOREIGN_COLS tables
        provide data about foreign key relationships. This example uses
        a parent table and child table with a foreign key relationship
        to demonstrate the data found in the
        INNODB_SYS_FOREIGN and
        INNODB_SYS_FOREIGN_COLS tables.
Create the test database with parent and child tables:
mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE parent (id INT NOT NULL,
    -> PRIMARY KEY (id)) ENGINE=INNODB;
mysql> CREATE TABLE child (id INT, parent_id INT,
    -> INDEX par_ind (parent_id),
    -> CONSTRAINT fk1
    -> FOREIGN KEY (parent_id) REFERENCES parent(id)
    -> ON DELETE CASCADE) ENGINE=INNODB;
            After the parent and child tables are created, query
            INNODB_SYS_FOREIGN and locate
            the foreign key data for the test/child
            and test/parent foreign key relationship:
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN \G
*************************** 1. row ***************************
      ID: test/fk1
FOR_NAME: test/child
REF_NAME: test/parent
  N_COLS: 1
    TYPE: 1
1 row in set (0.00 sec)
            Metadata includes the foreign key ID
            (fk1), which is named for the
            CONSTRAINT that was defined on the child
            table. The FOR_NAME is the name of the
            child table where the foreign key is defined.
            REF_NAME is the name of the parent table
            (the “referenced” table).
            N_COLS is the number of columns in the
            foreign key index. TYPE is a numerical
            value representing bit flags that provide additional
            information about the foreign key column. In this case, the
            TYPE value is 1, which indicates that the
            ON DELETE CASCADE option was specified
            for the foreign key. See the
            INNODB_SYS_FOREIGN table
            definition for more information about
            TYPE values.
          
            Using the foreign key ID, query
            INNODB_SYS_FOREIGN_COLS to view
            data about the columns of the foreign key.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS WHERE ID = 'test/fk1' \G
*************************** 1. row ***************************
          ID: test/fk1
FOR_COL_NAME: parent_id
REF_COL_NAME: id
         POS: 0
1 row in set (0.00 sec)
            FOR_COL_NAME is the name of the foreign
            key column in the child table, and
            REF_COL_NAME is the name of the
            referenced column in the parent table. The
            POS value is the ordinal position of the
            key field within the foreign key index, starting at zero.
Example 14.15 Joining InnoDB INFORMATION_SCHEMA System Tables
        This example demonstrates joining three
        InnoDB INFORMATION_SCHEMA
        system tables (INNODB_SYS_TABLES,
        INNODB_SYS_TABLESPACES, and
        INNODB_SYS_TABLESTATS) to gather
        file format, row format, page size, and index size information
        about tables in the employees sample database.
      
The following table name aliases are used to shorten the query string: a for INNODB_SYS_TABLES, b for INNODB_SYS_TABLESPACES, and c for INNODB_SYS_TABLESTATS.
        An IF() control flow function is
        used to account for compressed tables. If a table is compressed,
        the index size is calculated using
        ZIP_PAGE_SIZE rather than
        PAGE_SIZE.
        CLUST_INDEX_SIZE and
        OTHER_INDEX_SIZE, which are reported in
        pages, are multiplied by the page size and divided by
        1024*1024 to provide index sizes in megabytes (MB). MB values
        are rounded to zero decimal places using the ROUND()
        function.
      
mysql> SELECT a.NAME, a.FILE_FORMAT, a.ROW_FORMAT,
    ->   @page_size := IF(a.ROW_FORMAT='Compressed',
    ->                    b.ZIP_PAGE_SIZE, b.PAGE_SIZE) AS page_size,
    ->   ROUND((@page_size * c.CLUST_INDEX_SIZE) /(1024*1024)) AS pk_mb,
    ->   ROUND((@page_size * c.OTHER_INDEX_SIZE) /(1024*1024)) AS secidx_mb
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES a
    -> INNER JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES b on a.NAME = b.NAME
    -> INNER JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS c on b.NAME = c.NAME
    -> WHERE a.NAME LIKE 'employees/%'
    -> ORDER BY a.NAME DESC;
+------------------------+-------------+------------+-----------+-------+-----------+
| NAME                   | FILE_FORMAT | ROW_FORMAT | page_size | pk_mb | secidx_mb |
+------------------------+-------------+------------+-----------+-------+-----------+
| employees/titles       | Antelope    | Compact    |     16384 |    20 |        11 |
| employees/salaries     | Antelope    | Compact    |     16384 |    91 |        33 |
| employees/employees    | Antelope    | Compact    |     16384 |    15 |         0 |
| employees/dept_manager | Antelope    | Compact    |     16384 |     0 |         0 |
| employees/dept_emp     | Antelope    | Compact    |     16384 |    12 |        10 |
| employees/departments  | Antelope    | Compact    |     16384 |     0 |         0 |
+------------------------+-------------+------------+-----------+-------+-----------+
6 rows in set (0.01 sec)
      With the introduction of FULLTEXT index support
      for InnoDB tables in MySQL 5.6.4, the following
      tables were added to the INFORMATION_SCHEMA
      database:
    
mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB_FT%';
+-------------------------------------------+
| Tables_in_INFORMATION_SCHEMA (INNODB_FT%) |
+-------------------------------------------+
| INNODB_FT_CONFIG                          |
| INNODB_FT_BEING_DELETED                   |
| INNODB_FT_DELETED                         |
| INNODB_FT_DEFAULT_STOPWORD                |
| INNODB_FT_INDEX_TABLE                     |
| INNODB_FT_INDEX_CACHE                     |
+-------------------------------------------+
6 rows in set (0.00 sec)
          INNODB_FT_CONFIG: Displays
          metadata about the FULLTEXT index and
          associated processing for an InnoDB table.
        
          INNODB_FT_BEING_DELETED: Provides
          a snapshot of the
          INNODB_FT_DELETED table that is
          only used during an OPTIMIZE
          TABLE maintenance operation. When
          OPTIMIZE TABLE is run, the
          INNODB_FT_BEING_DELETED table is
          emptied, and DOC_IDs are removed from the
          INNODB_FT_DELETED table. Because
          the contents of
          INNODB_FT_BEING_DELETED typically
          have a short lifetime, this table has limited utility for
          monitoring or debugging. For information about running
          OPTIMIZE TABLE on tables with
          FULLTEXT indexes, see
          Section 12.9.6, “Fine-Tuning MySQL Full-Text Search”.
        
          INNODB_FT_DELETED: Records rows
          that are deleted from the FULLTEXT index
          for an InnoDB table. To avoid expensive
          index reorganization during DML operations for an
          InnoDB FULLTEXT index,
          the information about newly deleted words is stored
          separately, filtered out of search results when you perform a
          text search, and removed from the main search index only when
          you run OPTIMIZE TABLE.
        
          INNODB_FT_DEFAULT_STOPWORD: Holds
          a list of stopwords that
          are used by default when creating a
          FULLTEXT index.
        
          For information about the
          INNODB_FT_DEFAULT_STOPWORD table,
          see Section 12.9.4, “Full-Text Stopwords”.
        
          INNODB_FT_INDEX_TABLE: Contains
          data about the inverted index used to process text searches
          against the FULLTEXT index.
        
          INNODB_FT_INDEX_CACHE: Contains
          token information about newly inserted rows in a
          FULLTEXT index. To avoid expensive index
          reorganization during DML operations, the information about
          newly indexed words is stored separately, and combined with
          the main search index only when OPTIMIZE
          TABLE is run, when the server is shut down, or when
          the cache size exceeds a limit defined by
          innodb_ft_cache_size or
          innodb_ft_total_cache_size.
        With the exception of the
        INNODB_FT_DEFAULT_STOPWORD table,
        you must set the
        innodb_ft_aux_table
        configuration variable to the name of the table
        (database_name/table_name)
        that contains the FULLTEXT index. Otherwise,
        the InnoDB FULLTEXT index
        INFORMATION_SCHEMA tables appear empty.
Example 14.16 InnoDB FULLTEXT Index INFORMATION_SCHEMA Tables
        This example uses a table with a FULLTEXT
        index to demonstrate the data contained in the
        FULLTEXT index
        INFORMATION_SCHEMA tables.
            Create a table with a FULLTEXT index and
            insert some data:
          
mysql> CREATE TABLE articles (
      id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
      title VARCHAR(200),
      body TEXT,
      FULLTEXT (title,body)
     ) ENGINE=InnoDB;
INSERT INTO articles (title,body) VALUES
    ('MySQL Tutorial','DBMS stands for DataBase ...'),
    ('How To Use MySQL Well','After you went through a ...'),
    ('Optimizing MySQL','In this tutorial we will show ...'),
    ('1001 MySQL Tricks','1. Never run mysqld as root. 2. ...'),
    ('MySQL vs. YourSQL','In the following database comparison ...'),
    ('MySQL Security','When configured properly, MySQL ...');
            Set the innodb_ft_aux_table
            variable to the name of the table with the
            FULLTEXT index. If this variable is not
            set, the InnoDB
            FULLTEXT
            INFORMATION_SCHEMA tables appear empty,
            with the exception of the
            INNODB_FT_DEFAULT_STOPWORD
            table.
          
SET GLOBAL innodb_ft_aux_table = 'test/articles';
            Query the INNODB_FT_INDEX_CACHE
            table, which shows information about newly inserted rows in
            a FULLTEXT index. To avoid expensive
            index reorganization during DML operations, data for newly
            inserted rows remains in the FULLTEXT
            index cache until OPTIMIZE
            TABLE is run (or until the server is shutdown or
            cache limits are exceeded).
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE LIMIT 5;
+------------+--------------+-------------+-----------+--------+----------+
| WORD       | FIRST_DOC_ID | LAST_DOC_ID | DOC_COUNT | DOC_ID | POSITION |
+------------+--------------+-------------+-----------+--------+----------+
| 1001       |            5 |           5 |         1 |      5 |        0 |
| after      |            3 |           3 |         1 |      3 |       22 |
| comparison |            6 |           6 |         1 |      6 |       44 |
| configured |            7 |           7 |         1 |      7 |       20 |
| database   |            2 |           6 |         2 |      2 |       31 |
+------------+--------------+-------------+-----------+--------+----------+
5 rows in set (0.00 sec)
            Enable
            innodb_optimize_fulltext_only
            and run OPTIMIZE TABLE on the
            table that contains the FULLTEXT index.
            This operation flushes the contents of the
            FULLTEXT index cache to the main
            FULLTEXT index.
            innodb_optimize_fulltext_only
            changes the way the OPTIMIZE
            TABLE statement operates on
            InnoDB tables, and is intended to be
            enabled temporarily, during maintenance operations on
            InnoDB tables with
            FULLTEXT indexes.
          
mysql> SET GLOBAL innodb_optimize_fulltext_only=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> OPTIMIZE TABLE articles;
+---------------+----------+----------+----------+
| Table         | Op       | Msg_type | Msg_text |
+---------------+----------+----------+----------+
| test.articles | optimize | status   | OK       |
+---------------+----------+----------+----------+
1 row in set (0.03 sec)
            Query the INNODB_FT_INDEX_TABLE
            table to view information about data in the main
            FULLTEXT index, including information
            about the data that was just flushed from the
            FULLTEXT index cache.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE LIMIT 5;
+------------+--------------+-------------+-----------+--------+----------+
| WORD       | FIRST_DOC_ID | LAST_DOC_ID | DOC_COUNT | DOC_ID | POSITION |
+------------+--------------+-------------+-----------+--------+----------+
| 1001       |            5 |           5 |         1 |      5 |        0 |
| after      |            3 |           3 |         1 |      3 |       22 |
| comparison |            6 |           6 |         1 |      6 |       44 |
| configured |            7 |           7 |         1 |      7 |       20 |
| database   |            2 |           6 |         2 |      2 |       31 |
+------------+--------------+-------------+-----------+--------+----------+
5 rows in set (0.00 sec)
            The INNODB_FT_INDEX_CACHE table
            is now empty since the OPTIMIZE
            TABLE operation flushed the
            FULLTEXT index cache.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE LIMIT 5;
Empty set (0.00 sec)
            Delete some records from the
            test/articles table.
          
mysql> DELETE FROM test.articles WHERE id < 4;
Query OK, 3 rows affected (0.11 sec)
            Query the INNODB_FT_DELETED
            table. This table records rows that are deleted from the
            FULLTEXT index. To avoid expensive index
            reorganization during DML operations, information about
            newly deleted records is stored separately, filtered out of
            search results when you do a text search, and removed from
            the main search index when you run
            OPTIMIZE TABLE.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DELETED;
+--------+
| DOC_ID |
+--------+
|      2 |
|      3 |
|      4 |
+--------+
3 rows in set (0.00 sec)
            Run OPTIMIZE TABLE to remove
            the deleted records.
          
mysql> OPTIMIZE TABLE articles;
+---------------+----------+----------+----------+
| Table         | Op       | Msg_type | Msg_text |
+---------------+----------+----------+----------+
| test.articles | optimize | status   | OK       |
+---------------+----------+----------+----------+
1 row in set (0.10 sec)
            The INNODB_FT_DELETED table
            should now appear empty.
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DELETED;
Empty set (0.00 sec)
            Query the INNODB_FT_CONFIG
            table. This table contains metadata about the
            FULLTEXT index and related processing:
                optimize_checkpoint_limit is the
                number of seconds after which an
                OPTIMIZE TABLE run will
                stop.
              
                synced_doc_id is the next
                DOC_ID to be issued.
              
                stopword_table_name is the
                database/table name for a
                user-defined stopword table. This field appears empty if
                there is no user-defined stopword table.
              
                use_stopword indicates whether or not
                a stopword table is used, which is defined when the
                FULLTEXT index is created.
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_CONFIG;
+---------------------------+-------+
| KEY                       | VALUE |
+---------------------------+-------+
| optimize_checkpoint_limit | 180   |
| synced_doc_id             | 8     |
| stopword_table_name       |       |
| use_stopword              | 1     |
+---------------------------+-------+
4 rows in set (0.00 sec)
      The InnoDB
      INFORMATION_SCHEMA buffer pool tables provide
      buffer pool status information and metadata about the pages within
      the InnoDB buffer pool. The tables were
      introduced in MySQL 5.6.2 and later backported to MySQL 5.5 (in
      MySQL 5.5.28) and MySQL 5.1 (in MySQL 5.1.66).
    
      The InnoDB
      INFORMATION_SCHEMA buffer pool tables include
      those listed below:
    
mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB_BUFFER%';
+-----------------------------------------------+
| Tables_in_INFORMATION_SCHEMA (INNODB_BUFFER%) |
+-----------------------------------------------+
| INNODB_BUFFER_PAGE_LRU                        |
| INNODB_BUFFER_PAGE                            |
| INNODB_BUFFER_POOL_STATS                      |
+-----------------------------------------------+
3 rows in set (0.00 sec)
          INNODB_BUFFER_PAGE: Holds
          information about each page in the InnoDB
          buffer pool.
        
          INNODB_BUFFER_PAGE_LRU: Holds
          information about the pages in the InnoDB
          buffer pool, in particular how they are ordered in the LRU
          list that determines which pages to evict from the buffer pool
          when it becomes full. The
          INNODB_BUFFER_PAGE_LRU table has
          the same columns as the
          INNODB_BUFFER_PAGE table, except
          that the INNODB_BUFFER_PAGE_LRU
          table has an LRU_POSITION column instead of
          a BLOCK_ID column.
        
          INNODB_BUFFER_POOL_STATS:
          Provides buffer pool status information. Much of the same
          information is provided by
          SHOW ENGINE
          INNODB STATUS output, or may be obtained using
          InnoDB buffer pool server status variables.
        Querying the INNODB_BUFFER_PAGE
        table or INNODB_BUFFER_PAGE_LRU
        table can introduce significant performance overhead. Do not
        query these tables on a production system unless you are aware
        of the performance impact that your query may have, and have
        determined it to be acceptable. To avoid impacting performance,
        reproduce the issue you want to investigate on a test instance
        and run your queries on the test instance.
Example 14.17 Querying System Data in the INNODB_BUFFER_PAGE Table
        This query provides an approximate count of pages that contain
        system data by counting pages where the
        TABLE_NAME value is either
        NULL or does not include a slash
        “/” or period
        “.”; a slash or period in the
        table name indicates a user-defined table.
      
SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NULL OR (INSTR(TABLE_NAME, '/') = 0 AND INSTR(TABLE_NAME, '.') = 0); +----------+ | COUNT(*) | +----------+ | 1320 | +----------+ 1 row in set (0.12 sec)
This query returns the approximate number of pages that contain system data, the total number of buffer pool pages, and an approximate percentage of pages that contain system data.
SELECT (SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NULL OR (INSTR(TABLE_NAME, '/') = 0 AND INSTR(TABLE_NAME, '.') = 0) ) AS system_pages, ( SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE ) AS total_pages, ( SELECT ROUND((system_pages/total_pages) * 100) ) AS system_page_percentage; +--------------+-------------+------------------------+ | system_pages | total_pages | system_page_percentage | +--------------+-------------+------------------------+ | 1320 | 8192 | 16 | +--------------+-------------+------------------------+ 1 row in set (0.15 sec)
        The type of system data in the buffer pool can be determined by
        querying the PAGE_TYPE value. For example,
        the following query returns eight distinct
        PAGE_TYPE values among the pages that contain
        system data:
      
mysql> SELECT DISTINCT PAGE_TYPE FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NULL OR (INSTR(TABLE_NAME, '/') = 0 AND INSTR(TABLE_NAME, '.') = 0); +-------------------+ | PAGE_TYPE | +-------------------+ | SYSTEM | | IBUF_BITMAP | | UNDO_LOG | | UNKNOWN | | FILE_SPACE_HEADER | | INODE | | ALLOCATED | | TRX_SYSTEM | +-------------------+ 8 rows in set (0.05 sec)
Example 14.18 Querying User Data in the INNODB_BUFFER_PAGE Table
        This query provides an approximate count of pages containing
        user data by counting pages where the
        TABLE_NAME value is NOT
        NULL and NOT LIKE
        '%INNODB_SYS_TABLES%'.
      
mysql> SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NOT NULL AND TABLE_NAME NOT LIKE '%INNODB_SYS_TABLES%'; +----------+ | COUNT(*) | +----------+ | 6872 | +----------+ 1 row in set (0.06 sec)
This query returns the approximate number of pages that contain user data, the total number of buffer pool pages, and an approximate percentage of pages that contain user data.
mysql> SELECT (SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NOT NULL AND (INSTR(TABLE_NAME, '/') > 0 OR INSTR(TABLE_NAME, '.') > 0) ) AS user_pages, ( SELECT COUNT(*) FROM information_schema.INNODB_BUFFER_PAGE ) AS total_pages, ( SELECT ROUND((user_pages/total_pages) * 100) ) AS user_page_percentage; +------------+-------------+----------------------+ | user_pages | total_pages | user_page_percentage | +------------+-------------+----------------------+ | 6872 | 8192 | 84 | +------------+-------------+----------------------+ 1 row in set (0.08 sec)
This query identifies user-defined tables with pages in the buffer pool:
mysql> SELECT DISTINCT TABLE_NAME FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME IS NOT NULL AND (INSTR(TABLE_NAME, '/') > 0 OR INSTR(TABLE_NAME, '.') > 0) AND TABLE_NAME NOT LIKE '`mysql`.`innodb_%'; +-------------------------+ | TABLE_NAME | +-------------------------+ | `employees`.`salaries` | | `employees`.`employees` | +-------------------------+ 2 rows in set (0.09 sec)
Example 14.19 Querying Index Data in the INNODB_BUFFER_PAGE Table
        For information about index pages, query the
        INDEX_NAME column using the name of the
        index. For example, the following query returns the number of
        pages and total data size of pages for the
        emp_no index that is defined on the
        employees.salaries table:
      
mysql> SELECT INDEX_NAME, COUNT(*) AS Pages, ROUND(SUM(IF(COMPRESSED_SIZE = 0, @@global.innodb_page_size, COMPRESSED_SIZE))/1024/1024) AS 'Total Data (MB)' FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE INDEX_NAME='emp_no' AND TABLE_NAME = '`employees`.`salaries`'; +------------+-------+-----------------+ | INDEX_NAME | Pages | Total Data (MB) | +------------+-------+-----------------+ | emp_no | 1756 | 27 | +------------+-------+-----------------+ 1 row in set (0.07 sec)
        This query returns the number of pages and total data size of
        pages for all indexes defined on the
        employees.salaries table:
      
mysql> SELECT INDEX_NAME, COUNT(*) AS Pages, ROUND(SUM(IF(COMPRESSED_SIZE = 0, @@global.innodb_page_size, COMPRESSED_SIZE))/1024/1024) AS 'Total Data (MB)' FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME = '`employees`.`salaries`' GROUP BY INDEX_NAME; +------------+-------+-----------------+ | INDEX_NAME | Pages | Total Data (MB) | +------------+-------+-----------------+ | emp_no | 1756 | 27 | | PRIMARY | 4838 | 76 | +------------+-------+-----------------+ 2 rows in set (0.12 sec)
Example 14.20 Querying LRU_POSITION Data in the INNODB_BUFFER_PAGE_LRU Table
        The INNODB_BUFFER_PAGE_LRU table
        holds information about the pages in the
        InnoDB buffer pool, in particular how they
        are ordered in the LRU list, which determines which pages to
        evict from the buffer pool when it becomes full. The definition
        for this table is the same as for
        INNODB_BUFFER_PAGE, except that this
        table has an LRU_POSITION column
        instead of a BLOCK_ID column.
      
        This query counts the number of positions at a specific location
        in the LRU list occupied by pages of the
        employees.employees table.
      
mysql> SELECT COUNT(LRU_POSITION) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU WHERE TABLE_NAME='`employees`.`employees`' AND LRU_POSITION < 3072; +---------------------+ | COUNT(LRU_POSITION) | +---------------------+ | 275 | +---------------------+ 1 row in set (0.04 sec)
Example 14.21 Querying the INNODB_BUFFER_POOL_STATS Table
        The INNODB_BUFFER_POOL_STATS table
        provides information similar to
        SHOW ENGINE INNODB
        STATUS and InnoDB buffer pool
        status variables.
      
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS \G
*************************** 1. row ***************************
                         POOL_ID: 0
                       POOL_SIZE: 8192
                    FREE_BUFFERS: 1024
                  DATABASE_PAGES: 7029
              OLD_DATABASE_PAGES: 2574
         MODIFIED_DATABASE_PAGES: 0
              PENDING_DECOMPRESS: 0
                   PENDING_READS: 0
               PENDING_FLUSH_LRU: 0
              PENDING_FLUSH_LIST: 0
                PAGES_MADE_YOUNG: 173
            PAGES_NOT_MADE_YOUNG: 3721891
           PAGES_MADE_YOUNG_RATE: 0
       PAGES_MADE_NOT_YOUNG_RATE: 0
               NUMBER_PAGES_READ: 1075
            NUMBER_PAGES_CREATED: 12594
            NUMBER_PAGES_WRITTEN: 13525
                 PAGES_READ_RATE: 0
               PAGES_CREATE_RATE: 0
              PAGES_WRITTEN_RATE: 0
                NUMBER_PAGES_GET: 27873240
                        HIT_RATE: 0
    YOUNG_MAKE_PER_THOUSAND_GETS: 0
NOT_YOUNG_MAKE_PER_THOUSAND_GETS: 0
         NUMBER_PAGES_READ_AHEAD: 576
       NUMBER_READ_AHEAD_EVICTED: 0
                 READ_AHEAD_RATE: 0
         READ_AHEAD_EVICTED_RATE: 0
                    LRU_IO_TOTAL: 0
                  LRU_IO_CURRENT: 0
                UNCOMPRESS_TOTAL: 0
              UNCOMPRESS_CURRENT: 0
1 row in set (0.00 sec)
        For comparison,
        SHOW ENGINE INNODB
        STATUS output and InnoDB buffer
        pool status variable output is shown below, based on the same
        data set.
      
        For more information about
        SHOW ENGINE INNODB
        STATUS output, see
        Section 14.14.3, “InnoDB Standard Monitor and Lock Monitor Output”.
      
mysql> SHOW ENGINE INNODB STATUS \G ... ---------------------- BUFFER POOL AND MEMORY ---------------------- Total memory allocated 137363456; in additional pool allocated 0 Dictionary memory allocated 99725 Buffer pool size 8192 Free buffers 1024 Database pages 7029 Old database pages 2574 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 173, not young 3721891 0.00 youngs/s, 0.00 non-youngs/s Pages read 1075, created 12594, written 13525 0.00 reads/s, 0.00 creates/s, 0.00 writes/s No buffer pool page gets since the last printout Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 7029, unzip_LRU len: 0 I/O sum[0]:cur[0], unzip sum[0]:cur[0] ...
For status variable descriptions, see Section 5.1.6, “Server Status Variables”.
mysql> SHOW STATUS LIKE 'Innodb_buffer%'; +---------------------------------------+-------------+ | Variable_name | Value | +---------------------------------------+-------------+ | Innodb_buffer_pool_dump_status | not started | | Innodb_buffer_pool_load_status | not started | | Innodb_buffer_pool_pages_data | 7029 | | Innodb_buffer_pool_bytes_data | 115163136 | | Innodb_buffer_pool_pages_dirty | 0 | | Innodb_buffer_pool_bytes_dirty | 0 | | Innodb_buffer_pool_pages_flushed | 13525 | | Innodb_buffer_pool_pages_free | 1024 | | Innodb_buffer_pool_pages_misc | 139 | | Innodb_buffer_pool_pages_total | 8192 | | Innodb_buffer_pool_read_ahead_rnd | 0 | | Innodb_buffer_pool_read_ahead | 576 | | Innodb_buffer_pool_read_ahead_evicted | 0 | | Innodb_buffer_pool_read_requests | 27873240 | | Innodb_buffer_pool_reads | 500 | | Innodb_buffer_pool_wait_free | 0 | | Innodb_buffer_pool_write_requests | 11966441 | +---------------------------------------+-------------+ 17 rows in set (0.00 sec)
      The INNODB_METRICS table, introduced
      in MySQL 5.6.2, consolidates all InnoDB
      performance and resource-related counters into a single
      INFORMATION_SCHEMA table.
    
      The columns of the INNODB_METRICS
      table are shown in the following example. For a description of
      each column, see Section 21.29.19, “The INFORMATION_SCHEMA INNODB_METRICS Table”.
    
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts" \G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 46273
      MAX_COUNT: 46273
      MIN_COUNT: NULL
      AVG_COUNT: 492.2659574468085
    COUNT_RESET: 46273
MAX_COUNT_RESET: 46273
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
   TIME_ENABLED: 2014-11-28 16:07:53
  TIME_DISABLED: NULL
   TIME_ELAPSED: 94
     TIME_RESET: NULL
         STATUS: enabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.00 sec)
You can enable, disable, and reset counters using the following configuration options:
          innodb_monitor_enable:
          Enables one or more counters.
        
SET GLOBAL innodb_monitor_enable = [counter-name|module_name|pattern|all];
          innodb_monitor_disable:
          Disables one or more counters.
        
SET GLOBAL innodb_monitor_disable = [counter-name|module_name|pattern|all];
          innodb_monitor_reset: Resets
          the count value for one or more counters to zero.
        
SET GLOBAL innodb_monitor_reset = [counter-name|module_name|pattern|all];
          innodb_monitor_reset_all:
          Resets all values for one or more counters. A counter must be
          disabled before using
          innodb_monitor_reset_all.
SET GLOBAL innodb_monitor_reset_all = [counter-name|module_name|pattern|all];
      You can also enable counters and counter modules at startup using
      the MySQL server configuration file. For example, to enable the
      log module,
      metadata_table_handles_opened and
      metadata_table_handles_closed counters, enter
      the following line in the [mysqld] section of
      your my.cnf configuration file.
    
[mysqld] innodb_monitor_enable = module_recovery,metadata_table_handles_opened,metadata_table_handles_closed
      When enabling multiple counters or modules in your configuration
      file, you must specify the
      innodb_monitor_enable
      configuration option followed by counter and module names
      separated by a comma, as shown in the example above. Only the
      innodb_monitor_enable option can
      be used in your configuration file. The disable and reset
      configuration options are only supported on the command line.
Because each counter imposes some degree of runtime overhead on the server, typically you enable more counters on test and development servers during experimentation and benchmarking, and only enable counters on production servers to diagnose known issues or monitor aspects that are likely to be bottlenecks for a particular server and workload.
      The counters represented in the
      INNODB_METRICS table are subject to
      change, so for the most up-to-date list, query a running MySQL
      server. The list below shows counters that are available as of
      MySQL 5.6.23.
    
      Counters that are enabled by default correspond to those used by
      SHOW ENGINE INNODB
      STATUS. Counters used by
      SHOW ENGINE INNODB
      STATUS are always “on” at a system level
      but you can disable these counters for the
      INNODB_METRICS table, as required.
      Also, counter status is not persistent. Unless specified
      otherwise, counters revert to their default enabled or disabled
      status when the server is restarted.
    
      If you run programs that would be affected by additions or changes
      to the INNODB_METRICS table, it is
      recommended that you review release notes and query the
      INNODB_METRICS table for the new
      release prior to upgrading.
    
mysql> SELECT name, subsystem, status FROM INFORMATION_SCHEMA.INNODB_METRICS ORDER BY NAME; +------------------------------------------+---------------------+----------+ | name | subsystem | status | +------------------------------------------+---------------------+----------+ | adaptive_hash_pages_added | adaptive_hash_index | disabled | | adaptive_hash_pages_removed | adaptive_hash_index | disabled | | adaptive_hash_rows_added | adaptive_hash_index | disabled | | adaptive_hash_rows_deleted_no_hash_entry | adaptive_hash_index | disabled | | adaptive_hash_rows_removed | adaptive_hash_index | disabled | | adaptive_hash_rows_updated | adaptive_hash_index | disabled | | adaptive_hash_searches | adaptive_hash_index | enabled | | adaptive_hash_searches_btree | adaptive_hash_index | disabled | | buffer_data_reads | buffer | enabled | | buffer_data_written | buffer | enabled | | buffer_flush_adaptive | buffer | disabled | | buffer_flush_adaptive_pages | buffer | disabled | | buffer_flush_adaptive_total_pages | buffer | disabled | | buffer_flush_avg_page_rate | buffer | disabled | | buffer_flush_background | buffer | disabled | | buffer_flush_background_pages | buffer | disabled | | buffer_flush_background_total_pages | buffer | disabled | | buffer_flush_batches | buffer | disabled | | buffer_flush_batch_num_scan | buffer | disabled | | buffer_flush_batch_pages | buffer | disabled | | buffer_flush_batch_rescan | buffer | disabled | | buffer_flush_batch_scanned | buffer | disabled | | buffer_flush_batch_scanned_per_call | buffer | disabled | | buffer_flush_batch_total_pages | buffer | disabled | | buffer_flush_lsn_avg_rate | buffer | disabled | | buffer_flush_neighbor | buffer | disabled | | buffer_flush_neighbor_pages | buffer | disabled | | buffer_flush_neighbor_total_pages | buffer | disabled | | buffer_flush_n_to_flush_requested | buffer | disabled | | buffer_flush_pct_for_dirty | buffer | disabled | | buffer_flush_pct_for_lsn | buffer | disabled | | buffer_flush_sync | buffer | disabled | | buffer_flush_sync_pages | buffer | disabled | | buffer_flush_sync_total_pages | buffer | disabled | | buffer_flush_sync_waits | buffer | disabled | | buffer_LRU_batches | buffer | disabled | | buffer_LRU_batch_num_scan | buffer | disabled | | buffer_LRU_batch_pages | buffer | disabled | | buffer_LRU_batch_scanned | buffer | disabled | | buffer_LRU_batch_scanned_per_call | buffer | disabled | | buffer_LRU_batch_total_pages | buffer | disabled | | buffer_LRU_get_free_search | Buffer | disabled | | buffer_LRU_search_num_scan | buffer | disabled | | buffer_LRU_search_scanned | buffer | disabled | | buffer_LRU_search_scanned_per_call | buffer | disabled | | buffer_LRU_single_flush_failure_count | Buffer | disabled | | buffer_LRU_single_flush_num_scan | buffer | disabled | | buffer_LRU_single_flush_scanned | buffer | disabled | | buffer_LRU_single_flush_scanned_per_call | buffer | disabled | | buffer_LRU_unzip_search_num_scan | buffer | disabled | | buffer_LRU_unzip_search_scanned | buffer | disabled | | buffer_LRU_unzip_search_scanned_per_call | buffer | disabled | | buffer_pages_created | buffer | enabled | | buffer_pages_read | buffer | enabled | | buffer_pages_written | buffer | enabled | | buffer_page_read_blob | buffer_page_io | disabled | | buffer_page_read_fsp_hdr | buffer_page_io | disabled | | buffer_page_read_ibuf_bitmap | buffer_page_io | disabled | | buffer_page_read_ibuf_free_list | buffer_page_io | disabled | | buffer_page_read_index_ibuf_leaf | buffer_page_io | disabled | | 
buffer_page_read_index_ibuf_non_leaf | buffer_page_io | disabled | | buffer_page_read_index_inode | buffer_page_io | disabled | | buffer_page_read_index_leaf | buffer_page_io | disabled | | buffer_page_read_index_non_leaf | buffer_page_io | disabled | | buffer_page_read_other | buffer_page_io | disabled | | buffer_page_read_system_page | buffer_page_io | disabled | | buffer_page_read_trx_system | buffer_page_io | disabled | | buffer_page_read_undo_log | buffer_page_io | disabled | | buffer_page_read_xdes | buffer_page_io | disabled | | buffer_page_read_zblob | buffer_page_io | disabled | | buffer_page_read_zblob2 | buffer_page_io | disabled | | buffer_page_written_blob | buffer_page_io | disabled | | buffer_page_written_fsp_hdr | buffer_page_io | disabled | | buffer_page_written_ibuf_bitmap | buffer_page_io | disabled | | buffer_page_written_ibuf_free_list | buffer_page_io | disabled | | buffer_page_written_index_ibuf_leaf | buffer_page_io | disabled | | buffer_page_written_index_ibuf_non_leaf | buffer_page_io | disabled | | buffer_page_written_index_inode | buffer_page_io | disabled | | buffer_page_written_index_leaf | buffer_page_io | disabled | | buffer_page_written_index_non_leaf | buffer_page_io | disabled | | buffer_page_written_other | buffer_page_io | disabled | | buffer_page_written_system_page | buffer_page_io | disabled | | buffer_page_written_trx_system | buffer_page_io | disabled | | buffer_page_written_undo_log | buffer_page_io | disabled | | buffer_page_written_xdes | buffer_page_io | disabled | | buffer_page_written_zblob | buffer_page_io | disabled | | buffer_page_written_zblob2 | buffer_page_io | disabled | | buffer_pool_bytes_data | buffer | enabled | | buffer_pool_bytes_dirty | buffer | enabled | | buffer_pool_pages_data | buffer | enabled | | buffer_pool_pages_dirty | buffer | enabled | | buffer_pool_pages_free | buffer | enabled | | buffer_pool_pages_misc | buffer | enabled | | buffer_pool_pages_total | buffer | enabled | | buffer_pool_reads | buffer | enabled | | buffer_pool_read_ahead | buffer | enabled | | buffer_pool_read_ahead_evicted | buffer | enabled | | buffer_pool_read_requests | buffer | enabled | | buffer_pool_size | server | enabled | | buffer_pool_wait_free | buffer | enabled | | buffer_pool_write_requests | buffer | enabled | | compression_pad_decrements | compression | disabled | | compression_pad_increments | compression | disabled | | compress_pages_compressed | compression | disabled | | compress_pages_decompressed | compression | disabled | | ddl_background_drop_indexes | ddl | disabled | | ddl_background_drop_tables | ddl | disabled | | ddl_online_create_index | ddl | disabled | | ddl_pending_alter_table | ddl | disabled | | dml_deletes | dml | enabled | | dml_inserts | dml | enabled | | dml_reads | dml | enabled | | dml_updates | dml | enabled | | file_num_open_files | file_system | enabled | | ibuf_merges | change_buffer | enabled | | ibuf_merges_delete | change_buffer | enabled | | ibuf_merges_delete_mark | change_buffer | enabled | | ibuf_merges_discard_delete | change_buffer | enabled | | ibuf_merges_discard_delete_mark | change_buffer | enabled | | ibuf_merges_discard_insert | change_buffer | enabled | | ibuf_merges_insert | change_buffer | enabled | | ibuf_size | change_buffer | enabled | | icp_attempts | icp | disabled | | icp_match | icp | disabled | | icp_no_match | icp | disabled | | icp_out_of_range | icp | disabled | | index_page_discards | index | disabled | | index_page_merge_attempts | index | disabled | | 
index_page_merge_successful | index | disabled | | index_page_reorg_attempts | index | disabled | | index_page_reorg_successful | index | disabled | | index_page_splits | index | disabled | | innodb_activity_count | server | enabled | | innodb_background_drop_table_usec | server | disabled | | innodb_checkpoint_usec | server | disabled | | innodb_dblwr_pages_written | server | enabled | | innodb_dblwr_writes | server | enabled | | innodb_dict_lru_usec | server | disabled | | innodb_ibuf_merge_usec | server | disabled | | innodb_log_flush_usec | server | disabled | | innodb_master_active_loops | server | disabled | | innodb_master_idle_loops | server | disabled | | innodb_master_purge_usec | server | disabled | | innodb_master_thread_sleeps | server | disabled | | innodb_mem_validate_usec | server | disabled | | innodb_page_size | server | enabled | | innodb_rwlock_s_os_waits | server | enabled | | innodb_rwlock_s_spin_rounds | server | enabled | | innodb_rwlock_s_spin_waits | server | enabled | | innodb_rwlock_x_os_waits | server | enabled | | innodb_rwlock_x_spin_rounds | server | enabled | | innodb_rwlock_x_spin_waits | server | enabled | | lock_deadlocks | lock | enabled | | lock_rec_locks | lock | disabled | | lock_rec_lock_created | lock | disabled | | lock_rec_lock_removed | lock | disabled | | lock_rec_lock_requests | lock | disabled | | lock_rec_lock_waits | lock | disabled | | lock_row_lock_current_waits | lock | enabled | | lock_row_lock_time | lock | enabled | | lock_row_lock_time_avg | lock | enabled | | lock_row_lock_time_max | lock | enabled | | lock_row_lock_waits | lock | enabled | | lock_table_locks | lock | disabled | | lock_table_lock_created | lock | disabled | | lock_table_lock_removed | lock | disabled | | lock_table_lock_waits | lock | disabled | | lock_timeouts | lock | enabled | | log_checkpoints | recovery | disabled | | log_lsn_buf_pool_oldest | recovery | disabled | | log_lsn_checkpoint_age | recovery | disabled | | log_lsn_current | recovery | disabled | | log_lsn_last_checkpoint | recovery | disabled | | log_lsn_last_flush | recovery | disabled | | log_max_modified_age_async | recovery | disabled | | log_max_modified_age_sync | recovery | disabled | | log_num_log_io | recovery | disabled | | log_pending_checkpoint_writes | recovery | disabled | | log_pending_log_writes | recovery | disabled | | log_waits | recovery | enabled | | log_writes | recovery | enabled | | log_write_requests | recovery | enabled | | metadata_mem_pool_size | metadata | enabled | | metadata_table_handles_closed | metadata | disabled | | metadata_table_handles_opened | metadata | disabled | | metadata_table_reference_count | metadata | disabled | | os_data_fsyncs | os | enabled | | os_data_reads | os | enabled | | os_data_writes | os | enabled | | os_log_bytes_written | os | enabled | | os_log_fsyncs | os | enabled | | os_log_pending_fsyncs | os | enabled | | os_log_pending_writes | os | enabled | | os_pending_reads | os | disabled | | os_pending_writes | os | disabled | | purge_del_mark_records | purge | disabled | | purge_dml_delay_usec | purge | disabled | | purge_invoked | purge | disabled | | purge_resume_count | purge | disabled | | purge_stop_count | purge | disabled | | purge_undo_log_pages | purge | disabled | | purge_upd_exist_or_extern_records | purge | disabled | | trx_active_transactions | transaction | disabled | | trx_commits_insert_update | transaction | disabled | | trx_nl_ro_commits | transaction | disabled | | trx_rollbacks | transaction | disabled | | 
trx_rollbacks_savepoint | transaction | disabled | | trx_rollback_active | transaction | disabled | | trx_ro_commits | transaction | disabled | | trx_rseg_current_size | transaction | disabled | | trx_rseg_history_len | transaction | enabled | | trx_rw_commits | transaction | disabled | | trx_undo_slots_cached | transaction | disabled | | trx_undo_slots_used | transaction | disabled | +------------------------------------------+---------------------+----------+ 214 rows in set (0.00 sec)
      The module names correspond to, but are not identical to, the
      values from the SUBSYSTEM column of the
      INNODB_METRICS table. Rather than
      enabling, disabling, or resetting counters individually, you can
      use module names to quickly enable, disable, or reset all counters
      for a particular subsystem. For example, use
      module_dml to enable all counters associated
      with the dml subsystem.
    
mysql> SET GLOBAL innodb_monitor_enable = module_dml;
      
mysql> SELECT name, subsystem, status FROM INFORMATION_SCHEMA.INNODB_METRICS 
WHERE subsystem ='dml';
+-------------+-----------+---------+
| name        | subsystem | status  |
+-------------+-----------+---------+
| dml_reads   | dml       | enabled |
| dml_inserts | dml       | enabled |
| dml_deletes | dml       | enabled |
| dml_updates | dml       | enabled |
+-------------+-----------+---------+
4 rows in set (0.01 sec)
      Here are the values you can use for
      module_name with the
      innodb_monitor_enable and related
      configuration options, along with the corresponding
      SUBSYSTEM names:
          module_metadata (subsystem =
          metadata)
        
          module_lock (subsystem =
          lock)
        
          module_buffer (subsystem =
          buffer)
        
          module_buf_page (subsystem =
          buffer_page_io)
        
          module_os (subsystem =
          os)
        
          module_trx (subsystem =
          transaction)
        
          module_purge (subsystem =
          purge)
        
          module_compress (subsystem =
          compression)
        
          module_file (subsystem =
          file_system)
        
          module_index (subsystem =
          index)
        
          module_adaptive_hash (subsystem =
          adaptive_hash_index)
        
          module_ibuf_system (subsystem =
          change_buffer)
        
          module_srv (subsystem =
          server)
        
          module_ddl (subsystem =
          ddl)
        
          module_dml (subsystem =
          dml)
        
          module_log (subsystem =
          recovery)
        
          module_icp (subsystem =
          icp)
Example 14.22 Working with INNODB_METRICS Table Counters
        This example demonstrates enabling, disabling, and resetting a
        counter, and querying counter data in the
        INNODB_METRICS table.
            Create a simple InnoDB table:
          
mysql> USE test; Database changed mysql> CREATE TABLE t1 (c1 INT) ENGINE=INNODB; Query OK, 0 rows affected (0.02 sec)
            Enable the dml_inserts counter.
          
mysql> SET GLOBAL innodb_monitor_enable = dml_inserts; Query OK, 0 rows affected (0.01 sec)
            A description of the dml_inserts counter
            can be found in the COMMENT column of the
            INNODB_METRICS table:
          
mysql> SELECT NAME, COMMENT FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts"; +-------------+-------------------------+ | NAME | COMMENT | +-------------+-------------------------+ | dml_inserts | Number of rows inserted | +-------------+-------------------------+ 1 row in set (0.00 sec)
            Query the INNODB_METRICS table
            for the dml_inserts counter data. Because
            no DML operations have been performed, the counter values
            are zero or NULL. The TIME_ENABLED and
            TIME_ELAPSED values indicate when the
            counter was last enabled and how many seconds have elapsed
            since this time.
          
mysql>  SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts" \G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 0
      MAX_COUNT: 0
      MIN_COUNT: NULL
      AVG_COUNT: 0
    COUNT_RESET: 0
MAX_COUNT_RESET: 0
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
   TIME_ENABLED: 2014-12-04 14:18:28
  TIME_DISABLED: NULL
   TIME_ELAPSED: 28
     TIME_RESET: NULL
         STATUS: enabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.01 sec)
            Insert three rows of data into the table.
mysql> INSERT INTO t1 values(1); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO t1 values(2); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO t1 values(3); Query OK, 1 row affected (0.00 sec)
            Query the INNODB_METRICS table
            again for the dml_inserts counter data. A
            number of counter values have now incremented including
            COUNT, MAX_COUNT,
            AVG_COUNT, and
            COUNT_RESET. Refer to the
            INNODB_METRICS table definition
            for descriptions of these values.
          
mysql>  SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts"\G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 3
      MAX_COUNT: 3
      MIN_COUNT: NULL
      AVG_COUNT: 0.046153846153846156
    COUNT_RESET: 3
MAX_COUNT_RESET: 3
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
   TIME_ENABLED: 2014-12-04 14:18:28
  TIME_DISABLED: NULL
   TIME_ELAPSED: 65
     TIME_RESET: NULL
         STATUS: enabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.00 sec)
            Reset the dml_inserts counter, and query
            the INNODB_METRICS table again
            for the dml_inserts counter data. The
            “%_RESET” values that were
            reported previously, such as COUNT_RESET
            and MAX_COUNT_RESET, are set back to zero.
            Values such as COUNT,
            MAX_COUNT, and
            AVG_COUNT, which cumulatively collect
            data from the time the counter is enabled, are unaffected by
            the reset.
          
mysql> SET GLOBAL innodb_monitor_reset = dml_inserts;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts"\G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 3
      MAX_COUNT: 3
      MIN_COUNT: NULL
      AVG_COUNT: 0.03529411764705882
    COUNT_RESET: 0
MAX_COUNT_RESET: 0
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: 0
   TIME_ENABLED: 2014-12-04 14:18:28
  TIME_DISABLED: NULL
   TIME_ELAPSED: 85
     TIME_RESET: 2014-12-04 14:19:44
         STATUS: enabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.00 sec)
            To reset all counter values, you must first disable the
            counter. Disabling the counter sets the
            STATUS value to
            disabled.
          
mysql> SET GLOBAL innodb_monitor_disable = dml_inserts;
Query OK, 0 rows affected (0.00 sec)
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts"\G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 3
      MAX_COUNT: 3
      MIN_COUNT: NULL
      AVG_COUNT: 0.030612244897959183
    COUNT_RESET: 0
MAX_COUNT_RESET: 0
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: 0
   TIME_ENABLED: 2014-12-04 14:18:28
  TIME_DISABLED: 2014-12-04 14:20:06
   TIME_ELAPSED: 98
     TIME_RESET: NULL
         STATUS: disabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.00 sec)
              Wildcard match is supported for counter and module names.
              For example, instead of specifying the full
              dml_inserts counter name, you can
              specify “dml_i%”. You can
              also enable, disable, or reset multiple counters or
              modules at once using a wildcard match. For example,
               specify “dml_%” to enable,
               disable, or reset all counters that begin with
               “dml_”.
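            For example, based on the wildcard behavior described in the
            note above, the following statements enable and then disable
            every counter whose name begins with
            dml_:

mysql> SET GLOBAL innodb_monitor_enable = "dml_%";
mysql> SET GLOBAL innodb_monitor_disable = "dml_%";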
            After the counter is disabled, you can reset all counter
            values using the
            innodb_monitor_reset_all
            option. All values are set to zero or NULL.
          
mysql> SET GLOBAL innodb_monitor_reset_all = dml_inserts;
Query OK, 0 rows affected (0.00 sec)
          
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME="dml_inserts"\G
*************************** 1. row ***************************
           NAME: dml_inserts
      SUBSYSTEM: dml
          COUNT: 0
      MAX_COUNT: NULL
      MIN_COUNT: NULL
      AVG_COUNT: NULL
    COUNT_RESET: 0
MAX_COUNT_RESET: NULL
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
   TIME_ENABLED: NULL
  TIME_DISABLED: NULL
   TIME_ELAPSED: NULL
     TIME_RESET: NULL
         STATUS: disabled
           TYPE: status_counter
        COMMENT: Number of rows inserted
1 row in set (0.01 sec)
    This section provides a brief introduction to
    InnoDB integration with Performance Schema. For
    comprehensive Performance Schema documentation, see
    Chapter 22, MySQL Performance Schema.
  
    You can profile certain internal InnoDB
    operations using the MySQL
    Performance Schema
    feature. This type of tuning is primarily for expert users
    who evaluate optimization strategies to overcome performance
    bottlenecks. DBAs can also use this feature for capacity planning,
    to see whether their typical workload encounters any performance
    bottlenecks with a particular combination of CPU, RAM, and disk
    storage; and if so, to judge whether performance can be improved by
    increasing the capacity of some part of the system.
  
    To use this feature to examine InnoDB
    performance:
        You must be generally familiar with how to use the
        Performance Schema
        feature. For example, you should know how to enable
        instruments and consumers, and how to query
        performance_schema tables to retrieve data.
        For an introductory overview, see
        Section 22.1, “Performance Schema Quick Start”.
      
        You should be familiar with Performance Schema instruments that
        are available for InnoDB. To view
        InnoDB-related instruments, you can query the
        setup_instruments table for
        instrument names that contain 'innodb'.
      
mysql> SELECT * FROM setup_instruments WHERE NAME LIKE '%innodb%'; +-------------------------------------------------------+---------+-------+ | NAME | ENABLED | TIMED | +-------------------------------------------------------+---------+-------+ | wait/synch/mutex/innodb/commit_cond_mutex | NO | NO | | wait/synch/mutex/innodb/innobase_share_mutex | NO | NO | | wait/synch/mutex/innodb/autoinc_mutex | NO | NO | | wait/synch/mutex/innodb/buf_pool_mutex | NO | NO | | wait/synch/mutex/innodb/buf_pool_zip_mutex | NO | NO | | wait/synch/mutex/innodb/cache_last_read_mutex | NO | NO | | wait/synch/mutex/innodb/dict_foreign_err_mutex | NO | NO | | wait/synch/mutex/innodb/dict_sys_mutex | NO | NO | | wait/synch/mutex/innodb/file_format_max_mutex | NO | NO | | wait/synch/mutex/innodb/fil_system_mutex | NO | NO | | wait/synch/mutex/innodb/flush_list_mutex | NO | NO | | wait/synch/mutex/innodb/fts_bg_threads_mutex | NO | NO | | wait/synch/mutex/innodb/fts_delete_mutex | NO | NO | | wait/synch/mutex/innodb/fts_optimize_mutex | NO | NO | ... | wait/synch/rwlock/innodb/btr_search_latch | NO | NO | | wait/synch/rwlock/innodb/dict_operation_lock | NO | NO | | wait/synch/rwlock/innodb/fil_space_latch | NO | NO | | wait/synch/rwlock/innodb/checkpoint_lock | NO | NO | | wait/synch/rwlock/innodb/fts_cache_rw_lock | NO | NO | | wait/synch/rwlock/innodb/fts_cache_init_rw_lock | NO | NO | | wait/synch/rwlock/innodb/trx_i_s_cache_lock | NO | NO | | wait/synch/rwlock/innodb/trx_purge_latch | NO | NO | | wait/synch/rwlock/innodb/index_tree_rw_lock | NO | NO | | wait/synch/rwlock/innodb/index_online_log | NO | NO | | wait/synch/rwlock/innodb/dict_table_stats | NO | NO | | wait/synch/rwlock/innodb/hash_table_locks | NO | NO | | wait/synch/cond/innodb/commit_cond | NO | NO | | wait/io/file/innodb/innodb_data_file | YES | YES | | wait/io/file/innodb/innodb_log_file | YES | YES | | wait/io/file/innodb/innodb_temp_file | YES | YES | +-------------------------------------------------------+---------+-------+ 62 rows in set (0.00 sec)
        For additional information about the instrumented
        InnoDB objects, you can query Performance
        Schema
        instances
        tables, which provide additional information about
        instrumented objects. Instance tables relevant to
        InnoDB include:
            The mutex_instances table
          
            The rwlock_instances table
          
            The cond_instances table
          
            The file_instances table
          Mutexes and RW-locks related to the InnoDB
          buffer pool are not included in this coverage; the same
          applies to the output of the SHOW ENGINE INNODB
          MUTEX command.
        For example, to view information about instrumented
        InnoDB file objects seen by the Performance
        Schema when executing file I/O instrumentation, you might issue
        the following query:
      
mysql> SELECT * FROM file_instances WHERE EVENT_NAME LIKE '%innodb%'\G *************************** 1. row *************************** FILE_NAME: /path/to/mysql-5.6/data/ibdata1 EVENT_NAME: wait/io/file/innodb/innodb_data_file OPEN_COUNT: 2 *************************** 2. row *************************** FILE_NAME: /path/to/mysql-5.6/data/ib_logfile0 EVENT_NAME: wait/io/file/innodb/innodb_log_file OPEN_COUNT: 2 *************************** 3. row *************************** FILE_NAME: /path/to/mysql-5.6/data/ib_logfile1 EVENT_NAME: wait/io/file/innodb/innodb_log_file OPEN_COUNT: 2 ...
        You should be familiar with
        performance_schema tables that store
        InnoDB event data. Tables relevant to
        InnoDB-related events include:
The Wait Event tables, which store wait events.
The Summary tables, which provide aggregated information for terminated events over time. Summary tables include file I/O summary tables, which aggregate information about I/O operations.
        If you are only interested in InnoDB-related
        objects, use the clause WHERE EVENT_NAME LIKE
        '%innodb%' or WHERE NAME LIKE
        '%innodb%' (as required) when querying these tables.
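        For example, the following query sketch restricts a file I/O
        summary table to InnoDB-related events (the
        columns shown are illustrative; actual values depend on your
        server and workload):

mysql> SELECT EVENT_NAME, COUNT_READ, COUNT_WRITE,
              SUM_NUMBER_OF_BYTES_READ, SUM_NUMBER_OF_BYTES_WRITE
       FROM performance_schema.file_summary_by_event_name
       WHERE EVENT_NAME LIKE '%innodb%';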
A mutex is a synchronization mechanism used in the code to enforce that only one thread at a given time can have access to a common resource. When two or more threads executing in the server need to access the same resource, the threads compete against each other. The first thread to obtain a lock on the mutex causes the other threads to wait until the lock is released.
      For InnoDB mutexes that are instrumented, mutex
      waits can be monitored using
      Performance Schema. Wait
      event data collected in Performance Schema tables can help
      identify mutexes with the most waits or the greatest total wait
      time, for example.
    
      The following example demonstrates how to enable
      InnoDB mutex wait instruments, how to enable
      associated consumers, and how to query wait event data.
          To view available InnoDB mutex wait
          instruments, query the Performance Schema
          setup_instruments table, as shown
          below. All InnoDB mutex wait instruments
          are disabled by default.
        
mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%wait/synch/mutex/innodb%'; +-------------------------------------------------------+---------+-------+ | NAME | ENABLED | TIMED | +-------------------------------------------------------+---------+-------+ | wait/synch/mutex/innodb/commit_cond_mutex | NO | NO | | wait/synch/mutex/innodb/innobase_share_mutex | NO | NO | | wait/synch/mutex/innodb/autoinc_mutex | NO | NO | | wait/synch/mutex/innodb/buf_pool_mutex | NO | NO | | wait/synch/mutex/innodb/buf_pool_zip_mutex | NO | NO | | wait/synch/mutex/innodb/cache_last_read_mutex | NO | NO | | wait/synch/mutex/innodb/dict_foreign_err_mutex | NO | NO | | wait/synch/mutex/innodb/dict_sys_mutex | NO | NO | | wait/synch/mutex/innodb/file_format_max_mutex | NO | NO | | wait/synch/mutex/innodb/fil_system_mutex | NO | NO | | wait/synch/mutex/innodb/flush_list_mutex | NO | NO | | wait/synch/mutex/innodb/fts_bg_threads_mutex | NO | NO | | wait/synch/mutex/innodb/fts_delete_mutex | NO | NO | | wait/synch/mutex/innodb/fts_optimize_mutex | NO | NO | | wait/synch/mutex/innodb/fts_doc_id_mutex | NO | NO | | wait/synch/mutex/innodb/fts_pll_tokenize_mutex | NO | NO | | wait/synch/mutex/innodb/log_flush_order_mutex | NO | NO | | wait/synch/mutex/innodb/hash_table_mutex | NO | NO | | wait/synch/mutex/innodb/ibuf_bitmap_mutex | NO | NO | | wait/synch/mutex/innodb/ibuf_mutex | NO | NO | | wait/synch/mutex/innodb/ibuf_pessimistic_insert_mutex | NO | NO | | wait/synch/mutex/innodb/log_sys_mutex | NO | NO | | wait/synch/mutex/innodb/mem_pool_mutex | NO | NO | | wait/synch/mutex/innodb/mutex_list_mutex | NO | NO | | wait/synch/mutex/innodb/page_zip_stat_per_index_mutex | NO | NO | | wait/synch/mutex/innodb/purge_sys_bh_mutex | NO | NO | | wait/synch/mutex/innodb/recv_sys_mutex | NO | NO | | wait/synch/mutex/innodb/recv_writer_mutex | NO | NO | | wait/synch/mutex/innodb/rseg_mutex | NO | NO | | wait/synch/mutex/innodb/rw_lock_list_mutex | NO | NO | | wait/synch/mutex/innodb/rw_lock_mutex | NO | NO | | wait/synch/mutex/innodb/srv_dict_tmpfile_mutex | NO | NO | | wait/synch/mutex/innodb/srv_innodb_monitor_mutex | NO | NO | | wait/synch/mutex/innodb/srv_misc_tmpfile_mutex | NO | NO | | wait/synch/mutex/innodb/srv_monitor_file_mutex | NO | NO | | wait/synch/mutex/innodb/buf_dblwr_mutex | NO | NO | | wait/synch/mutex/innodb/trx_undo_mutex | NO | NO | | wait/synch/mutex/innodb/srv_sys_mutex | NO | NO | | wait/synch/mutex/innodb/lock_mutex | NO | NO | | wait/synch/mutex/innodb/lock_wait_mutex | NO | NO | | wait/synch/mutex/innodb/trx_mutex | NO | NO | | wait/synch/mutex/innodb/srv_threads_mutex | NO | NO | | wait/synch/mutex/innodb/os_mutex | NO | NO | | wait/synch/mutex/innodb/ut_list_mutex | NO | NO | | wait/synch/mutex/innodb/trx_sys_mutex | NO | NO | | wait/synch/mutex/innodb/zip_pad_mutex | NO | NO | +-------------------------------------------------------+---------+-------+ 46 rows in set (0.00 sec)
          Some InnoDB mutex instances are created at
          server startup and are only instrumented if the associated
          instrument is also enabled at server startup. To ensure that
          all InnoDB mutex instances are instrumented
          and enabled, add the following
          performance-schema-instrument rule to your
          MySQL configuration file:
        
performance-schema-instrument='wait/synch/mutex/innodb/%=ON'
          If you do not require wait event data for all
          InnoDB mutexes, you can disable specific
          instruments by adding additional
          performance-schema-instrument rules to your
          MySQL configuration file. For example, to disable
          InnoDB mutex wait event instruments related
          to full-text search, add the following rule:
performance-schema-instrument='wait/synch/mutex/innodb/fts%=OFF'
            Rules with a longer prefix such as
            wait/synch/mutex/innodb/fts% take
            precedence over rules with shorter prefixes such as
            wait/synch/mutex/innodb/%.
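          For example, a configuration file fragment combining the two
          rules shown above enables all InnoDB mutex
          wait instruments except those related to full-text search:

[mysqld]
performance-schema-instrument='wait/synch/mutex/innodb/%=ON'
performance-schema-instrument='wait/synch/mutex/innodb/fts%=OFF'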
          After adding the
          performance-schema-instrument rules to your
          configuration file, restart the server. All the
          InnoDB mutexes except for those related to
          full text search are enabled. To verify, query the
          setup_instruments table. The
          ENABLED and TIMED
          columns should be set to YES for the
          instruments that you enabled.
        
mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%wait/synch/mutex/innodb%'; +-------------------------------------------------------+---------+-------+ | NAME | ENABLED | TIMED | +-------------------------------------------------------+---------+-------+ | wait/synch/mutex/innodb/commit_cond_mutex | YES | YES | | wait/synch/mutex/innodb/innobase_share_mutex | YES | YES | | wait/synch/mutex/innodb/autoinc_mutex | YES | YES | ... | wait/synch/mutex/innodb/zip_pad_mutex | YES | YES | +-------------------------------------------------------+---------+-------+ 46 rows in set (0.00 sec)
          Enable wait event consumers by updating the
          setup_consumers table. Wait event
          consumers are disabled by default.
        
mysql> UPDATE performance_schema.setup_consumers SET enabled = 'YES' WHERE name like 'events_waits%'; Query OK, 3 rows affected (0.00 sec) Rows matched: 3 Changed: 3 Warnings: 0
          You can verify that wait event consumers are enabled by
          querying the setup_consumers
          table. The events_waits_current,
          events_waits_history, and
          events_waits_history_long
          consumers should be enabled.
        
mysql> SELECT * FROM performance_schema.setup_consumers; +--------------------------------+---------+ | NAME | ENABLED | +--------------------------------+---------+ | events_stages_current | NO | | events_stages_history | NO | | events_stages_history_long | NO | | events_statements_current | YES | | events_statements_history | NO | | events_statements_history_long | NO | | events_waits_current | YES | | events_waits_history | YES | | events_waits_history_long | YES | | global_instrumentation | YES | | thread_instrumentation | YES | | statements_digest | YES | +--------------------------------+---------+ 12 rows in set (0.00 sec)
Once instruments and consumers are enabled, run the workload that you want to monitor. In this example, the mysqlslap load emulation client is used to simulate a workload.
shell> ./mysqlslap --auto-generate-sql --concurrency=100 --iterations=10 --number-of-queries=1000 --number-char-cols=6 --number-int-cols=6;
          Query the wait event data. In this example, wait event data is
          queried from the
          events_waits_summary_global_by_event_name
          table which aggregates data found in the
          events_waits_current,
          events_waits_history, and
          events_waits_history_long tables.
          Data is summarized by event name
          (EVENT_NAME), which is the name of the
          instrument that produced the event. Summarized data includes:
              COUNT_STAR
            
The number of summarized wait events.
              SUM_TIMER_WAIT
            
The total wait time of the summarized timed wait events.
              MIN_TIMER_WAIT
            
The minimum wait time of the summarized timed wait events.
              AVG_TIMER_WAIT
            
The average wait time of the summarized timed wait events.
              MAX_TIMER_WAIT
            
The maximum wait time of the summarized timed wait events.
          The following query returns the instrument name
          (EVENT_NAME), the number of wait events
          (COUNT_STAR), and the total wait time for
          the events for that instrument
          (SUM_TIMER_WAIT). Because waits are timed
          in picoseconds (trillionths of a second) by default, wait
          times are divided by 1000000000 to show wait times in
          milliseconds. Data is presented in descending order, by the
          number of summarized wait events
          (COUNT_STAR). You can adjust the
          ORDER BY clause to order the data by total
          wait time.
        
mysql> SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT/1000000000 SUM_TIMER_WAIT_MS FROM performance_schema.events_waits_summary_global_by_event_name WHERE SUM_TIMER_WAIT > 0 AND EVENT_NAME LIKE 'wait/synch/mutex/innodb/%' ORDER BY COUNT_STAR DESC; +--------------------------------------------------+------------+-------------------+ | EVENT_NAME | COUNT_STAR | SUM_TIMER_WAIT_MS | +--------------------------------------------------+------------+-------------------+ | wait/synch/mutex/innodb/srv_threads_mutex | 454474 | 16.3490 | | wait/synch/mutex/innodb/trx_mutex | 107586 | 8.2415 | | wait/synch/mutex/innodb/log_sys_mutex | 37292 | 460.9484 | | wait/synch/mutex/innodb/redo_rseg_mutex | 31077 | 3.2212 | | wait/synch/mutex/innodb/trx_sys_mutex | 20005 | 447.8246 | | wait/synch/mutex/innodb/lock_mutex | 19293 | 728.4862 | | wait/synch/mutex/innodb/fil_system_mutex | 15789 | 1.3710 | | wait/synch/mutex/innodb/dict_sys_mutex | 9606 | 319.7662 | | wait/synch/mutex/innodb/rw_lock_list_mutex | 8351 | 0.2046 | | wait/synch/mutex/innodb/trx_undo_mutex | 6274 | 0.5573 | | wait/synch/mutex/innodb/buf_pool_mutex | 3230 | 7.3861 | | wait/synch/mutex/innodb/trx_pool_mutex | 2225 | 9591.4174 | | wait/synch/mutex/innodb/innobase_share_mutex | 2025 | 0.1891 | | wait/synch/mutex/innodb/flush_list_mutex | 1421 | 0.1452 | | wait/synch/mutex/innodb/trx_pool_manager_mutex | 1115 | 0.1503 | | wait/synch/mutex/innodb/file_format_max_mutex | 1032 | 0.0445 | | wait/synch/mutex/innodb/buf_dblwr_mutex | 594 | 0.0950 | | wait/synch/mutex/innodb/log_flush_order_mutex | 502 | 0.0489 | | wait/synch/mutex/innodb/srv_sys_mutex | 417 | 0.6856 | | wait/synch/mutex/innodb/recalc_pool_mutex | 406 | 0.0842 | | wait/synch/mutex/innodb/purge_sys_pq_mutex | 365 | 0.0302 | | wait/synch/mutex/innodb/recv_sys_mutex | 273 | 0.0121 | | wait/synch/mutex/innodb/lock_wait_mutex | 170 | 0.0852 | | wait/synch/mutex/innodb/noredo_rseg_mutex | 32 | 0.0008 | | wait/synch/mutex/innodb/ibuf_mutex | 10 | 0.0024 | | wait/synch/mutex/innodb/srv_innodb_monitor_mutex | 2 | 0.0005 | | wait/synch/mutex/innodb/autoinc_mutex | 2 | 0.0004 | | wait/synch/mutex/innodb/recv_writer_mutex | 1 | 0.0001 | +--------------------------------------------------+------------+-------------------+ 28 rows in set (0.03 sec)
            The preceding result set includes wait event data produced
            during the startup process. To exclude this data, you can
            truncate the
            events_waits_summary_global_by_event_name
            table immediately after startup and before running your
            workload. However, the truncate operation itself may produce
            a negligible amount of wait event data.
mysql> TRUNCATE performance_schema.events_waits_summary_global_by_event_name;
    InnoDB monitors provide information about the
    InnoDB internal state. This information is useful
    for performance tuning.
      There are four types of InnoDB monitors:
          The standard InnoDB Monitor displays the
          following types of information:
Table and record locks held by each active transaction.
Lock waits of a transaction.
Semaphore waits of threads.
Pending file I/O requests.
Buffer pool statistics.
              Purge and change buffer merge activity of the main
              InnoDB thread.
          The InnoDB Lock Monitor prints additional
          lock information as part of the standard
          InnoDB Monitor output.
        
          The InnoDB Tablespace Monitor prints a list
          of file segments in the shared tablespace and validates the
          tablespace allocation data structures.
        
          The InnoDB Table Monitor prints the
          contents of the InnoDB internal data
          dictionary.
            The Tablespace Monitor and Table Monitor are deprecated and
            will be removed in a future MySQL release. Similar
            information for the Table Monitor can be obtained from
            InnoDB
            INFORMATION_SCHEMA tables. See
            Section 21.29, “INFORMATION_SCHEMA Tables for InnoDB”.
      For additional information about InnoDB
      monitors, see:
Mark Leith: InnoDB Table and Tablespace Monitors
      When you enable InnoDB monitors for periodic
      output, InnoDB writes their output to the
      mysqld server standard error output
      (stderr). In this case, no output is sent to
      clients. When switched on, InnoDB monitors
      print data about every 15 seconds. Server output usually is
      directed to the error log (see Section 5.2.2, “The Error Log”). This
      data is useful in performance tuning. On Windows, start the server
      from a command prompt in a console window with the
      --console option if you want to
      direct the output to the window rather than to the error log.
    
      InnoDB sends diagnostic output to
      stderr or to files rather than to
      stdout or fixed-size memory buffers, to avoid
      potential buffer overflows. As a side effect, the output of
      SHOW ENGINE INNODB
      STATUS is written to a status file in the MySQL data
      directory every fifteen seconds. The name of the file is
      innodb_status.pid,
      where pid is the server process ID.
      InnoDB removes the file for a normal shutdown.
      If abnormal shutdowns have occurred, instances of these status
      files may be present and must be removed manually. Before removing
      them, you might want to examine them to see whether they contain
      useful information about the cause of abnormal shutdowns. The
      innodb_status.pid
      file is created only if the configuration option
      innodb-status-file=1 is set.
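      For example, you might enable the status file by adding the option
      to the [mysqld] section of your configuration
      file:

[mysqld]
innodb-status-file=1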
    
      InnoDB monitors should be enabled only when you
      actually want to see monitor information because output generation
      does result in some performance decrement. Also, if you enable
      monitor output, your error log may become quite large if you
      forget to disable it later.
        To assist with troubleshooting, InnoDB
        temporarily enables standard InnoDB Monitor
        output under certain conditions. For more information, see
        Section 14.18, “InnoDB Troubleshooting”.
Each monitor begins with a header containing a timestamp and the monitor name. For example:
===================================== 2014-10-16 16:28:15 7feee43c5700 INNODB MONITOR OUTPUT =====================================
      The header for the standard InnoDB Monitor
      (INNODB MONITOR OUTPUT) is also used for the
      Lock Monitor because the latter produces the same output with the
      addition of extra lock information.
    
      Enabling an InnoDB monitor for periodic output
      involves using a CREATE TABLE statement to
      create a specially named InnoDB table that is
      associated with the monitor. For example, to enable the standard
      InnoDB Monitor, you would create an
      InnoDB table named
      innodb_monitor.
    
      Using CREATE TABLE syntax is just a
      way to pass a command to the InnoDB engine
      through MySQL's SQL parser. The only things that matter are the
      table name and that it be an InnoDB table. The
      structure of the table is not relevant. If you shut down the
      server, the monitor does not restart automatically when you
      restart the server. Drop the monitor table and issue a new
      CREATE TABLE statement to start the
      monitor.
        The CREATE TABLE method of
        enabling InnoDB monitors is deprecated and
        may be removed in a future release. As of MySQL 5.6.16, you can
        enable the standard InnoDB Monitor and
        InnoDB Lock Monitor using the
        innodb_status_output and
        innodb_status_output_locks
        system variables.
      The PROCESS privilege is required
      to enable and disable InnoDB Monitors.
      To enable the standard InnoDB Monitor for periodic output, create
      the innodb_monitor table:
    
CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;
      To disable the standard InnoDB Monitor, drop
      the table:
    
DROP TABLE innodb_monitor;
      As of MySQL 5.6.16, you can also enable the standard
      InnoDB Monitor by setting the
      innodb_status_output system
      variable to ON.
    
SET GLOBAL innodb_status_output=ON;
      To disable the standard InnoDB Monitor, set
      innodb_status_output to
      OFF.
    
      When you shut down the server, the
      innodb_status_output variable is
      set to the default OFF value.
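      Because the setting does not persist across restarts, you can verify
      the current monitor settings after the server comes back up. A minimal
      sketch, assuming MySQL 5.6.16 or later:
    
SHOW GLOBAL VARIABLES LIKE 'innodb_status_output%';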
      As an alternative to enabling the standard
      InnoDB Monitor for periodic output, you can
      obtain standard InnoDB Monitor output on demand
      using the SHOW ENGINE
      INNODB STATUS SQL statement, which fetches the output to
      your client program. If you are using the mysql
      interactive client, the output is more readable if you replace the
      usual semicolon statement terminator with \G:
    
mysql> SHOW ENGINE INNODB STATUS\G
      SHOW ENGINE INNODB
      STATUS output also includes InnoDB
      Lock Monitor data if the InnoDB Lock Monitor is
      enabled.
      To enable the InnoDB Lock Monitor for periodic
      output, create the innodb_lock_monitor table:
    
CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;
      To disable the InnoDB Lock Monitor, drop the
      table:
    
DROP TABLE innodb_lock_monitor;
      As of MySQL 5.6.16, you can also enable the
      InnoDB Lock Monitor by setting the
      innodb_status_output_locks system
      variable to ON. As with the
      CREATE TABLE method for enabling
      InnoDB Monitors, both the
      InnoDB standard Monitor and
      InnoDB Lock Monitor must be enabled to have
      InnoDB Lock Monitor data printed periodically:
    
SET GLOBAL innodb_status_output=ON;
SET GLOBAL innodb_status_output_locks=ON;
      When you shut down the server, the
      innodb_status_output and
      innodb_status_output_locks
      variables are set to the default OFF value.
    
      To disable the InnoDB Lock Monitor, set
      innodb_status_output_locks to
      OFF. Set
      innodb_status_output to OFF to
      also disable the standard InnoDB Monitor.
        To enable the InnoDB Lock Monitor for
        SHOW ENGINE INNODB
        STATUS output, you are only required to enable
        innodb_status_output_locks.
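        For example, to view lock details on demand without enabling
        periodic output (a minimal sketch, assuming MySQL 5.6.16 or later):
      
SET GLOBAL innodb_status_output_locks=ON;
SHOW ENGINE INNODB STATUS\G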
      To enable the InnoDB Tablespace Monitor for
      periodic output, create the
      innodb_tablespace_monitor table:
    
CREATE TABLE innodb_tablespace_monitor (a INT) ENGINE=INNODB;
      To disable the standard InnoDB Tablespace
      Monitor, drop the table:
    
DROP TABLE innodb_tablespace_monitor;
The Tablespace Monitor is deprecated and will be removed in a future MySQL release.
      To enable the InnoDB Table Monitor for periodic
      output, create the innodb_table_monitor table:
    
CREATE TABLE innodb_table_monitor (a INT) ENGINE=INNODB;
      To disable the InnoDB Table Monitor, drop the
      table:
    
DROP TABLE innodb_table_monitor;
The Table Monitor is deprecated and will be removed in a future MySQL release.
      The Lock Monitor is the same as the standard Monitor except that
      it includes additional lock information. Enabling either monitor
      for periodic output turns on the same output stream, but the
      stream includes extra information if the Lock Monitor is enabled.
      For example, if you enable the standard InnoDB
      Monitor and InnoDB Lock Monitor, that turns on
      a single output stream. The stream includes extra lock information
      until you disable the Lock Monitor.
    
Example InnoDB Monitor output (as of MySQL 5.6.22):
mysql> SHOW ENGINE INNODB STATUS\G
*************************** 1. row ***************************
  Type: InnoDB
  Name: 
Status: 
=====================================
2014-10-17 10:33:50 7f47bcd64700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 6 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 167 srv_active, 0 srv_shutdown, 3023 srv_idle
srv_master_thread log flush and writes: 3190
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 1040
OS WAIT ARRAY INFO: signal count 959
Mutex spin waits 677, rounds 20336, OS waits 644
RW-shared spins 180, rounds 5400, OS waits 180
RW-excl spins 0, rounds 6420, OS waits 214
Spin rounds per wait: 30.04 mutex, 30.00 RW-shared, 6420.00 RW-excl
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2014-10-17 09:51:31 7f47bcde6700 Transaction:
TRANSACTION 436786, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1184, 3 row lock(s), undo log entries 3
MySQL thread id 1, OS thread handle 0x7f47bcde6700, query id 96 localhost 
root update
INSERT INTO child VALUES
    (NULL, 1)
    , (NULL, 2)
    , (NULL, 3)
    , (NULL, 4)
    , (NULL, 5)
    , (NULL, 6)
Foreign key constraint fails for table `mysql`.`child`:
,
  CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_id`) REFERENCES `parent` 
  (`id`)
 ON DELETE CASCADE ON UPDATE CASCADE
Trying to add in child table, in index `par_ind` tuple:
DATA TUPLE: 2 fields;
 0: len 4; hex 80000003; asc     ;;
 1: len 4; hex 80000003; asc     ;;
But in parent table `mysql`.`parent`, in index `PRIMARY`,
the closest match we can find is record:
PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 80000004; asc     ;;
 1: len 6; hex 00000006aa26; asc      &;;
 2: len 7; hex 9d000001610137; asc     a 7;;
------------------------
LATEST DETECTED DEADLOCK
------------------------
2014-10-17 09:52:38 7f47bcde6700
*** (1) TRANSACTION:
TRANSACTION 436801, ACTIVE 12 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 360, 1 row lock(s)
MySQL thread id 2, OS thread handle 0x7f47bcda5700, query id 102 localhost 
root updating
DELETE FROM t WHERE i = 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3693 page no 3 n bits 72 index `GEN_CLUST_INDEX` of 
table `mysql`.`t` trx id 436801 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 4; compact format; info 
bits 0
 0: len 6; hex 000000003a00; asc     : ;;
 1: len 6; hex 00000006aa3f; asc      ?;;
 2: len 7; hex ad0000021d0110; asc        ;;
 3: len 4; hex 80000001; asc     ;;
*** (2) TRANSACTION:
TRANSACTION 436800, ACTIVE 34 sec starting index read
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1184, 3 row lock(s)
MySQL thread id 1, OS thread handle 0x7f47bcde6700, query id 103 localhost 
root updating
DELETE FROM t WHERE i = 1
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 3693 page no 3 n bits 72 index `GEN_CLUST_INDEX` of 
table `mysql`.`t` trx id 436800 lock mode S
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info 
bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;
Record lock, heap no 2 PHYSICAL RECORD: n_fields 4; compact format; info 
bits 0
 0: len 6; hex 000000003a00; asc     : ;;
 1: len 6; hex 00000006aa3f; asc      ?;;
 2: len 7; hex ad0000021d0110; asc        ;;
 3: len 4; hex 80000001; asc     ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3693 page no 3 n bits 72 index `GEN_CLUST_INDEX` of 
table `mysql`.`t` trx id 436800 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 4; compact format; info 
bits 0
 0: len 6; hex 000000003a00; asc     : ;;
 1: len 6; hex 00000006aa3f; asc      ?;;
 2: len 7; hex ad0000021d0110; asc        ;;
 3: len 4; hex 80000001; asc     ;;
*** WE ROLL BACK TRANSACTION (1)
------------
TRANSACTIONS
------------
Trx id counter 437661
Purge done for trx's n:o < 437657 undo n:o < 0 state: running but idle
History list length 371
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 10, OS thread handle 0x7f47bcd64700, query id 1001 localhost
root init
SHOW ENGINE INNODB STATUS
---TRANSACTION 436801, not started
MySQL thread id 2, OS thread handle 0x7f47bcda5700, query id 102 localhost 
root cleaning up
---TRANSACTION 437660, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
43 lock struct(s), heap size 6544, 6474 row lock(s), undo log entries 7124
MySQL thread id 14, OS thread handle 0x7f47bcde6700, query id 1000 localhost 
root update
INSERT INTO `dept_emp` VALUES (100258,'d002','1994-03-21','9999-01-01'),
(100259, 'd005','1998-11-04','9999-01-01'),(100259,'d008','1988-02-03',
'1998-11-04'),(100 260,'d005','1998-09-18','9999-01-01'),(100261,'d004',
'1989-03-11','9999-01-01'), (100262,'d008','1996-08-12','9999-01-01'),
(100263,'d002','1998-06-24','1998-10-0 5'),(100264,'d005','1989-11-09',
'9999-01-01'),(100265,'d001','1992-06-27','9999- 01-01'),(100266,'d009',
'1990-09-10','9999-01-01'),(100267,'d009','1992-04-14','9 999-01-01'),
(100268,'d005','1998-05-01','2000-04-07'),(100269,'d007','1994-01-02',
'1999-09-18'),(100269,'d009','1999-09-
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
344 OS file reads, 45666 OS file writes, 4030 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 202.80 writes/s, 48.33 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 4425293, node heap has 143 buffer(s)
137083.82 hash searches/s, 2495.92 non-hash searches/s
---
LOG
---
Log sequence number 3091027710
Log flushed up to   3090240098
Pages flushed up to 3074432960
Last checkpoint at  3050856266
0 pending log writes, 0 pending chkp writes
1187 log i/o's done, 14.67 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 2197815296; in additional pool allocated 0
Dictionary memory allocated 155455
Buffer pool size   131071
Free buffers       92158
Database pages     38770
Old database pages 14271
Modified db pages  619
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 4, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 322, created 38448, written 42083
0.00 reads/s, 222.30 creates/s, 159.47 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 
0.00/s
LRU len: 38770, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
----------------------
INDIVIDUAL BUFFER POOL INFO
----------------------
---BUFFER POOL 0
Buffer pool size   65536
Free buffers       46120
Database pages     19345
Old database pages 7121
Modified db pages  291
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 3, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 163, created 19182, written 21149
0.00 reads/s, 103.48 creates/s, 83.15 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 
0.00/s
LRU len: 19345, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 1
Buffer pool size   65535
Free buffers       46038
Database pages     19425
Old database pages 7150
Modified db pages  328
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 159, created 19266, written 20934
0.00 reads/s, 118.81 creates/s, 76.32 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 
0.00/s
LRU len: 19425, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Main thread process no. 54607, id 139946075744000, state: sleeping
Number of rows inserted 12163964, updated 0, deleted 3, read 4
67807.03 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
      Standard InnoDB Monitor output is limited to
      1MB when produced using the
      SHOW ENGINE INNODB
      STATUS statement. This limit does not apply to output
      written to the server's error output.
    
Some notes on the output sections:
      Status
    
      This section shows the timestamp, the monitor name, and the number
      of seconds that per-second averages are based on. The number of
      seconds is the elapsed time between the current time and the last
      time InnoDB Monitor output was printed.
    
      BACKGROUND
      THREAD
    
      The srv_master_thread lines show work done by
      the main background thread.
    
      SEMAPHORES
    
      This section reports threads waiting for a semaphore and
      statistics on how many times threads have needed a spin or a wait
      on a mutex or a rw-lock semaphore. A large number of threads
      waiting for semaphores may be a result of disk I/O, or contention
      problems inside InnoDB. Contention can be due
      to heavy parallelism of queries or problems in operating system
      thread scheduling. Setting the
      innodb_thread_concurrency system
      variable smaller than the default value might help in such
      situations. The Spin rounds per wait line shows
      the number of spinlock rounds per OS wait for a mutex.
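      For example, to cap the number of concurrently executing threads
      inside InnoDB when semaphore contention is suspected (a sketch only;
      the appropriate value depends on your workload):
    
SET GLOBAL innodb_thread_concurrency = 16;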
    
      LATEST FOREIGN KEY
      ERROR
    
This section provides information about the most recent foreign key constraint error. It is not present if no such error has occurred. The contents include the statement that failed as well as information about the constraint that failed and the referenced and referencing tables.
      LATEST DETECTED
      DEADLOCK
    
      This section provides information about the most recent deadlock.
      It is not present if no deadlock has occurred. The contents show
      which transactions are involved, the statement each was attempting
      to execute, the locks they have and need, and which transaction
      InnoDB decided to roll back to break the
      deadlock. The lock modes reported in this section are explained in
      Section 14.2.2.1, “InnoDB Lock Modes”.
    
      TRANSACTIONS
    
If this section reports lock waits, your applications might have lock contention. The output can also help to trace the reasons for transaction deadlocks.
      FILE I/O
    
      This section provides information about threads that
      InnoDB uses to perform various types of I/O.
      The first few of these are dedicated to general
      InnoDB processing. The contents also display
      information for pending I/O operations and statistics for I/O
      performance.
    
      The number of these threads is controlled by the
      innodb_read_io_threads and
      innodb_write_io_threads
      parameters. See Section 14.11, “InnoDB Startup Options and System Variables”.
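      For example, to check how many read and write I/O threads the server
      is currently configured with (these variables are not dynamic and must
      be set at startup):
    
SHOW GLOBAL VARIABLES LIKE 'innodb_%_io_threads';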
    
      INSERT BUFFER AND ADAPTIVE HASH
      INDEX
    
      This section shows the status of the InnoDB
      insert buffer (also referred to as the
      change buffer) and the
      adaptive hash index.
    
Change buffer status information includes:
          size: The number of pages used within the
          change buffer. Change buffer size is equal to seg
          size - (1 + free list len); see the worked example
          following this list. The 1
          + value represents the change buffer header page.
        
          free list len: The number of pages free
          within the change buffer.
        
          seg size: The size of the change buffer, in
          pages.
        
          merges: The total number of change buffer
          merges.
        
          merged operations - insert: The number of
          inserted records merged.
        
          merged operations - delete mark: The number
          of deleted records merged.
        
          merged operations - delete: The number of
          purge records merged.
        
          discarded operations - insert: The number
          of insert merge operations discarded.
        
          discarded operations - delete mark: The
          number of delete merge operations discarded.
        
          discarded operations - delete: The number
          of purge merge operations discarded.
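      As a worked example, in the sample output shown earlier
      (Ibuf: size 1, free list len 0, seg size 2), the
      size value of 1 follows from the formula above:
      2 - (1 + 0) = 1.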
For related information, see Section 14.2.6.5, “Change Buffer”.
Adaptive hash index status information includes:
          Hash table size: The total number of array
          cells allocated to the adaptive hash index.
        
          node heap has # buffer(s): The total number of pages
          allocated to the adaptive hash index.
        
          hash searches/s: The per second average of
          searches satisfied by the adaptive hash index.
        
          non-hash searches/s: The per second average
          of searches not satisfied by the adaptive hash index.
For related information, see Section 14.2.6.6, “Adaptive Hash Indexes”.
      LOG
    
      This section displays information about the
      InnoDB log. The contents include the current
      log sequence number, how far the log has been flushed to disk, and
      the position at which InnoDB last took a
      checkpoint. (See Section 14.9.3, “InnoDB Checkpoints”.) The
      section also displays information about pending writes and write
      performance statistics.
    
      BUFFER POOL AND
      MEMORY
    
This section gives you statistics on pages read and written. You can calculate from these numbers how many data file I/O operations your queries currently are doing.
For additional information about the operation of the buffer pool, see Section 8.10.1, “The InnoDB Buffer Pool”.
      ROW OPERATIONS
    
This section shows what the main thread is doing, including the number and performance rate for each type of row operation.
        The InnoDB Tablespace Monitor is deprecated
        and may be removed in a future release.
      The InnoDB Tablespace Monitor prints
      information about the file segments in the shared tablespace and
      validates the tablespace allocation data structures. The
      Tablespace Monitor does not describe file-per-table tablespaces
      created with the
      innodb_file_per_table option.
    
      Example InnoDB Tablespace Monitor output:
    
================================================
090408 21:28:09 INNODB TABLESPACE MONITOR OUTPUT
================================================
FILE SPACE INFO: id 0
size 13440, free limit 3136, free extents 28
not full frag extents 2: used pages 78, full frag extents 3
first seg id not used 0 23845
SEGMENT id 0 1 space 0; page 2; res 96 used 46; full ext 0
fragm pages 32; free extents 0; not full extents 1: pages 14
SEGMENT id 0 2 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 3 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
...
SEGMENT id 0 15 space 0; page 2; res 160 used 160; full ext 2
fragm pages 32; free extents 0; not full extents 0: pages 0
SEGMENT id 0 488 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
...
SEGMENT id 0 171 space 0; page 2; res 592 used 481; full ext 7
fragm pages 16; free extents 0; not full extents 2: pages 17
SEGMENT id 0 172 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 173 space 0; page 2; res 96 used 44; full ext 0
fragm pages 32; free extents 0; not full extents 1: pages 12
...
SEGMENT id 0 601 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
NUMBER of file segments: 73
Validating tablespace
Validation ok
---------------------------------------
END OF INNODB TABLESPACE MONITOR OUTPUT
=======================================
The Tablespace Monitor output includes information about the shared tablespace as a whole, followed by a list containing a breakdown for each segment within the tablespace.
In this example using the default page size, the tablespace consists of database pages that are 16KB each. The pages are grouped into extents of size 1MB (64 consecutive pages).
The initial part of the output that displays overall tablespace information has this format:
FILE SPACE INFO: id 0
size 13440, free limit 3136, free extents 28
not full frag extents 2: used pages 78, full frag extents 3
first seg id not used 0 23845
Overall tablespace information includes these values:
          id: The tablespace ID. A value of 0 refers
          to the shared tablespace.
        
          size: The current tablespace size in pages.
        
          free limit: The minimum page number for
          which the free list has not been initialized. Pages at or
          above this limit are free.
        
          free extents: The number of free extents.
        
          not full frag extents, used
          pages: The number of fragment extents that are not
          completely filled, and the number of pages in those extents
          that have been allocated.
        
          full frag extents: The number of completely
          full fragment extents.
        
          first seg id not used: The first unused
          segment ID.
Individual segment information has this format:
SEGMENT id 0 15 space 0; page 2; res 160 used 160; full ext 2
fragm pages 32; free extents 0; not full extents 0: pages 0
Segment information includes these values:
      id: The segment ID.
    
      space, page: The tablespace
      number and page within the tablespace where the segment
      “inode” is located. A tablespace number of 0
      indicates the shared tablespace. InnoDB uses
      inodes to keep track of segments in the tablespace. The other
      fields displayed for a segment (id,
      res, and so forth) are derived from information
      in the inode.
    
      res: The number of pages allocated (reserved)
      for the segment.
    
      used: The number of allocated pages in use by
      the segment.
    
      full ext: The number of extents allocated for
      the segment that are completely used.
    
      fragm pages: The number of initial pages that
      have been allocated to the segment.
    
      free extents: The number of extents allocated
      for the segment that are completely unused.
    
      not full extents: The number of extents
      allocated for the segment that are partially used.
    
      pages: The number of pages used within the
      not-full extents.
    
      When a segment grows, it starts as a single page, and
      InnoDB allocates the first pages for it one at
      a time, up to 32 pages (this is the fragm pages
      value). After that, InnoDB allocates complete
      extents. InnoDB can add up to 4 extents at a
      time to a large segment to ensure good sequentiality of data.
    
The example segment shown earlier has 32 fragment pages plus 2 full extents (64 pages each), for a total of 160 pages used out of 160 pages allocated. The following segment has 32 fragment pages and one partially full extent using 14 pages, for a total of 46 pages used out of 96 pages allocated:
SEGMENT id 0 1 space 0; page 2; res 96 used 46; full ext 0
fragm pages 32; free extents 0; not full extents 1: pages 14
      It is possible for a segment that has extents allocated to it to
      have a fragm pages value less than 32 if some
      of the individual pages have been deallocated subsequent to extent
      allocation.
        The InnoDB Table Monitor is deprecated and
        may be removed in a future release. Similar information can be
        obtained from InnoDB
        INFORMATION_SCHEMA tables. See
        Section 21.29, “INFORMATION_SCHEMA Tables for InnoDB”.
      The InnoDB Table Monitor prints the contents of
      the InnoDB internal data dictionary.
    
      The output contains one section per table. The
      SYS_FOREIGN and
      SYS_FOREIGN_COLS sections are for internal data
      dictionary tables that maintain information about foreign keys.
      There are also sections for the Table Monitor table and each
      user-created InnoDB table. Suppose that the
      following two tables have been created in the
      test database:
    
CREATE TABLE parent
(
  par_id    INT NOT NULL,
  fname      CHAR(20),
  lname      CHAR(20),
  PRIMARY KEY (par_id),
  UNIQUE INDEX (lname, fname)
) ENGINE = INNODB;
CREATE TABLE child
(
  par_id      INT NOT NULL,
  child_id    INT NOT NULL,
  name        VARCHAR(40),
  birth       DATE,
  weight      DECIMAL(10,2),
  misc_info   VARCHAR(255),
  last_update TIMESTAMP,
  PRIMARY KEY (par_id, child_id),
  INDEX (name),
  FOREIGN KEY (par_id) REFERENCES parent (par_id)
    ON DELETE CASCADE
    ON UPDATE CASCADE
) ENGINE = INNODB;
Then the Table Monitor output will look something like this (reformatted slightly):
===========================================
090420 12:09:32 INNODB TABLE MONITOR OUTPUT
===========================================
--------------------------------------
TABLE: name SYS_FOREIGN, id 0 11, columns 7, indexes 3, appr.rows 1
  COLUMNS: ID: DATA_VARCHAR DATA_ENGLISH len 0;
           FOR_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
           REF_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
           N_COLS: DATA_INT len 4;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name ID_IND, id 0 11, fields 1/6, uniq 1, type 3
   root page 46, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  ID DB_TRX_ID DB_ROLL_PTR FOR_NAME REF_NAME N_COLS
  INDEX: name FOR_IND, id 0 12, fields 1/2, uniq 2, type 0
   root page 47, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  FOR_NAME ID
  INDEX: name REF_IND, id 0 13, fields 1/2, uniq 2, type 0
   root page 48, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  REF_NAME ID
--------------------------------------
TABLE: name SYS_FOREIGN_COLS, id 0 12, columns 7, indexes 1, appr.rows 1
  COLUMNS: ID: DATA_VARCHAR DATA_ENGLISH len 0;
           POS: DATA_INT len 4;
           FOR_COL_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
           REF_COL_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name ID_IND, id 0 14, fields 2/6, uniq 2, type 3
   root page 49, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  ID POS DB_TRX_ID DB_ROLL_PTR FOR_COL_NAME REF_COL_NAME
--------------------------------------
TABLE: name test/child, id 0 14, columns 10, indexes 2, appr.rows 201
  COLUMNS: par_id: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 4;
           child_id: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 4;
           name: DATA_VARCHAR prtype 524303 len 40;
           birth: DATA_INT DATA_BINARY_TYPE len 3;
           weight: DATA_FIXBINARY DATA_BINARY_TYPE len 5;
           misc_info: DATA_VARCHAR prtype 524303 len 255;
           last_update: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 4;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name PRIMARY, id 0 17, fields 2/9, uniq 2, type 3
   root page 52, appr.key vals 201, leaf pages 5, size pages 6
   FIELDS:  par_id child_id DB_TRX_ID DB_ROLL_PTR name birth weight misc_info last_update
  INDEX: name name, id 0 18, fields 1/3, uniq 3, type 0
   root page 53, appr.key vals 210, leaf pages 1, size pages 1
   FIELDS:  name par_id child_id
  FOREIGN KEY CONSTRAINT test/child_ibfk_1: test/child ( par_id )
             REFERENCES test/parent ( par_id )
--------------------------------------
TABLE: name test/innodb_table_monitor, id 0 15, columns 4, indexes 1, appr.rows 0
  COLUMNS: i: DATA_INT DATA_BINARY_TYPE len 4;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name GEN_CLUST_INDEX, id 0 19, fields 0/4, uniq 1, type 1
   root page 193, appr.key vals 0, leaf pages 1, size pages 1
   FIELDS:  DB_ROW_ID DB_TRX_ID DB_ROLL_PTR i
--------------------------------------
TABLE: name test/parent, id 0 13, columns 6, indexes 2, appr.rows 299
  COLUMNS: par_id: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 4;
           fname: DATA_CHAR prtype 524542 len 20;
           lname: DATA_CHAR prtype 524542 len 20;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name PRIMARY, id 0 15, fields 1/5, uniq 1, type 3
   root page 50, appr.key vals 299, leaf pages 2, size pages 3
   FIELDS:  par_id DB_TRX_ID DB_ROLL_PTR fname lname
  INDEX: name lname, id 0 16, fields 2/3, uniq 2, type 2
   root page 51, appr.key vals 300, leaf pages 1, size pages 1
   FIELDS:  lname fname par_id
  FOREIGN KEY CONSTRAINT test/child_ibfk_1: test/child ( par_id )
             REFERENCES test/parent ( par_id )
-----------------------------------
END OF INNODB TABLE MONITOR OUTPUT
==================================
For each table, Table Monitor output contains a section that displays general information about the table and specific information about its columns, indexes, and foreign keys.
      The general information for each table includes the table name (in
      db_name/tbl_name format, except for internal tables), its ID, the
      number of columns and indexes, and an approximate row count.
      The COLUMNS part of a table section lists each
      column in the table. Information for each column indicates its
      name and data type characteristics. Some internal columns are
      added by InnoDB, such as
      DB_ROW_ID (row ID),
      DB_TRX_ID (transaction ID), and
      DB_ROLL_PTR (a pointer to the rollback/undo
      data).
          DATA_xxx: These
          symbols indicate the data type. There may be multiple
          DATA_xxx symbols
          for a given column.
        
          prtype: The column's “precise”
          type. This field includes information such as the column data
          type, character set code, nullability, signedness, and whether
          it is a binary string. This field is described in the
          innobase/include/data0type.h source file.
        
          len: The column length in bytes.
      Each INDEX part of the table section provides
      the name and characteristics of one table index:
          name: The index name. If the name is
          PRIMARY, the index is a primary key. If the
          name is GEN_CLUST_INDEX, the index is the
          clustered index that is created automatically if the table
          definition doesn't include a primary key or
          non-NULL unique index. See
          Section 14.2.6.2, “Clustered and Secondary Indexes”.
        
          id: The index ID.
        
          fields: The number of fields in the index,
          as a value in
          m/n format, where:
              m is the number of user-defined
              columns; that is, the number of columns you would see in
              the index definition in a CREATE TABLE
              statement.
            
              n is the total number of index
              columns, including those added internally. For the
              clustered index, the total includes the other columns in
              the table definition, plus any columns added internally.
              For a secondary index, the total includes the columns from
              the primary key that are not part of the secondary index.
          uniq: The number of leading fields that are
          enough to determine index values uniquely.
        
          type: The index type. This is a bit field.
          For example, 1 indicates a clustered index and 2 indicates a
          unique index, so a clustered index (which always contains
          unique values), will have a type value of
          3. An index with a type value of 0 is
          neither clustered nor unique. The flag values are defined in
          the innobase/include/dict0mem.h source
          file.
        
          root page: The index root page number.
        
          appr. key vals: The approximate index
          cardinality.
        
          leaf pages: The approximate number of leaf
          pages in the index.
        
          size pages: The approximate total number of
          pages in the index.
        
          FIELDS: The names of the fields in the
          index. For a clustered index that was generated automatically,
          the field list begins with the internal
          DB_ROW_ID (row ID) field.
          DB_TRX_ID and
          DB_ROLL_PTR are always added internally to
          the clustered index, following the fields that comprise the
          primary key. For a secondary index, the final fields are those
          from the primary key that are not part of the secondary index.
      The end of the table section lists the FOREIGN
      KEY definitions that apply to the table. This
      information appears whether the table is a referencing or
      referenced table.
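      As noted above, similar data dictionary information is available from
      the InnoDB INFORMATION_SCHEMA tables. For example, the following query
      (a sketch; it assumes the test.parent and
      test.child tables shown earlier and requires the
      PROCESS privilege) lists each table in the test
      database together with its indexes:
    
SELECT t.NAME AS TABLE_NAME, t.N_COLS,
       i.NAME AS INDEX_NAME, i.TYPE, i.N_FIELDS
  FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES t
  JOIN INFORMATION_SCHEMA.INNODB_SYS_INDEXES i
    ON i.TABLE_ID = t.TABLE_ID
 WHERE t.NAME LIKE 'test/%';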
The key to safe database management is making regular backups. Depending on your data volume, number of MySQL servers, and database workload, you can use these techniques, alone or in combination: hot backup with MySQL Enterprise Backup; cold backup by copying files while the MySQL server is shut down; physical backup for fast operation (especially for restore); logical backup with mysqldump for smaller data volumes or to record the structure of schema objects.
    The mysqlbackup command, part of the MySQL
    Enterprise Backup component, lets you back up a running MySQL
    instance, including InnoDB and
    MyISAM tables, with minimal disruption
    to operations while producing a consistent snapshot of the database.
    When mysqlbackup is copying
    InnoDB tables, reads and writes to both
    InnoDB and MyISAM tables can
    continue. During the copying of MyISAM tables,
    reads (but not writes) to those tables are permitted. MySQL
    Enterprise Backup can also create compressed backup files, and back
    up subsets of tables and databases. In conjunction with MySQL’s
    binary log, users can perform point-in-time recovery. MySQL
    Enterprise Backup is part of the MySQL Enterprise subscription. For
    more details, see Section 25.2, “MySQL Enterprise Backup Overview”.
    If you can shut down your MySQL server, you can make a binary backup
    that consists of all files used by InnoDB to
    manage its tables. Use the following procedure:
Do a slow shutdown of the MySQL server and make sure that it stops without errors.
        Copy all InnoDB data files
        (ibdata files and .ibd
        files) into a safe place.
      
        Copy all the .frm files for
        InnoDB tables to a safe place.
      
        Copy all InnoDB log files
        (ib_logfile files) to a safe place.
      
        Copy your my.cnf configuration file or
        files to a safe place.
    In addition to making binary backups as just described, regularly
    make dumps of your tables with mysqldump. A
    binary file might be corrupted without you noticing it. Dumped
    tables are stored into text files that are human-readable, so
    spotting table corruption becomes easier. Also, because the format
    is simpler, the chance for serious data corruption is smaller.
    mysqldump also has a
    --single-transaction option for
    making a consistent snapshot without locking out other clients. See
    Section 7.3.1, “Establishing a Backup Policy”.
  
    Replication works with InnoDB tables,
    so you can use MySQL replication capabilities to keep a copy of your
    database at database sites requiring high availability.
    To recover your InnoDB database to the present
    from the time at which the binary backup was made, you must run your
    MySQL server with binary logging turned on, even before taking the
    backup. To achieve point-in-time recovery after restoring a backup,
    you can apply changes from the binary log that occurred after the
    backup was made. See Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.
  
    To recover from a crash of your MySQL server, the only requirement
    is to restart it. InnoDB automatically checks the
    logs and performs a roll-forward of the database to the present.
    InnoDB automatically rolls back uncommitted
    transactions that were present at the time of the crash. During
    recovery, mysqld displays output something like
    this:
  
InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 13674004
InnoDB: Doing recovery: scanned up to log sequence number 0 13739520
InnoDB: Doing recovery: scanned up to log sequence number 0 13805056
InnoDB: Doing recovery: scanned up to log sequence number 0 13870592
InnoDB: Doing recovery: scanned up to log sequence number 0 13936128
...
InnoDB: Doing recovery: scanned up to log sequence number 0 20555264
InnoDB: Doing recovery: scanned up to log sequence number 0 20620800
InnoDB: Doing recovery: scanned up to log sequence number 0 20664692
InnoDB: 1 uncommitted transaction(s) which must be rolled back
InnoDB: Starting rollback of uncommitted transactions
InnoDB: Rolling back trx no 16745
InnoDB: Rolling back of trx no 16745 completed
InnoDB: Rollback of uncommitted transactions completed
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Apply batch completed
InnoDB: Started
mysqld: ready for connections
If your database becomes corrupted or disk failure occurs, you must perform the recovery using a backup. In the case of corruption, first find a backup that is not corrupted. After restoring the base backup, do a point-in-time recovery from the binary log files using mysqlbinlog and mysql to restore the changes that occurred after the backup was made.
    In some cases of database corruption, it is enough just to dump,
    drop, and re-create one or a few corrupt tables. You can use the
    CHECK TABLE SQL statement to check
    whether a table is corrupt, although CHECK
    TABLE naturally cannot detect every possible kind of
    corruption. You can use the Tablespace Monitor to check the
    integrity of the file space management inside the tablespace files.
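    For example, to check the hypothetical test.parent and
    test.child tables used in earlier examples (a sketch only):
  
CHECK TABLE test.parent, test.child;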
  
    In some cases, apparent database page corruption is actually due to
    the operating system corrupting its own file cache, and the data on
    disk may be okay. It is best first to try restarting your computer.
    Doing so may eliminate errors that appeared to be database page
    corruption. If MySQL still has trouble starting because of
    InnoDB consistency problems, see
    Section 14.18.2, “Forcing InnoDB Recovery” for steps to start the
    instance in a diagnostic mode where you can dump the data.
      InnoDB
      crash recovery consists
      of several steps:
          Applying the redo log:
          Redo log application is the first step and is performed during
          initialization, before accepting any connections. If all
          changes were flushed from the
          buffer pool to the
          tablespaces
          (ibdata* and *.ibd
          files) at the time of the shutdown or crash, the redo log
          application can be skipped. If the redo log files are missing
          at startup, InnoDB skips the redo log
          application.
        
          Removing redo logs to speed up the recovery process is not
          recommended, even if some data loss is acceptable. Removing
          redo logs should only be considered an option after a clean
          shutdown is performed, with
          innodb_fast_shutdown set to
          0 or 1.
        
Rolling back incomplete transactions: Any transactions that were active at the time of the crash or fast shutdown are rolled back. The time it takes to roll back an incomplete transaction can be three or four times the amount of time the transaction was active before it was interrupted, depending on server load.
          You cannot cancel transactions that are in the process of
          being rolled back. In extreme cases, when rolling back
          transactions is expected to take an exceptionally long time,
          it may be faster to start InnoDB with an
          innodb_force_recovery setting
          of 3 or greater. See
          Section 14.18.2, “Forcing InnoDB Recovery” for more
          information.
        
Change buffer merge: Applying changes from the change buffer (part of the system tablespace) to leaf pages of secondary indexes, as the index pages are read to the buffer pool.
Purge: Deleting delete-marked records that are no longer visible for any active transaction.
The steps that follow redo log application do not depend on the redo log (other than for logging the writes) and are performed in parallel with normal processing. Of these, only rollback of incomplete transactions is special to crash recovery. The insert buffer merge and the purge are performed during normal processing.
      After redo log application, InnoDB attempts to
      accept connections as early as possible, to reduce downtime. As
      part of crash recovery, InnoDB rolls back any
      transactions that were not committed or in XA
      PREPARE state when the server crashed. The rollback is
      performed by a background thread, executed in parallel with
      transactions from new connections. Until the rollback operation is
      completed, new connections may encounter locking conflicts with
      recovered transactions.
    
      In most situations, even if the MySQL server was killed
      unexpectedly in the middle of heavy activity, the recovery process
      happens automatically and no action is needed from the DBA. If a
      hardware failure or severe system error corrupted
      InnoDB data, MySQL might refuse to start. In
      that case, see Section 14.18.2, “Forcing InnoDB Recovery” for the
      steps to troubleshoot such an issue.
    
      For information about the binary log and InnoDB
      crash recovery, see Section 5.2.4, “The Binary Log”.
    MySQL replication works for InnoDB tables as it
    does for MyISAM tables. It is also possible to
    use replication in a way where the storage engine on the slave is
    not the same as the original storage engine on the master. For
    example, you can replicate modifications to an
    InnoDB table on the master to a
    MyISAM table on the slave.
  
    To set up a new slave for a master, make a copy of the
    InnoDB tablespace and the log files, as well as
    the .frm files of the InnoDB
    tables, and move the copies to the slave. If the
    innodb_file_per_table option is
    enabled, copy the .ibd files as well. For the
    proper procedure to do this, see Section 14.15, “InnoDB Backup and Recovery”.
  
    To make a new slave without taking down the master or an existing
    slave, use the MySQL
    Enterprise Backup product. If you can shut down the master or
    an existing slave, take a cold
    backup of the InnoDB tablespaces and log
    files and use that to set up a slave.
  
Transactions that fail on the master do not affect replication at all. MySQL replication is based on the binary log where MySQL writes SQL statements that modify data. A transaction that fails (for example, because of a foreign key violation, or because it is rolled back) is not written to the binary log, so it is not sent to slaves. See Section 13.3.1, “START TRANSACTION, COMMIT, and ROLLBACK Syntax”.
Replication and CASCADE. 
      Cascading actions for InnoDB tables on the
      master are replicated on the slave only if
      the tables sharing the foreign key relation use
      InnoDB on both the master and slave. This is
      true whether you are using statement-based or row-based
      replication. Suppose that you have started replication, and then
      create two tables on the master using the following
      CREATE TABLE statements:
    
CREATE TABLE fc1 (
    i INT PRIMARY KEY,
    j INT
) ENGINE = InnoDB;
CREATE TABLE fc2 (
    m INT PRIMARY KEY,
    n INT,
    FOREIGN KEY ni (n) REFERENCES fc1 (i)
        ON DELETE CASCADE
) ENGINE = InnoDB;
    Suppose that the slave does not have InnoDB
    support enabled. If this is the case, then the tables on the slave
    are created, but they use the MyISAM storage
    engine, and the FOREIGN KEY option is ignored.
    Now we insert some rows into the tables on the master:
  
master> INSERT INTO fc1 VALUES (1, 1), (2, 2);
Query OK, 2 rows affected (0.09 sec)
Records: 2  Duplicates: 0  Warnings: 0

master> INSERT INTO fc2 VALUES (1, 1), (2, 2), (3, 1);
Query OK, 3 rows affected (0.19 sec)
Records: 3  Duplicates: 0  Warnings: 0
    At this point, on both the master and the slave, table
    fc1 contains 2 rows, and table
    fc2 contains 3 rows, as shown here:
  
master> SELECT * FROM fc1;
+---+------+
| i | j    |
+---+------+
| 1 |    1 |
| 2 |    2 |
+---+------+
2 rows in set (0.00 sec)

master> SELECT * FROM fc2;
+---+------+
| m | n    |
+---+------+
| 1 |    1 |
| 2 |    2 |
| 3 |    1 |
+---+------+
3 rows in set (0.00 sec)

slave> SELECT * FROM fc1;
+---+------+
| i | j    |
+---+------+
| 1 |    1 |
| 2 |    2 |
+---+------+
2 rows in set (0.00 sec)

slave> SELECT * FROM fc2;
+---+------+
| m | n    |
+---+------+
| 1 |    1 |
| 2 |    2 |
| 3 |    1 |
+---+------+
3 rows in set (0.00 sec)
    Now suppose that you perform the following
    DELETE statement on the master:
  
master> DELETE FROM fc1 WHERE i=1;
Query OK, 1 row affected (0.09 sec)
    Due to the cascade, table fc2 on the master now
    contains only 1 row:
  
master> SELECT * FROM fc2;
+---+---+
| m | n |
+---+---+
| 2 | 2 |
+---+---+
1 row in set (0.00 sec)
    However, the cascade does not propagate on the slave because on the
    slave the DELETE for
    fc1 deletes no rows from fc2.
    The slave's copy of fc2 still contains all of the
    rows that were originally inserted:
  
slave> SELECT * FROM fc2;
+---+---+
| m | n |
+---+---+
| 1 | 1 |
| 3 | 1 |
| 2 | 2 |
+---+---+
3 rows in set (0.00 sec)
    This difference is due to the fact that the cascading deletes are
    handled internally by the InnoDB storage engine,
    which means that none of the changes are logged.
    The memcached daemon is frequently used as an
    in-memory caching layer in front of a MySQL database server. With
    the introduction of the InnoDB
    memcached plugin, MySQL now allows direct access
    to InnoDB tables using the
    memcached protocol and client libraries.
  
    The InnoDB memcached plugin
    provides an integrated memcached daemon that can
    automatically store and retrieve data from InnoDB
    tables, turning the MySQL server into a fast “key-value
    store”. Instead of formulating queries in SQL, you can
    perform simple get, set, and increment operations that avoid the
    performance overhead of SQL parsing and constructing a query
    optimization plan. You can also access the same
    InnoDB tables through SQL for convenience,
    complex queries, bulk operations, application compatibility, and
    other strengths of traditional database software.
  
    This “NoSQL-style” interface uses the
    memcached API to speed up database operations,
    letting InnoDB handle memory caching using its
    buffer pool mechanism. Data
    modified through memcached operations such as
    ADD, SET,
    INCR are stored to disk, using
    InnoDB mechanisms such as
    change buffering, the
    doublewrite buffer,
    and crash recovery. The
    combination of memcached simplicity and
    InnoDB reliability and consistency provides users
    with the best of both worlds, as explained in
    Section 14.17.1, “Benefits of the InnoDB / memcached Combination”. For architectural
    details about how the components fit together, see
    Section 14.17.2, “Architecture of InnoDB and memcached Integration”.
      This section outlines advantages of the
      memcached interface to
      InnoDB tables introduced in
      Section 14.17, “InnoDB Integration with memcached”. The combination of
      InnoDB tables and memcached
      offers advantages over using either by themselves:
          Direct access to the InnoDB storage engine
          avoids the parsing and planning overhead of SQL.
        
Running memcached in the same process space as the MySQL server avoids the network overhead of passing requests back and forth.
          Data that is written using the memcached
          protocol is transparently written to an
          InnoDB table, without going through the
          MySQL SQL layer. You can control the frequency of writes to
          achieve higher raw performance when updating non-critical
          data.
        
          Data that is requested through the
          memcached protocol is transparently queried
          from an InnoDB table, without going through
          the MySQL SQL layer.
        
          Subsequent requests for the same data are served from the
          InnoDB buffer pool. The buffer pool handles
          the in-memory caching. You can tune the performance of
          data-intensive operations using the familiar
          InnoDB configuration options.
        
Data can be unstructured or structured, depending on the type of application. You can make an all-new table for the data, or map the NoSQL-style processing to one or more existing tables.
          InnoDB can handle composing and decomposing
          multiple column values into a single
          memcached item value, reducing the amount
          of string parsing and concatenation required in your
          application. For example, you might store a string value
          2|4|6|8 in the memcached
          cache, and InnoDB splits that value based
          on a separator character, then stores the result into four
          numeric columns.
        
The transfer between memory and disk is handled automatically, simplifying application logic.
Data is stored in a MySQL database to protect against crashes, outages, and corruption.
You can still access the underlying table through SQL, for reporting, analysis, ad hoc queries, bulk loading, multi-step transactional computations, set operations such as union and intersection, and other operations well suited to the expressiveness and flexibility of SQL.
You can ensure high availability of the NoSQL data by using this feature on a master server in combination with MySQL replication.
          The integration of memcached with MySQL
          provides a painless way to make the in-memory data persistent,
          so you can use it for more significant kinds of data. You can
          put more add, incr, and
          similar write operations into your application, without
          worrying that the data could disappear at any moment. You can
          stop and start the memcached server without
          losing updates made to the cached data. To guard against
          unexpected outages, you can take advantage of
          InnoDB crash recovery, replication, and
          backup procedures.
        
          The way InnoDB does fast
          primary key lookups is
          a natural fit for memcached single-item
          queries. The direct, low-level database access path used by
          the memcached plugin is much more efficient
          for key-value lookups than equivalent SQL queries.
        
The serialization features of memcached, which can turn complex data structures, binary files, or even code blocks into storeable strings, offer a simple way to get such objects into a database.
          Because you can access the underlying data through SQL, you
          can produce reports, search or update across multiple keys,
          and call functions such as AVG() and
          MAX() on the memcached
          data. All of these operations are expensive or complicated
          with the standalone memcached.
        
          You do not need to manually load data into
          memcached at startup. As particular keys
          are requested by an application, the values are retrieved from
          the database automatically, and cached in memory using the
          InnoDB
          buffer pool.
        
Because memcached consumes relatively little CPU, and its memory footprint is easy to control, it can run comfortably alongside a MySQL instance on the same system.
          Because data consistency is enforced by the mechanisms used
          for regular InnoDB tables, you do not have
          to worry about stale memcached data or
          fallback logic to query the database in the case of a missing
          key.
This section describes how the memcached daemon is integrated into the MySQL Server.
      When integrated with MySQL Server, memcached is
      implemented as a MySQL plugin daemon, accessing the
      InnoDB storage engine directly and bypassing
      the SQL layer:
 
Features provided in the current release:
memcached as a daemon plugin of mysqld: both mysqld and memcached run in the same process space, with very low latency access to data.
          Direct access to InnoDB tables, bypassing
          the SQL parser, the optimizer, and even the Handler API layer.
        
          Standard memcached protocols, both the
          text-based protocol and the binary protocol. The
          InnoDB + memcached
          combination passes all 55 compatibility tests from the
          memcapable command.
        
Multi-column support: you can map multiple columns into the “value” part of the key/value store, with column values delimited by a user-specified separator character.
          By default, you use the memcached protocol
          to read and write data directly to InnoDB,
          and let MySQL manage the in-memory caching through the
          InnoDB
          buffer pool. The
          default settings represent the combination of high reliability
          with the fewest surprises for database applications. For
          example, the default settings avoid uncommitted data on the
          database side, or stale data returned for
          memcached get requests.
        
          Advanced users can configure the system as a traditional
          memcached server, with all data cached only
          in the memcached default engine (memory),
          or use a combination of the
          “memcached default engine”
          (memory caching) and the InnoDB
          memcached engine (InnoDB
          as backend persistent storage).
          You can control how often data is passed back and forth
          between InnoDB and memcached operations
          through the
          innodb_api_bk_commit_interval,
          daemon_memcached_r_batch_size,
          and
          daemon_memcached_w_batch_size
          configuration options. Both the batch size options default to
          a value of 1 for maximum reliability.
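          For example, once the daemon_memcached plugin
          is installed, you can inspect the current batch and commit
          settings (a sketch; the daemon_memcached
          variables are available only while the plugin is loaded):
        
SHOW GLOBAL VARIABLES LIKE 'daemon_memcached_%_batch_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_api_bk_commit_interval';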
        
          You can specify memcached configuration
          options through the MySQL configuration variable
          daemon_memcached_option. For
          example, you might change the port that
          memcached listens on, reduce the maximum
          number of simultaneous connections, change the maximum memory
          size for a key/value pair, or enable debugging messages for
          the error log.
        
          A configuration option
          innodb_api_trx_level lets you
          control the transaction
          isolation level on
          queries processed by the memcached
          interface. Although memcached has no
          concept of
          transactions, you
          might use this property to control how soon
          memcached sees changes caused by SQL
          statements, if you issue DML
          statements on the same table that memcached
          interfaces with. By default, it is set to
          READ UNCOMMITTED.
        
          Another configuration option is
          innodb_api_enable_mdl.
          “MDL” stands for “metadata locking”.
          This option locks the table at the MySQL level, so that
          the mapped table cannot be dropped or altered by
          DDL through the SQL interface.
          Without the lock, the table can be dropped at the MySQL layer,
          but is kept in InnoDB storage until
          memcached or any other user stops using it.
      MySQL users might already be familiar with using
      memcached along with MySQL, as described in
      Section 16.6, “Using MySQL with memcached”. This section describes the
      similarities and differences between the information in that
      section and the behavior when using the InnoDB
      integration features of the memcached daemon that is built into
      MySQL. The link at the start of each item goes to the associated
      information about the traditional memcached
      server.
          Installation:
          Because the memcached library comes with
          the MySQL server, installation and setup are straightforward.
          You run a SQL script to set up a table for
          memcached to use, issue a one-time
          install plugin statement to enable
          memcached, and add to the MySQL
          configuration file or startup script any desired
          memcached options, for example to use a
          different port. You might still install the regular
          memcached distribution to get the
          additional utilities such as memcp,
          memcat, and memcapable.
        
          Deployment:
          It is typical to run large numbers of low-capacity
          memcached servers. Because the
          InnoDB + memcached
          combination has a 1:1 ratio between database and
          memcached servers, the typical deployment
          involves a smaller number of moderate or high-powered servers,
          machines that were already running MySQL. The benefit of this
          server configuration is more for improving the efficiency of
          each individual database server than in tapping into unused
          memory or distributing lookups across large numbers of
          servers. In the default configuration, very little memory is
          used for memcached, and the in-memory
          lookups are served from the InnoDB
          buffer pool, which
          automatically caches the most recently used and most
          frequently used data. As in a traditional MySQL server
          instance, keep the value of the
          innodb_buffer_pool_size
          configuration option as high as practical (without causing
          paging at the OS level), so that as much of the workload as
          possible is done in memory.
        
          Expiry: By
          default (that is, with the caching policy
          innodb_only), the latest data from the
          InnoDB table is always returned, so the
          expiry options have no practical effect. If you change the
          caching policy to caching or
          cache-only, the expiry options work as
          usual, but requested data might be stale if it was updated in
          the underlying table before it expires from the memory cache.
        
          Namespaces:
          memcached is like a single giant directory,
          where, to keep files from conflicting with each other, you might
          give them elaborate names with prefixes and suffixes. The
          integrated InnoDB /
          memcached server lets you use these same
          naming conventions for keys, with one addition. Key names of
          the form
          @@table_id.key
          are decoded to reference a specific table, using mapping
          data from the innodb_memcache.containers
          table. The key is looked up in or
          written to the specified table.
        
          The @@ notation only works for individual
          calls to the get, add,
          and set functions, not the others such as
          incr or delete. To
          designate the default table for all subsequent
          memcached operations within a session,
          perform a get request using the
          @@ notation and a table ID, but without the
          key portion. For example:
        
get @@table_x
          Subsequent get, set,
          incr, delete and other
          operations use the table designated by
          table_x in the
          innodb_memcache.containers.name column.
        
          Hashing and
          distribution: The default configuration, with the
          caching policy innodb_only, is suitable for
          the traditional deployment configuration where all data is
          available on all servers, such as a set of replication slave
          servers.
        
          If you physically divide the data, as in a sharded
          configuration, you can split the data across several machines
          running the InnoDB and
          memcached combined server, and use the
          traditional memcached hashing mechanism to
          route requests to a particular machine. On the MySQL side,
          typically you would let all the data be inserted by
          add requests to
          memcached so the appropriate values were
          stored in the database on the appropriate server.
        
Best practices for these types of deployments are still being codified.
          Memory usage:
          By default (with the caching policy
          innodb_only), the
          memcached protocol passes information back
          and forth with InnoDB tables, and the
          fixed-size InnoDB buffer pool handles the
          in-memory lookups rather than memcached
          memory usage growing and shrinking. Relatively little memory
          is used on the memcached side.
        
          If you switch the caching policy to caching
          or cache-only, the normal rules of
          memcached memory usage apply. Memory for
          the memcached data values is allocated in
          terms of “slabs”. You can control the slab size
          and maximum memory used for memcached.
        
          Either way, you can monitor and troubleshoot the integrated
          memcached daemon using the familiar
          statistics system,
          accessed through the standard protocol, for example over a
          telnet session. Because extra utilities are
          not included with the integrated daemon, to use the
          memcached-tool
          script, install a full memcached
          distribution.
        
Thread usage: MySQL threads and memcached threads must co-exist on the same server, so any limits imposed on threads by the operating system apply to this total number.
          Log usage:
          Because the memcached daemon is run
          alongside the MySQL server and writes to
          stderr, the -v,
          -vv, and -vvv options
          for logging write their output to the MySQL
          error log.
        
          memcached
          operations: All the familiar operations such as
          get, set,
          add, and delete are
          available. Serialization (that is, the exact string format to
          represent complex data structures) depends on the language
          interface.
        
          Using
          memcached as a MySQL front end: That
          is what the InnoDB integration with
          memcached is all about. Putting these
          components together improves the performance of your
          application. Making InnoDB handle data
          transfers between memory and disk simplifies the logic of your
          application.
        
          Utilities:
          The MySQL server includes the libmemcached
          library but not the additional command-line utilities. To get
          the commands such as memcp,
          memcat, and memcapable
          commands, install a full memcached
          distribution. When memrm and
          memflush remove items from the cache, they
          are also removed from the underlying InnoDB
          table.
        
          Programming
          interfaces: You can access the MySQL server through the
          InnoDB and memcached
          combination using the same language as always:
          C and
          C++,
          Java,
          Perl,
          Python,
          PHP, and
          Ruby.
          Specify the server hostname and port as with any other
          memcached server. By default, the
          integrated memcached server listens on the
          same port as usual, 11211. You can use both
          the text and
          binary protocols. You can customize the
          behavior
          of the memcached functions at runtime.
          Serialization (that is, the exact string format to represent
          complex data structures) depends on the language interface.
        
          Frequently asked
          questions: MySQL has had an extensive
          memcached FAQ for several releases. In
          MySQL 5.6, the answers are largely the same,
          except that using InnoDB tables as a
          storage medium for memcached data means
          that you can use this combination for more write-intensive
          applications than before, rather than as a read-only cache.
For a more detailed look at the workings of this feature, see Section 14.17.7, “Internals of the InnoDB memcached Plugin”.
      This section describes the steps to activate the
      InnoDB / memcached
      integration on a MySQL Server. Because the
      memcached daemon is tightly integrated with the
      MySQL Server to avoid network traffic and minimize latency, you
      perform this process on each MySQL instance that uses this
      feature.
Before setting up the memcached interface for any data, consult Section 14.17.4, “Security Considerations for the InnoDB memcached Plugin” to understand the security procedures needed to prevent unauthorized access.
Before you set up the plugin and the internal tables, verify that your server has the required prerequisite software.
Currently, the memcached Daemon Plugin is only supported on Linux, Solaris, and OS X platforms.
        You must have libevent installed, since it is
        required by memcached. The way to get this
        library is different if you use the MySQL installer or build
        from source:
            If you installed using the MySQL installer, the
            libevent library is not included in the
            installation. Use the installation method for your operating
            system to install libevent 1.4.3 or
            later: for example, depending on the operating system, you
            might use the command apt-get,
            yum, or port install.
            For example, on Ubuntu Linux:
          
sudo apt-get install libevent-dev
            If you install from a source code release,
            libevent 1.4.3 is bundled with the
            package and is located at the top level of the MySQL source
            code directory. If you use this bundled version of
            libevent, no action is required. If you
            want to use a local system version of
            libevent you must build MySQL with the
            -DWITH_LIBEVENT build option
            set to system or yes.
        When you build MySQL server, you must build with
        -DWITH_INNODB_MEMCACHED=ON. This
        build option generates two shared libraries in the MySQL plugin
        directory (plugin_dir) that are
        required to run InnoDB
        memcached:
            libmemcached.so: the
            memcached daemon plugin to MySQL.
          
            innodb_engine.so: an
            InnoDB API plugin to
            memcached.
        To configure the memcached plugin so it can
        interact with InnoDB tables, run the
        innodb_memcached_config.sql configuration
        script to install the necessary tables used behind the scenes:
      
mysql> source MYSQL_HOME/share/innodb_memcached_config.sql
This is a one-time operation. The tables remain in place if you later disable and re-enable the memcached support. For information about the layout and purpose of these tables, see Section 14.17.7, “Internals of the InnoDB memcached Plugin”.
        To activate the daemon plugin, use the install
        plugin statement, just as when installing any other
        MySQL plugin:
      
mysql> install plugin daemon_memcached soname "libmemcached.so";
Once the plugin is installed this way, it is automatically activated each time the MySQL server is booted or restarted.
When making major changes to the plugin configuration, you might need to turn off the plugin. To do so, issue the following statement:
mysql> uninstall plugin daemon_memcached;
        To re-enable it, issue the preceding install
        plugin statement again. All the previous configuration
        settings, internal tables, and data are preserved when the
        plugin is restarted this way.
      
For additional information about enabling and disabling plugins, see Section 5.1.8.1, “Installing and Uninstalling Plugins”.
        If you have any memcached specific
        configuration parameters, specify them on the
        mysqld command line or enter them in the
        MySQL configuration file, encoded in the argument to the
        daemon_memcached_option MySQL
        configuration option. The memcached
        configuration options take effect when the plugin is installed,
        which you do each time the MySQL server is started.
      
        For example, to make memcached listen on port
        11222 instead of the default port 11211, add
        -p11222 to the MySQL configuration option
        daemon_memcached_option:
      
mysqld .... --daemon_memcached_option="-p11222"
        You can add other memcached command line options to the
        daemon_memcached_option string.
        The other configuration options are:
            daemon_memcached_engine_lib_name
            (default innodb_engine.so)
          
            daemon_memcached_engine_lib_path
            (default NULL, representing the plugin directory).
          
            daemon_memcached_r_batch_size,
            batch commit size for read operations
            (get). It specifies after how many
            memcached read operations the system
            automatically does a
            commit. By default, this
            is set to 1 so that every get request can
            access the very latest committed data in the
            InnoDB table, whether the data was
            updated through memcached or by SQL. When
            its value is greater than 1, the counter for read operations
            is incremented once for every get call.
            The flush_all call resets both the read
            and write counters.
          
            daemon_memcached_w_batch_size,
            batch commit size for any write operations
            (set, replace,
            append, prepend,
            incr, decr, and so on).
            By default, this is set to 1, so that no uncommitted data is
            lost in case of an outage, and any SQL queries on the
            underlying table can access the very latest data. When its
            value is greater than 1, the counter for write operations is
            incremented once for every add,
            set, incr,
            decr, and delete call.
            The flush_all call resets both the read
            and write counters.
By default, you do not need to change anything with the first two configuration options. Those options allow you to load any other storage engine for memcached (such as the NDB memcached engine).
Specify these configuration parameters in your MySQL configuration file or on the mysqld startup command line. They take effect when you load the memcached plugin.
Now you have everything set up. You can directly interact with InnoDB tables through the memcached interface. To verify that the feature is working properly, see Section 14.17.3.3, “Verifying the InnoDB and memcached Setup”.
Now that everything is set up, you can experiment with the InnoDB and memcached combination:
Here is an example using the Unix, Linux, or OS X command shell:
# Point memcached-related commands at the memcached attached to the mysqld process.
export MEMCACHED_SERVERS=127.0.0.1:11211

# Store the contents of a modestly sized text file in memcached, with the data passed
# to MySQL and stored in a table. The key is the basename of the file, 'mime.types'.
memcp /etc/apache2/mime.types

# Retrieve the data we just stored, from the memory cache.
memcat mime.types
Here is an example using telnet to send memcached commands and receive results through the ASCII protocol:
telnet 127.0.0.1 11211
set a11 10 0 9
123456789
STORED
get a11
VALUE a11 0 9
123456789
END
quit
To prove that all the same data has been stored in MySQL, connect to the MySQL server and issue:
mysql> select * from test.demo_test;
Now, shut down the MySQL server, which also shuts off the integrated memcached server. Further attempts to access the memcached data now fail with a connection error. Normally, the memcached data would disappear at this point, and you would write application logic to load the data back into memory when memcached was restarted. But the MySQL / memcached integration automates this process:
Restart the MySQL server.
            Run the install plugin statement to start
            the InnoDB memcached plugin again.
          
            Now any memcat commands or
            get operations once again return the
            key/value pairs you stored in the earlier
            memcached session. When a key is
            requested and the associated value is not already in the
            memory cache, it is automatically queried from the MySQL
            table, by default test.demo_test.
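            As a quick check from an application, a minimal sketch using the
            python-memcache client (the same client used in the longer example
            later in this section; the key name follows the
            memcp example above):

import memcache

memc = memcache.Client(['127.0.0.1:11211'])

# After the server restart and plugin reinstall, previously stored keys are
# served again, repopulated automatically from the InnoDB table.
print memc.get('mime.types')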
        Consult this section before deploying the
        InnoDB memcached plugin on
        any production servers, or even test servers if the MySQL
        instance contains any sensitive information.
      Because memcached does not use an
      authentication mechanism by default, and the optional SASL
      authentication is not as strong as traditional DBMS security
      measures, make sure to keep only non-sensitive data in the MySQL
      instance using the InnoDB
      memcached plugin, and wall off any servers
      using this configuration from potential intruders. Do not allow
      memcached access to such servers from the
      Internet, only from within a firewalled intranet, ideally from a
      subnet whose membership you can restrict.
SASL support gives you the capability to protect your MySQL database from unauthenticated access through memcached clients. This section explains how to enable it; the steps are almost identical to those you would follow to enable SASL for a traditional memcached server.
SASL stands for “Simple Authentication and Security Layer”, a standard for adding authentication support to connection-based protocols. memcached added SASL support starting in its 1.4.3 release.
SASL authentication is only supported with the binary protocol.
        For the InnoDB + memcached combination, the
        table that stores the memcached data must be
        registered in the innodb_memcache.containers
        system table, and memcached clients can only
        access such a registered table. Even though the DBA can add access
        restrictions on a table that is registered with the
        memcached plugin, they have no control over
        who can access it through memcached
        applications. This is why we provide a means (through SASL) to
        control who can access InnoDB tables
        associated with the memcached plugin.
      
        The following section shows how to build, enable, and test an
        SASL-enabled InnoDB
        memcached plugin.
        By default, SASL-enabled InnoDB
        memcached is not included in the release
        package, since it relies on building
        memcached with SASL libraries. To enable this
        feature, download the MySQL source and rebuild the
        InnoDB memcached plugin
        after downloading the SASL libraries:
First, get the SASL development and utility libraries. For example, on Ubuntu, you can get these libraries through:
sudo apt-get -f install libsasl2-2 sasl2-bin libsasl2-2 libsasl2-dev libsasl2-modules
            Then build the InnoDB
            memcached plugin (shared libraries) with
            SASL capability, by adding
            ENABLE_MEMCACHED_SASL=1 to the
            cmake options. In addition,
            memcached provides a simple cleartext
            password support, which is easier to use for testing. To
            enable this, set the option
            ENABLE_MEMCACHED_SASL_PWDB=1.
          
Overall, you add the following three options to the cmake command:
cmake ... -DWITH_INNODB_MEMCACHED=1 -DENABLE_MEMCACHED_SASL=1 -DENABLE_MEMCACHED_SASL_PWDB=1
            The third step is to install the InnoDB
            memcached plugin as before, as explained
            in Section 14.17.3, “Getting Started with InnoDB Memcached Plugin”.
          
As previously mentioned, memcached provides a simple cleartext password support through SASL, which will be used for this demo.
                Create a user named testname and its
                password as testpasswd in a file:
              
echo "testname:testpasswd:::::::" >/home/jy/memcached-sasl-db
                Let memcached know about it by
                setting the environment variable
                MEMCACHED_SASL_PWDB:
              
export MEMCACHED_SASL_PWDB=/home/jy/memcached-sasl-db
Also tell memcached that it is a cleartext password:
echo "mech_list: plain" > /home/jy/work2/msasl/clients/memcached.conf export SASL_CONF_PATH=/home/jy/work2/msasl/clients
            Then restart the MySQL server, and add a
            daemon_memcached_option
            option -S to enable SASL:
          
mysqld ... --daemon_memcached_option="-S"
Now the setup is complete. To test it, you need an SASL-enabled client, such as an SASL-enabled build of libmemcached.
memcp --servers=localhost:11211 --binary --username=testname --password=testpasswd myfile.txt
memcat --servers=localhost:11211 --binary --username=testname --password=testpasswd myfile.txt
            Without the appropriate user name and password, the above
            operations are rejected with the error message
            memcache error AUTHENTICATION FAILURE.
            Otherwise, the operations succeed. You can also examine the
            cleartext password set in the
            memcached-sasl-db file to verify it.
There are other methods to test SASL authentication with memcached. But the one described above is the most straightforward.
      Typically, writing an application for the
      InnoDB memcached interface
      involves some degree of rewriting or adapting existing code that
      uses MySQL or the memcached API:
Instead of many memcached servers running on low-powered machines, you have the same number of memcached servers as MySQL servers, running on relatively high-powered machines with substantial disk storage and memory. You might reuse some existing code that works with the memcached API, but some adaptation is likely needed due to the different server configuration.
          The data stored through this interface all goes into
          VARCHAR,
          TEXT, or
          BLOB columns, and must be
          converted to do numeric operations. You can do the conversion
          on the application side, or by using the
          CAST() function in queries.
        
Coming from a database background, you might be used to general-purpose SQL tables with many columns. The tables accessed by the memcached code likely have only a few or even just a single column holding data values.
You might adapt parts of your application that do single-row queries, inserts, updates, or deletes, to squeeze more performance out of critical sections of code. Both queries (read) and DML (write) operations can be substantially faster when performed through the memcached interface. The speedup for writes is typically greater than the speedup for reads, so you might focus on adapting the code that performs logging or records interactive choices on a web site.
The following sections explore these aspects in more detail.
Consider these aspects of memcached applications when adapting an existing MySQL schema or application to use the memcached interface:
            memcached keys cannot contain spaces or
            newlines, because those characters are used as separators in
            the ASCII protocol. If you are using lookup values that
            contain spaces, transform or hash them into values without
            spaces before using them as keys in calls to
            add(), set(),
            get() and so on. Although theoretically
            those characters are allowed in keys in programs that use
            the binary protocol, you should always restrict the
            characters used in keys to ensure compatibility with a broad
            range of clients.
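            For example, a minimal sketch in Python, using the same
            python-memcache client as the longer example later in this
            section; the underscore convention is just one possible
            transformation, not a requirement:

import memcache

memc = memcache.Client(['127.0.0.1:11211'])

def make_key(lookup_value):
  # Strip the characters that the ASCII protocol treats as separators.
  return lookup_value.replace(" ", "_").replace("\r", "").replace("\n", "")

memc.set(make_key("United Arab Emirates"), "8264070")
print memc.get(make_key("United Arab Emirates"))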
          
            If you have a short numeric
            primary key column
            in an InnoDB table, you can use that as
            the unique lookup key for memcached by
            converting the integer to a string value. If the
            memcached server is being used for more
            than one application, or with more than one
            InnoDB table, consider modifying the name
            to make sure it is unique. For example, you might prepend
            the table name, or the database name and the table name,
            before the numeric value.
              As of MySQL 5.6.14, the InnoDB
              memcached plugin supports inserts and
              reads on mapped InnoDB tables that have
              an INTEGER defined as the primary key.
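            For example, a sketch of one possible naming scheme for such
            keys; the separator and ordering are arbitrary choices, not
            anything the plugin requires:

def make_key(db_name, table_name, numeric_id):
  # Prefix the database and table name so keys from different tables stay unique.
  return "%s:%s:%d" % (db_name, table_name, numeric_id)

key = make_key("test", "demo_test", 42)   # "test:demo_test:42"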
You cannot use a partitioned table for data queried or stored through the memcached interface.
            The memcached protocol passes numeric
            values around as strings. To store numeric values in the
            underlying InnoDB table, for example to
            implement counters that can be used in SQL functions such as
            SUM() or AVG():
                Use VARCHAR columns with
                enough characters to hold all the digits of the largest
                expected number (and additional characters if
                appropriate for the negative sign, decimal point, or
                both).
              
                In any query that performs arithmetic using the column
                values, use the CAST() function to
                convert from string to integer or other numeric type.
                For example:
              
-- Alphabetic entries are returned as zero.
select cast(c2 as unsigned integer) from demo_test;

-- Since there could be numeric values of 0, can't disqualify them.
-- Test the string values to find the ones that are integers, and average only those.
select avg(cast(c2 as unsigned integer)) from demo_test
  where c2 between '0' and '9999999999';

-- Views let you hide the complexity of queries. The results are already converted;
-- no need to repeat conversion functions and WHERE clauses each time.
create view numbers as
  select c1 `key`, cast(c2 as unsigned integer) val
  from demo_test where c2 between '0' and '9999999999';
select sum(val) from numbers;
                Alphabetic values in the result set are turned into 0 by
                the call to CAST(). When using
                functions such as AVG() that depend
                on the number of rows in the result set, include
                WHERE clauses to filter out any
                non-numeric values.
            If the InnoDB column you use as a key can
            be longer than 250 bytes, hash it to a value that is less
            than 250 bytes.
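            For example, a minimal sketch that hashes over-long values with
            SHA-1 from the standard hashlib module; any stable hashing
            scheme that produces short, separator-free strings would work
            equally well:

import hashlib

def hashed_key(long_value):
  # 40 hexadecimal characters, well under the 250-byte key limit.
  return hashlib.sha1(long_value).hexdigest()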
          
            To use an existing table with the
            memcached interface, define an entry for
            it in the innodb_memcache.containers
            table. To make that table the default for all requests
            relayed through memcached, specify the
            value default in the
            name column, then restart the MySQL
            server to make that change take effect. If you are using
            multiple tables for different classes of
            memcached data, set up multiple entries
            in the innodb_memcache.containers table
            with name values of your choosing, then
            issue a memcached request of the form
            get @@name or
            set @@name
            within the application to switch the table used for
            subsequent requests through the memcached
            API.
            For an example of using a table other than the predefined
            test.demo_test table, see
            Example 14.23, “Specifying the Table and Column Mapping for an InnoDB + memcached
          Application”. For the
            required layout and meaning of the columns in such a table,
            see Section 14.17.7, “Internals of the InnoDB memcached Plugin”.
          
            To use multiple MySQL column values with
            memcached key/value pairs, in the
            innodb_memcache.containers entry
            associated with the MySQL table, specify in the
            value_columns field several column names
            separated by comma, semicolon, space, or pipe characters;
            for example, col1,col2,col3 or
            col1|col2|col3.
          
            Concatenate the column values into a single string using the
            pipe character as a separator, before passing that string to
            memcached add or
            set calls. The string is unpacked
            automatically into the various columns. Each
            get call returns a single string with the
            column values, also delimited by the pipe separator
            character. You unpack those values using the appropriate
            syntax for your application language.
Example 14.23 Specifying the Table and Column Mapping for an InnoDB + memcached Application
          Here is an example showing how to use your own table for a
          MySQL application going through the InnoDB
          memcached plugin for data manipulation.
        
          First, we set up a table to hold some country data: the
          population, area in metric units, and 'R'
          or 'L' indicating if people drive on the
          right or on the left.
        
use test;

CREATE TABLE `multicol` (
  `country` varchar(128) NOT NULL DEFAULT '',
  `population` varchar(10) DEFAULT NULL,
  `area_sq_km` varchar(9) DEFAULT NULL,
  `drive_side` varchar(1) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  `c4` bigint(20) unsigned DEFAULT NULL,
  `c5` int(11) DEFAULT NULL,
  PRIMARY KEY (`country`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
          Now we make a descriptor for this table so that the
          InnoDB memcached plugin
          knows how to access it:
              The sample entry in the CONTAINERS
              table has a name column
              'aaa'; we set up another identifier
              'bbb'. If we made a single master table
              for all memcached applications to use,
              we would make the ID 'default' and skip
              the @@ requests to switch tables.
            
              We specify the test.multicol table. The
              schema name is stored in one column and the table name is
              stored in another column.
            
              The key column will be our unique
              country value. That column was
              specified as the primary key when we created the table
              above, so we also specify the index name
              'PRIMARY' here.
            
Rather than a single column to hold a composite data value, we will divide the data among three table columns, so we specify a comma-separated list of those columns that will be used when storing or retrieving values.
              And for the flags, expire, and CAS values, we specify
              corresponding columns based on the settings from the
              sample table test.demo_test. These values
              are typically not significant in applications using the
              InnoDB memcached
              plugin, because MySQL keeps the data synchronized and
              there is no need to worry about data expiring or being
              stale.
insert into innodb_memcache.containers
  (name,db_schema,db_table,key_columns,value_columns,flags,cas_column,
  expire_time_column,unique_idx_name_on_key)
values
  ('bbb','test','multicol','country','population,area_sq_km,drive_side',
  'c3','c4','c5','PRIMARY');
commit;
Here is a sample Python program showing how we would access this table from a program:
No database authorization is needed, since all data manipulation is done through the memcached interface. All we need to know is the port on which the memcached daemon is listening on the local system.
We load sample values for a few arbitrary countries. (Area and population figures from Wikipedia.)
              To make the program use the multicol
              table, we call the switch_table()
              function that does a dummy GET or
              SET request using @@
              notation. The name in the request is
              bbb, which is the value we stored in
              innodb_memcache.containers.name. (In a
              real application, we would use a more descriptive name.
              This example just illustrates that you specify a table
              identifier, not the table name, with the GET
              @@... request.)
            
              The utility functions to insert and query the data
              demonstrate how we might turn a Python data structure into
              pipe-separated values for sending to MySQL with
              ADD or SET requests,
              and unpack the pipe-separated values returned by
              GET requests. This extra processing is
              only required when mapping the single
              memcached value to multiple MySQL table
              columns.
import sys, os
import memcache
def connect_to_memcached():
  memc = memcache.Client(['127.0.0.1:11211'], debug=0);
  print "Connected to memcached."
  return memc
def banner(message):
  print
  print "=" * len(message)
  print message
  print "=" * len(message)
country_data = [
("Canada","34820000","9984670","R"),
("USA","314242000","9826675","R"),
("Ireland","6399152","84421","L"),
("UK","62262000","243610","L"),
("Mexico","113910608","1972550","R"),
("Denmark","5543453","43094","R"),
("Norway","5002942","385252","R"),
("UAE","8264070","83600","R"),
("India","1210193422","3287263","L"),
("China","1347350000","9640821","R"),
]
def switch_table(memc,table):
  key = "@@" + table
  print "Switching default table to '" + table + "' by issuing GET for '" + key + "'."
  result = memc.get(key)
def insert_country_data(memc):
  banner("Inserting initial data via memcached interface")
  for item in country_data:
    country = item[0]
    population = item[1]
    area = item[2]
    drive_side = item[3]
    key = country
    value = "|".join([population,area,drive_side])
    print "Key = " + key
    print "Value = " + value
    if memc.add(key,value):
      print "Added new key, value pair."
    else:
      print "Updating value for existing key."
      memc.set(key,value)
def query_country_data(memc):
  banner("Retrieving data for all keys (country names)")
  for item in country_data:
    key = item[0]
    result = memc.get(key)
    print "Here is the result retrieved from the database for key " + key + ":"
    print result
    (m_population, m_area, m_drive_side) = result.split("|")
    print "Unpacked population value: " + m_population
    print "Unpacked area value      : " + m_area
    print "Unpacked drive side value: " + m_drive_side
if __name__ == '__main__':
  memc = connect_to_memcached()
  switch_table(memc,"bbb")
  insert_country_data(memc)
  query_country_data(memc)
  sys.exit(0)
Here are some SQL queries to illustrate the state of the MySQL data after the script is run, and show how you could access the same data directly through SQL, or from an application written in any language using the appropriate MySQL Connector or API.
          The table descriptor 'bbb' is in place,
          allowing us to switch to the multicol table
          by issuing a memcached request GET
          @@bbb:
        
mysql: use innodb_memcache;
Database changed
mysql: select * from containers;
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns                    | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2                               | c3    | c4         | c5                 | PRIMARY                |
| bbb  | test      | multicol  | country     | population,area_sq_km,drive_side | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
2 rows in set (0.01 sec)
          After running the script, the data is in the
          multicol table, available for traditional
          MySQL queries or
          DML statements:
        
mysql: use test;
Database changed
mysql: select * from multicol;
+---------+------------+------------+------------+------+------+------+
| country | population | area_sq_km | drive_side | c3   | c4   | c5   |
+---------+------------+------------+------------+------+------+------+
| Canada  | 34820000   | 9984670    | R          |    0 |   11 |    0 |
| China   | 1347350000 | 9640821    | R          |    0 |   20 |    0 |
| Denmark | 5543453    | 43094      | R          |    0 |   16 |    0 |
| India   | 1210193422 | 3287263    | L          |    0 |   19 |    0 |
| Ireland | 6399152    | 84421      | L          |    0 |   13 |    0 |
| Mexico  | 113910608  | 1972550    | R          |    0 |   15 |    0 |
| Norway  | 5002942    | 385252     | R          |    0 |   17 |    0 |
| UAE     | 8264070    | 83600      | R          |    0 |   18 |    0 |
| UK      | 62262000   | 243610     | L          |    0 |   14 |    0 |
| USA     | 314242000  | 9826675    | R          |    0 |   12 |    0 |
+---------+------------+------------+------------+------+------+------+
10 rows in set (0.00 sec)

mysql: desc multicol;
+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| country    | varchar(128)        | NO   | PRI |         |       |
| population | varchar(10)         | YES  |     | NULL    |       |
| area_sq_km | varchar(9)          | YES  |     | NULL    |       |
| drive_side | varchar(1)          | YES  |     | NULL    |       |
| c3         | int(11)             | YES  |     | NULL    |       |
| c4         | bigint(20) unsigned | YES  |     | NULL    |       |
| c5         | int(11)             | YES  |     | NULL    |       |
+------------+---------------------+------+-----+---------+-------+
7 rows in set (0.01 sec)
          Allow sufficient size to hold all necessary digits, decimal
          points, sign characters, leading zeros, and so on when
          defining the length for columns that will be treated as
          numbers. Too-long values in a string column such as a
          VARCHAR are truncated by removing some
          characters, which might produce a nonsensical numeric value.
        
          We can produce reports through SQL queries, doing calculations
          and tests across any columns, not just the
          country key column. (Because these examples
          use data from only a few countries, the numbers are for
          illustration purposes only.) Here, we find the average
          population of countries where people drive on the right, and
          the average size of countries whose names start with
          “U”:
        
mysql: select avg(population) from multicol where drive_side = 'R';
+-------------------+
| avg(population)   |
+-------------------+
| 261304724.7142857 |
+-------------------+
1 row in set (0.00 sec)

mysql: select sum(area_sq_km) from multicol where country like 'U%';
+-----------------+
| sum(area_sq_km) |
+-----------------+
|        10153885 |
+-----------------+
1 row in set (0.00 sec)
          Because the population and
          area_sq_km columns store character data
          rather than strongly typed numeric data, functions such as
          avg() and sum() work by
          converting each value to a number first. This approach
          does not work for operators such as
          < or >: for
          example, when comparing character-based values, 9
          > 1000, which is not what you expect from a clause
          such as ORDER BY population DESC. For the
          most accurate type treatment, perform queries against views
          that cast numeric columns to the appropriate types. This
          technique lets you issue very simple SELECT
          * queries from your database applications, while
          ensuring that all casting, filtering, and ordering is correct.
          Here, we make a view that can be queried to find the top 3
          countries in descending order of population, with the results
          always reflecting the latest data from the
          multicol table, and with the population and
          area figures always treated as numbers:
        
mysql: create view populous_countries as
  select
    country,
    cast(population as unsigned integer) population,
    cast(area_sq_km as unsigned integer) area_sq_km,
    drive_side from multicol
  order by cast(population as unsigned integer) desc
  limit 3;
Query OK, 0 rows affected (0.01 sec)
mysql: select * from populous_countries;
+---------+------------+------------+------------+
| country | population | area_sq_km | drive_side |
+---------+------------+------------+------------+
| China   | 1347350000 |    9640821 | R          |
| India   | 1210193422 |    3287263 | L          |
| USA     |  314242000 |    9826675 | R          |
+---------+------------+------------+------------+
3 rows in set (0.00 sec)
mysql: desc populous_countries;
+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| country    | varchar(128)        | NO   |     |         |       |
| population | bigint(10) unsigned | YES  |     | NULL    |       |
| area_sq_km | int(9) unsigned     | YES  |     | NULL    |       |
| drive_side | varchar(1)          | YES  |     | NULL    |       |
+------------+---------------------+------+-----+---------+-------+
4 rows in set (0.02 sec)
        Consider these aspects of MySQL and InnoDB
        tables when adapting an existing memcached
        application to use the MySQL integration:
            If you have key values longer than a few bytes, you might
            find it more efficient to use a numeric auto-increment
            column for the primary
            key in the InnoDB table, and
            create a unique
            secondary index
            on the column holding the memcached key
            values. This is because InnoDB performs
            best for large-scale insertions if the primary key values
            are added in sorted order (as they are with auto-increment
            values), and the primary key values are duplicated in each
            secondary index, which can take up unnecessary space when
            the primary key is a long string value.
          
            If you store several different classes of information in
            memcached, you might set up a separate
            InnoDB table for each kind of data.
            Define additional table identifiers in the
            innodb_memcache.containers table, and use
            the notation
            @@table_id.key
            to store or retrieve items from different tables. Physically
            dividing the items lets you tune the characteristics of each
            table for best space utilization, performance, and
            reliability. For example, you might enable
            compression for a
            table that holds blog posts, but not for one that holds
            thumbnail images. You might back up one table more
            frequently than another because it holds critical data. You
            might create additional
            secondary
            indexes on tables that are frequently used to
            generate reports through SQL.
            Preferably, set up a stable set of table definitions for use
            with the memcached interface and leave
            them in place permanently. Changes to the
            containers table take effect the next
            time that table is queried. The entries in that table are
            processed at startup, and are consulted whenever an
            unrecognized table ID is requested by the
            @@ notation. Thus, new entries are
            visible as soon as you try to use the associated table ID,
            but changes to existing entries require a server restart
            before they take effect.
          
            When you use the default caching policy
            innodb_only, your calls to
            add(), set(),
            incr(), and so on can succeed but still
            trigger debugging messages such as while expecting
            'STORED', got unexpected response 'NOT_STORED'.
            This is because in the innodb_only
            configuration, new and updated values are sent directly to
            the InnoDB table without being saved in
            the memory cache.
        Because using InnoDB in combination with
        memcached involves writing all data to disk,
        whether immediately or sometime later, understand that raw
        performance is expected to be somewhat lower than using
        memcached by itself. Focus your tuning goals
        for the InnoDB memcached
        plugin on achieving higher performance than equivalent SQL
        operations.
      
Benchmarks suggest that both queries and DML operations (inserts, updates, and deletes) are faster going through the memcached interface than with traditional SQL. DML operations typically see a larger speedup. Thus, the types of applications you might adapt to use the memcached interface first are those that are write-intensive. You might also use MySQL as a data store for types of write-intensive applications that formerly used some fast, lightweight mechanism where reliability was not a priority.
        The types of queries that are most suited to the simple
        GET request style are those with a single
        clause, or a set of AND conditions, in the
        WHERE clause:
      
SQL:
select col from tbl where key = 'key_value';

memcached:
GET key_value

SQL:
select col from tbl where col1 = val1 and col2 = val2 and col3 = val3;

memcached:
# Since you must always know these 3 values to look up the key,
# combine them into a unique string and use that as the key
# for all ADD, SET, and GET operations.
key_value = val1 + ":" + val2 + ":" + val3
GET key_value

SQL:
select 'key exists!' from tbl
  where exists (select col1 from tbl where key = 'key_value') limit 1;

memcached:
# Test for existence of key by asking for its value and checking if the call succeeds,
# ignoring the value itself. For existence checking, you typically only store a very
# short value such as "1".
GET key_value
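        A minimal Python sketch of the composite-key and existence-check
        patterns shown above, using the python-memcache client; the
        val1, val2, and val3 values are
        placeholders for whatever uniquely identifies a row in your own
        table:

import memcache

memc = memcache.Client(['127.0.0.1:11211'])

def composite_key(val1, val2, val3):
  # The same 3 values must be known for every ADD, SET, and GET.
  return ":".join([val1, val2, val3])

memc.set(composite_key("val1", "val2", "val3"), "1")

# Existence check: ask for the value and test whether anything came back.
exists = memc.get(composite_key("val1", "val2", "val3")) is not None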
        For best performance, deploy the InnoDB
        memcached plugin on machines that are
        configured like typical database servers: in particular, with
        the majority of system RAM devoted to the
        InnoDB
        buffer pool through the
        innodb_buffer_pool_size
        configuration option. For systems with multi-gigabyte buffer
        pools, consider raising the value of the
        innodb_buffer_pool_instances
        configuration option for maximum throughput when most operations
        involve data already cached in memory.
        InnoDB has a number of settings that let you
        choose the balance between high reliability in case of a crash,
        and the amount of I/O overhead during high write workloads. For
        example, consider setting the configuration options
        innodb_doublewrite=0 and
        innodb_flush_log_at_trx_commit=2.
        Measure the performance with different settings for the
        innodb_flush_method option. If
        the binary log is not
        turned on for the server, use the setting
        innodb_support_xa=0.
      
For other ways to reduce or tune I/O for table operations, see Section 8.5.8, “Optimizing InnoDB Disk I/O”.
        The default value of 1 for the configuration options
        daemon_memcached_r_batch_size
        and
        daemon_memcached_w_batch_size
        is intended for maximum reliability of results and safety of
        stored or updated data.
      
        Depending on the type of application, you might increase one or
        both of these settings to reduce the overhead of frequent
        commit operations. On a busy
        system, you might increase
        daemon_memcached_r_batch_size,
        knowing that changes to the data made through SQL might not
        become visible to memcached immediately (that
        is, until N more
        get operations are processed). When
        processing data where every write operation must be reliably
        stored, you would leave
        daemon_memcached_w_batch_size
        set to 1. You might increase it when processing large numbers of
        updates intended to only be used for statistical analysis, where
        it is not critical if the last N
        updates are lost in case of a crash.
      
        For example, imagine a system that monitors traffic crossing a
        busy bridge, recording approximately 100,000 vehicles each day.
        If the application simply counts different types of vehicles to
        analyze traffic patterns, it might change
        daemon_memcached_w_batch_size
        from 1 to 100, reducing
        the I/O overhead for commit operations by 99%. In case of an
        unexpected outage, only a maximum of 100 records could be lost,
        which might be an acceptable margin of error. If instead the
        application was doing automated toll collection for each car, it
        would keep
        daemon_memcached_w_batch_size
        set to 1 to ensure that every toll record was
        immediately saved to disk.
      
        Because of the way InnoDB organizes the
        memcached key values on disk, if you have a
        large number of keys to create, it can be faster to sort all the
        data items by the key value in your application and
        add them in sorted order, rather than
        creating them in arbitrary order.
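        For example, a minimal sketch of a bulk load in key order, assuming
        the default mapping where the memcached key is
        the table's primary key; the sample dictionary is arbitrary:

import memcache

memc = memcache.Client(['127.0.0.1:11211'])

items = {"delta": "4", "alpha": "1", "charlie": "3", "bravo": "2"}

# Adding in sorted key order matches the primary key order of the InnoDB table.
for key in sorted(items):
  memc.add(key, items[key])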
      
The memslap command, which is part of the regular memcached distribution but not included with the MySQL server, can be useful for benchmarking different configurations. It can also be used to generate sample key/value pairs that you can use in your own benchmarking. See Section 16.6.3.3.6, “libmemcached Command-Line Utilities” for details.
        Unlike with the traditional memcached, the
        InnoDB + memcached
        combination lets you control the durability of the
        data values produced through calls to add,
        set, incr, and so on.
        Because MySQL places a high priority on durability and
        consistency of data, by default all data written through the
        memcached interface is always stored to disk,
        and calls to get always return the most
        recent value from disk. Although this default setting does not
        give the highest possible raw performance, it is still very fast
        compared to the traditional SQL interface for
        InnoDB tables.
      
As you gain experience with this feature, you can make the decision to relax the durability settings for non-critical classes of data, at the risk of possibly losing some updated values in case of an outage, or returning data that is slightly out-of-date.
One tradeoff between durability and raw performance is how frequently new and changed data is committed. If the data is critical, you want it to be committed immediately so that it is safe in case of any crash or outage. If the data is less critical, such as counters that would be reset after a crash, or debugging or logging data where you could afford to lose a few seconds worth, you might prefer the higher raw throughput that comes with less frequent commits.
        When a memcached operation causes an insert,
        update, or delete in the underlying InnoDB
        table, that change might be committed to the underlying table
        instantly (if
        daemon_memcached_w_batch_size=1)
        or some time later (if that configuration option value is
        greater than 1). In either case, the change cannot be rolled
        back. If you increase the value of
        daemon_memcached_w_batch_size
        to avoid high I/O overhead during busy times, commits could
        become very infrequent when the workload decreases. As a safety
        measure, a background thread automatically commits changes made
        through the memcached API at regular
        intervals. The interval is controlled by the
        innodb_api_bk_commit_interval
        configuration option, and by default is 5 seconds.
      
        When a memcached operation causes an insert
        or update in the underlying InnoDB table, the
        changed data is immediately visible to other
        memcached requests because the new value
        remains in the memory cache, even if it is not committed yet on
        the MySQL side.
        When a memcached operation such as
        get or incr causes a query
        or DML operation in the underlying InnoDB
        table, you can control whether it sees the very latest data
        written to the table, only data that has been committed, or
        other variations of transaction
        isolation level. You
        control this feature through the
        innodb_api_trx_level
        configuration option. The numeric values specified with this
        option correspond to the familiar isolation level names such as
        REPEATABLE READ. See the
        description of the
        innodb_api_trx_level option for
        the full list.
      
The stricter the isolation level, the more certain you can be that the data you retrieve will not be rolled back or changed suddenly so that a subsequent query sees a different value. But that strictness comes with greater locking overhead that can cause waits. For a NoSQL-style application that does not use long-running transactions, you can typically stay with the default isolation level or switch to a less strict one.
        The innodb_api_disable_rowlock
        option can be used to disable row locks when
        InnoDB memcached performs
        DML operations. By default,
        innodb_api_disable_rowlock is set to
        OFF, which means that
        memcached requests row locks for get and set
        operations. When innodb_api_disable_rowlock
        is set to ON, memcached
        requests a table lock instead of row locks.
      
        The innodb_api_disable_rowlock option is not
        dynamic. It must be specified at startup on the
        mysqld command line or entered in the MySQL
        configuration file.
        By default, you can perform DDL
        operations such as ALTER TABLE on
        the tables being used by the InnoDB
        memcached plugin. To avoid potential
        slowdowns when these tables are being used for high-throughput
        applications, you can disable DDL operations on these tables by
        turning on the
        innodb_api_enable_mdl
        configuration option at startup. This option is less appropriate
        when you are accessing the same underlying tables through both
        the memcached interface and SQL, because it
        blocks CREATE INDEX statements on
        the tables, which could be important for configuring the system
        to run reporting queries.
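        For example, a startup configuration that uses table-level locking for memcached operations and also blocks DDL on the mapped tables might look like the following (illustrative only; omit either option if it does not apply to your workload):

[mysqld]
# Use a table lock rather than row locks for memcached DML
innodb_api_disable_rowlock=ON
# Block DDL (including CREATE INDEX) on the mapped tables
innodb_api_enable_mdl=ON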
        Table innodb_memcache.cache_policies
        specifies whether to store data written through
        memcached on disk
        (innodb_only, the default); to store the data
        in memory only, as in the traditional
        memcached (cache-only); or
        both (caching).
      
        With the caching setting, if
        memcached cannot find a key in memory, it
        searches for the value in an InnoDB table.
        Values returned from get calls under the
        caching setting could be out-of-date, if they
        were updated on disk in the InnoDB table but
        not yet expired from the memory cache.
      
        The caching policy can be set independently for
        get, set (including
        incr and decr),
        delete, and flush
        operations. For example:
            You might allow get and
            set operations to query or update a table
            and the memcached memory cache at the
            same time (through the caching setting),
            while making delete,
            flush, or both operate only on the
            in-memory copy (through the cache_only
            setting). That way, deleting or flushing an item just
            expires it from the cache, and the latest value is returned
            from the InnoDB table the next time the
            item is requested.
mysql> desc innodb_memcache.cache_policies;
+---------------+-------------------------------------------------------+------+-----+---------+-------+
| Field         | Type                                                  | Null | Key | Default | Extra |
+---------------+-------------------------------------------------------+------+-----+---------+-------+
| policy_name   | varchar(40)                                           | NO   | PRI | NULL    |       |
| get_policy    | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| set_policy    | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| delete_policy | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| flush_policy  | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
+---------------+-------------------------------------------------------+------+-----+---------+-------+
mysql> select * from innodb_memcache.cache_policies;
+--------------+-------------+-------------+---------------+--------------+
| policy_name  | get_policy  | set_policy  | delete_policy | flush_policy |
+--------------+-------------+-------------+---------------+--------------+
| cache_policy | innodb_only | innodb_only | innodb_only   | innodb_only  |
+--------------+-------------+-------------+---------------+--------------+
mysql> update innodb_memcache.cache_policies set set_policy = 'caching'
    -> where policy_name = 'cache_policy';
        The cache_policies values are only read at
        startup, and are tightly integrated with the operation of the
        memcached plugin. After changing any of the
        values in this table, uninstall the plugin and reinstall it:
      
mysql> uninstall plugin daemon_memcached;
Query OK, 0 rows affected (2.00 sec)

mysql> install plugin daemon_memcached soname "libmemcached.so";
Query OK, 0 rows affected (0.00 sec)
        Benchmarks suggest that the InnoDB
        memcached plugin speeds up
        DML operations (inserts,
        updates, and deletes) more than it speeds up queries. You might
        focus your initial development efforts on write-intensive
        applications that are I/O-bound, and look for opportunities to
        use MySQL for new kinds of write-intensive applications.
INSERT INTO t1 (key,val) VALUES (some_key, some_value);
SELECT val FROM t1 WHERE key = some_key;
UPDATE t1 SET val = new_value WHERE key = some_key;
UPDATE t1 SET val = val + x WHERE key = some_key;
DELETE FROM t1 WHERE key = some_key;
            Single-row DML statements are the most straightforward kinds
            of statements to turn into memcached
            operations: INSERT becomes
            add, UPDATE becomes
            set, incr or
            decr, and DELETE
            becomes delete. When issued through the
            memcached interface, these operations are
            guaranteed to affect only 1 row because
            key is unique within the table.
          
            In the preceding SQL examples, t1 refers
            to the table currently being used by the
            InnoDB memcached
            plugin based on the configuration settings in the
            innodb_memcache.containers table,
            key refers to the column listed under
            key_columns, and val
            refers to the column listed under
            value_columns.
          
TRUNCATE TABLE t1;
DELETE FROM t1;
            Corresponds to the flush_all operation,
            when t1 is configured as the table for
            memcached operations as in the previous
            step. Removes all the rows in the table.
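            For illustration, an equivalent session through the memcached telnet interface might look like the following, where counter is an arbitrary key: the set maps to an insert or update, get to a query, incr to an arithmetic update, and delete to a single-row delete. Responses are shown as they would typically appear:

set counter 0 0 2
10
STORED
get counter
VALUE counter 0 2
10
END
incr counter 5
15
delete counter
DELETED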
        You can access the InnoDB table (by default,
        test.demo_test) through the standard SQL
        interfaces. However, there are some restrictions:
            When querying a table through SQL that is also being accessed
            through the memcached interface, remember
            that memcached operations can be
            configured to be committed periodically rather than after
            every write operation. This behavior is controlled by the
            daemon_memcached_w_batch_size
            option. If this option is set to a value greater than 1, use
            READ UNCOMMITTED queries
            to find the just-inserted rows:
          
mysql> set session TRANSACTION ISOLATION LEVEL read uncommitted;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from demo_test;
+------+------+------+------+-----------+------+------+------+------+------+------+
| cx   | cy   | c1   | cz   | c2        | ca   | CB   | c3   | cu   | c4   | C5   |
+------+------+------+------+-----------+------+------+------+------+------+------+
| NULL | NULL | a11  | NULL | 123456789 | NULL | NULL | 10   | NULL | 3    | NULL |
+------+------+------+------+-----------+------+------+------+------+------+------+
1 row in set (0.00 sec)
            When modifying a table using SQL that is also accessed
            through the memcached interface, you can
            configure memcached operations to start a
            new transaction periodically rather than for every read
            operation. This behavior is controlled by the
            daemon_memcached_r_batch_size
            option. If this option is set to a value greater than 1,
            changes made to the table using SQL are not immediately
            visible to memcached operations.
          
            The InnoDB table is locked IS (intention
            shared) or IX (intention exclusive) for all operations
            in a transaction. If you increase
            daemon_memcached_r_batch_size
            and
            daemon_memcached_w_batch_size
            substantially from their default value of 1, the table is
            most likely intention-locked between each operation,
            preventing you from running
            DDL statements on the table.
      Because the InnoDB memcached
      daemon plugin supports the MySQL
      binary log, any updates
      made on a master server
      through the memcached interface can be
      replicated for backup, balancing intensive read workloads, and
      high availability. All memcached commands are
      supported for binary logging.
    
      You do not need to set up the InnoDB
      memcached plugin on the
      slave servers. In this
      configuration, the primary advantage is increased write throughput
      on the master. The speed of the replication mechanism is not
      affected.
    
      The following sections show how to use the binary log capability
      when you use the InnoDB memcached
      plugin along with MySQL replication. They assume you have already
      done the basic setup described in
      Section 14.17.3, “Getting Started with InnoDB Memcached Plugin”.
          To use the InnoDB
          memcached plugin with the MySQL
          binary log, enable the
          innodb_api_enable_binlog
          configuration option on the
          master server. This
          option can only be set at server boot time. You must also
          enable the MySQL binary log on the master server with the
          --log-bin option. You can add
          these options to your server configuration file such as
          my.cnf, or on the
          mysqld command line.
        
mysqld ... --log-bin --innodb_api_enable_binlog=1
Then configure your master and slave server, as described in Section 17.1.1, “How to Set Up Replication”.
Use mysqldump to create a master data snapshot, and sync it to the slave server.
master shell: mysqldump --all-databases --lock-all-tables > dbdump.db
slave shell:  mysql < dbdump.db
          On the master server, issue show master
          status to obtain the Master Binary Log Coordinates:
        
mysql> show master status;
          On the slave server, use a change master to
          statement to set up a slave server with the above coordinates:
        
mysql> CHANGE MASTER TO
    -> MASTER_HOST='localhost',
    -> MASTER_USER='root',
    -> MASTER_PASSWORD='',
    -> MASTER_PORT = 13000,
    -> MASTER_LOG_FILE='0.000001',
    -> MASTER_LOG_POS=114;
Then start the slave:
mysql> start slave;
If the error log prints output similar to the following, the slave is ready for replication:
111002 18:47:15 [Note] Slave I/O thread: connected to master 'root@localhost:13000', replication started in log '0.000001' at position 114
To test the server with the above replication setup, we use the memcached telnet interface, and also query the master and slave servers using SQL to verify the results.
      In the configuration setup SQL script, one example table
      demo_test is created in the
      test database for use by
      memcached. We use this default table for
      the demonstrations:
          Use set to insert a record, key
          test1, value t1, and
          flag 10:
        
telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set test1 10 0 2
t1
STORED
      In the master server, you can see that the row is inserted.
      c1 maps to the key, c2 maps
      to the value, c3 is the flag,
      c4 is the cas value, and
      c5 is the expiration.
    
mysql> select * from test.demo_test;
| c1 | c2 | c3 | c4 | c5 | 
|---|---|---|---|---|
| test1 | t1 | 10 | 2 | 0 | 
1 row in set (0.00 sec)
In the slave server, you will see the same record is inserted by replication:
mysql> select * from test.demo_test;
| c1 | c2 | c3 | c4 | c5 | 
|---|---|---|---|---|
| test1 | t1 | 10 | 2 | 0 | 
1 row in set (0.00 sec)
          Use the set command to update the key
          test1 to a new value
          new:
Connected to 127.0.0.1.
Escape character is '^]'.
set test1 10 0 3
new
STORED
      From the slave server, the update is replicated (notice the
      cas value also updated):
    
mysql> select * from test.demo_test;
| c1 | c2 | c3 | c4 | c5 | 
|---|---|---|---|---|
| test1 | new | 10 | 3 | 0 | 
1 row in set (0.00 sec)
          Delete the record with a delete command:
Connected to 127.0.0.1.
Escape character is '^]'.
delete test1
DELETED
When the delete is replicated to the slave, the record on the slave is also deleted:
mysql> select * from test.demo_test;
Empty set (0.00 sec)
          Truncate the table with the flush_all
          command.
First, insert two records by telnetting to the master server:
Connected to 127.0.0.1.
Escape character is '^]'.
set test2 10 0 5
again
STORED
set test3 10 0 6
again1
STORED
In the slave server, confirm these two records are replicated:
mysql> select * from test.demo_test;
| c1 | c2 | c3 | c4 | c5 | 
|---|---|---|---|---|
| test2 | again | 10 | 5 | 0 | 
| test3 | again1 | 10 | 6 | 0 | 
2 rows in set (0.00 sec)
      Call flush_all in the telnet interface to
      truncate the table:
    
Connected to 127.0.0.1.
Escape character is '^]'.
flush_all
OK
Then check that the truncation operation is replicated on the slave server:
mysql> select * from test.demo_test;
Empty set (0.00 sec)
      All memcached commands are supported in terms
      of replication.
Binlog Format:
          Most memcached operations are mapped to
          DML statements (analogous to
          insert, delete, update). Since there is no actual SQL
          statement being processed by the MySQL server, all
          memcached commands (except for
          flush_all) use Row-Based Replication (RBR)
          logging. This is independent of any server
          binlog_format setting.
        
          The memcached flush_all
          command is mapped to the TRUNCATE
          TABLE command. Since
          DDL commands can only use
          statement-based logging, this flush_all
          command is replicated by sending a
          TRUNCATE TABLE statement.
Transactions:
          The concept of
          transactions has not
          typically been part of memcached
          applications. We use
          daemon_memcached_r_batch_size
          and
          daemon_memcached_w_batch_size
          to control the read and write transaction batch size for
          performance considerations. These settings do not affect
          replication: each SQL operation on the underlying table is
          replicated right after successful completion.
        
          The default value of
          daemon_memcached_w_batch_size
          is 1, so each memcached write operation is
          committed immediately. This default setting incurs a certain
          amount of performance overhead, to avoid any inconsistency in
          the data visible on the master and slave servers. The
          replicated records will always be available immediately on the
          slave server. If you set
          daemon_memcached_w_batch_size greater than
          1, records inserted or updated through the
          memcached interface are not immediately
          visible on the master server; to view these records on the
          master server before they are committed, issue set
          transaction isolation level read uncommitted.
      The InnoDB memcached engine
      accesses InnoDB through
      InnoDB APIs. Most of the APIs are directly
      adopted from embedded InnoDB.
      InnoDB API functions are passed to
      InnoDB memcached as
      “callback functions”. InnoDB API
      functions access the InnoDB table directly, and
      are mostly DML operations except for the
      TRUNCATE TABLE operation.
    
      All memcached commands, listed below, are
      implemented through the InnoDB
      memcached API. The following table outlines how
      each memcached command is mapped to a DML
      operation.
Table 14.9 memcached Commands and Associated DML Operation
| memcached Command | DML Operation | 
|---|---|
| get | a read/fetch command | 
| set | a search followed by an insertion or update (depending on whether or not a key exists) | 
| add | a search followed by an insertion or update | 
| replace | a search followed by an update | 
| append | a search followed by an update (appends data to the result before update) | 
| prepend | a search followed by an update (prepends data to the result before update) | 
| incr | a search followed by an update | 
| decr | a search followed by an update | 
| delete | a search followed by a deletion | 
| flush_all | truncate table | 
      This section describes the underlying tables used by the
      InnoDB memcached plugin.
    
      The innodb_memcached_config.sql configuration
      script installs three tables required by the
      InnoDB memcached plugin. The
      tables are created in a dedicated
      innodb_memcache database:
    
mysql> USE innodb_memcache;
Database changed
mysql> SHOW TABLES;
+---------------------------+
| Tables_in_innodb_memcache |
+---------------------------+
| cache_policies            |
| config_options            |
| containers                |
+---------------------------+
3 rows in set (0.01 sec)
      The containers table is the most important of
      the three tables. Entries in this table are
      “containers” for InnoDB tables
      that are used to store memcached values. The
      containers map the columns of the InnoDB table
      to the values outlined in the table below. This mapping is
      necessary for memcached to work with
      InnoDB tables.
    
      The containers table has a default entry for
      the test.demo_test table. To use the
      InnoDB memcached plugin with
      your own InnoDB table, you must add an entry
      for your table to the containers table.
Table 14.10 containers Columns
| Column | Description | 
|---|---|
| name | The name given to the container. | 
| db_schema | The name of the database in which the InnoDB table resides. This is a required value. | 
| db_table | The name of the InnoDB table that stores memcached values. This is a required value. | 
| key_columns | The column in the InnoDB table that contains lookup key values for memcached operations. This is a required value. | 
| value_columns | The columns (one or more) in the InnoDB table that store memcached data. Multiple columns can be specified using the separator character that is specified in the innodb_memcache.config_options table. By default, the separator is a pipe character (“|”). To specify multiple columns, separate them with the defined separator character. For example: col1|col2|col3. This is a required value. | 
| flags | Specifies the columns in the InnoDB table that are used as flags (a user-defined numeric value that is stored and retrieved along with the main value) for memcached. A flag value can be used as a column specifier for some operations (such as incr, prepend) if the memcached value is mapped to multiple columns, so that an operation is performed on a specified column. For example, if you have mapped a value to 3 columns, and only want the increment operation performed on one of these columns, you can use flags to specify which column will be used for these operations. If you do not use the flags column, set its value to 0 to indicate that it is unused. | 
| cas_column | The column in the InnoDB table that stores compare-and-swap (cas) values. The cas_column value and expire_time_column value are related to the way memcached hashes requests to different servers and caches data in memory. Because the InnoDB memcached plugin is tightly integrated with a single memcached daemon, and the in-memory caching mechanism is handled by MySQL and the buffer pool, these columns are rarely needed in this type of deployment. If you do not use these columns, set their value to 0 to indicate that the columns are unused. | 
| expire_time_column | The column in the InnoDB table that stores expiration values. The cas_column value and expire_time_column value are related to the way memcached hashes requests to different servers and caches data in memory. Because the InnoDB memcached plugin is tightly integrated with a single memcached daemon, and the in-memory caching mechanism is handled by MySQL and the buffer pool, these columns are rarely needed in this type of deployment. If you do not use these columns, set their value to 0 to indicate that the columns are unused. As of MySQL 5.6.25, the maximum expire time is defined as INT_MAX32, or 2147483647 seconds (approximately 68 years). | 
| unique_idx_name_on_key | The name of the index on the key column. It must be a unique index. It can be the primary key or a secondary index. Preferably, make the key column the primary key of the InnoDB table. Doing so saves a lookup step over using a secondary index for this column. You cannot make a covering index for memcached lookups; InnoDB returns an error if you try to define a composite secondary index over both the key and value columns. | 
          You must supply a value for db_schema,
          db_table, key_columns,
          value_columns, and
          unique_idx_name_on_key. Otherwise, the
          setup will fail. Specify 0 for
          flags, cas_column, and
          expire_time_column if they are unused.
          Failing to do so could cause your setup to fail.
        
          key_columns: The maximum limit for a
          memcached key is 250 characters, which is
          enforced by memcached. If a mapped key
          longer than the maximum limit is used, the operation will
          fail. The mapped key must be a non-Null
          CHAR or
          VARCHAR type.
        
          value_columns: Must be mapped to a
          CHAR,
          VARCHAR, or
          BLOB column. There is no length
          restriction and the value can be NULL.
        
          cas_column: The cas
          value is a 64 bit integer. It must be mapped to a
          BIGINT of at least 8 bytes. If
          you do not use this column, set its value to
          0 to indicate that it is unused.
        
          expire_time_column: Must be mapped to an
          INTEGER of at least 4 bytes.
          Expiration time is defined as a 32-bit integer for Unix time
          (the number of seconds since January 1, 1970, as a 32-bit
          value), or the number of seconds starting from the current
          time. For the latter, the number of seconds may not exceed
          60*60*24*30 (the number of seconds in 30 days). If the number
          sent by a client is larger, the server will consider it to be
          a real Unix time value rather than an offset from the current
          time. If you do not use this column, set its value to
          0 to indicate that it is unused.
        
          flags: Must be mapped to an
          INTEGER of at least 32 bits and
          can be NULL. If you do not use this column, set its value to
          0 to indicate that it is unused.
A pre-check is performed at plugin load time to enforce column constraints. If any mismatches are found, the plugin will not load.
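          For example, an entry for a hypothetical table city.store_data, whose columns k, v, c_flags, c_cas, and c_exp satisfy the constraints above and whose key column k is the primary key, might look like this (all names here are illustrative):

mysql> INSERT INTO innodb_memcache.containers
    -> (name, db_schema, db_table, key_columns, value_columns,
    ->  flags, cas_column, expire_time_column, unique_idx_name_on_key)
    -> VALUES ('default', 'city', 'store_data', 'k', 'v',
    ->  'c_flags', 'c_cas', 'c_exp', 'PRIMARY');

          Because entries in the containers table are read when the plugin initializes, restart the MySQL server, or uninstall and reinstall the daemon_memcached plugin, after adding or changing an entry so that the change takes effect.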
      The cache_policies table defines a cache policy
      for your InnoDB memcached
      setup. You can specify individual policies for
      get, set,
      delete, and flush operations
      within a single cache policy. The default setting for all
      operations is innodb_only.
          innodb_only: Use InnoDB
          as the data store of memcached.
        
          cache-only: Use the traditional
          memcached engine as the data store.
        
          caching: Use both InnoDB
          and the traditional memcached engine as
          data stores. In this case, if memcached
          cannot find a key in memory, it searches for the value in an
          InnoDB table.
        
          disable: Disable caching.
Table 14.11 cache_policies Columns
| Column | Description | 
|---|---|
| policy_name | Name of the cache policy. The default cache policy name is cache_policy. | 
| get_policy | The cache policy for get operations. Valid values are innodb_only, cache-only, caching, or disabled. The default setting is innodb_only. | 
| set_policy | The cache policy for set operations. Valid values are innodb_only, cache-only, caching, or disabled. The default setting is innodb_only. | 
| delete_policy | The cache policy for delete operations. Valid values are innodb_only, cache-only, caching, or disabled. The default setting is innodb_only. | 
| flush_policy | The cache policy for flush operations. Valid values are innodb_only, cache-only, caching, or disabled. The default setting is innodb_only. | 
      The config_options table stores
      memcached-related settings that can be changed
      at runtime, using SQL. Currently, supported configuration options
      are separator and
      table_map_delimiter:
Table 14.12 config_options Columns
| Column | Description | 
|---|---|
| Name | Name of the memcached-related configuration option. Currently, the following configuration options are supported through the config_options table: separator and table_map_delimiter. | 
| Value | The value assigned to the memcached-related configuration option. | 
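          For example, assuming the columns are named name and value as listed above, you can inspect or adjust these settings with ordinary SQL; whether a change is picked up immediately or only after the plugin is restarted can vary, so verify the behavior on your server:

mysql> SELECT * FROM innodb_memcache.config_options;

mysql> UPDATE innodb_memcache.config_options
    -> SET value = ',' WHERE name = 'separator';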
          During plugin initialization, when InnoDB
          memcached is configured with information
          defined in the containers table, each
          mapped column that is parsed from
          value_columns is verified against the
          mapped table. If multiple columns are mapped, there is a check
          to ensure that each column exists and is the right type.
        
          At run-time, for memcached insert
          operations, if there are more delimiters in the value than the
          number of mapped columns, only as many values as there are
          mapped columns are taken. For example, if there are 6 mapped columns and 7
          delimited values are provided, only the first 6 delimited
          values are taken. The 7th delimited value is ignored.
        
If there are fewer delimited values than mapped columns, unfilled columns are set to NULL. If an unfilled column cannot be NULL, the insert will fail.
If a table has more columns than mapped values, the extra columns do not affect output results.
      The innodb_memcached_config.sql configuration
      script creates a table demo_test in the
      test database as an example. It also allows the
      InnoDB memcached plugin to
      work immediately, without creating any additional tables.
    
      The entries in the containers table define which
      column is used for what purpose, as described above:
    
mysql> select * from innodb_memcache.containers;
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2            | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)

mysql> desc test.demo_test;
+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| c1    | varchar(32)         | NO   | PRI |         |       |
| c2    | varchar(1024)       | YES  |     | NULL    |       |
| c3    | int(11)             | YES  |     | NULL    |       |
| c4    | bigint(20) unsigned | YES  |     | NULL    |       |
| c5    | int(11)             | YES  |     | NULL    |       |
+-------+---------------------+------+-----+---------+-------+
5 rows in set (0.01 sec)
      When no table ID is requested through the @@
      notation in the key name:
          If a row has a name value of
          default, the corresponding table is used by
          the memcached plugin. Thus, when you make
          your first entry in
          innodb_memcache.containers to move beyond
          the demo_test table, use a
          name value of default.
        
          If there is no
          innodb_memcache.containers.name value of
          default, the row with the first
          name value in alphabetical order is used.
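      To address a container explicitly instead of relying on this default selection, a memcached request can name the container using the @@ notation mentioned earlier. For example, assuming the @@name form switches the active container for subsequent requests, the following hypothetical commands switch to the entry named aaa and then store a value in its mapped table (server responses omitted):

get @@aaa
set test1 10 0 2
t1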
      The following list shows some potential issues you might encounter
      using the InnoDB memcached
      plugin, and solutions or workarounds where available:
If you see this error in your MySQL error log, the server might fail to start:
failed to set rlimit for open files. Try running as root or requesting smaller maxconns value.
The error message is actually from the memcached daemon. One solution is to raise the OS limit for the number of open files. The command varies depending on the operating system. For example, here are the commands to check and increase the limit on several operating systems:
# Linux
$ ulimit -n
1024
$ ulimit -n 4096
$ ulimit -n
4096

# OS X Lion (10.6)
$ ulimit -n
256
$ ulimit -n 4096
$ ulimit -n
4096
          The other solution is to reduce the number of concurrent
          connections available for the memcached
          daemon, using the -c option which defaults
          to 1024. Encode that memcached option using
          the MySQL option
          daemon_memcached_option
          inside the MySQL configuration file:
        
[mysqld]
...
loose-daemon_memcached_option='-c 64'
          To troubleshoot problems where the
          memcached daemon is unable to store data in
          or retrieve data from the InnoDB table,
          specify the memcached option
          -vvv through the MySQL configuration option
          daemon_memcached_option.
          Examine the MySQL error log for debug output related to
          memcached operations.
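          For example, such an entry in the configuration file might look like the following (using the loose- prefix so that servers without the plugin ignore the option):

[mysqld]
loose-daemon_memcached_option='-vvv'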
        
If the column specified to hold the memcached item values is the wrong data type, such as a numeric type instead of a string type, attempts to store key/value pairs will fail with no specific error code or message.
          If the daemon_memcached plugin causes any
          issues with starting the MySQL server, disable it during
          troubleshooting by adding this line under the
          [mysqld] group in your MySQL configuration
          file:
        
daemon_memcached=OFF
          For example, if you run the install plugin
          command before running the
          innodb_memcached_config.sql configuration
          script to set up the necessary database and tables, the server
          might crash and be unable to start. Or, if you set up an
          incorrect entry in the
          innodb_memcache.containers table, the
          server might be unable to start.
        
To permanently turn off the memcached plugin for a MySQL instance, issue the following command:
mysql> uninstall plugin daemon_memcached;
          If you run more than one instance of MySQL on the same
          machine, with the memcached daemon plugin
          enabled in each, make sure to specify a unique
          memcached port for each one using the
          daemon_memcached_option
          configuration option.
        
          You might find that a SQL statement cannot find an expected
          table, or there is no data in the table, but
          memcached API calls still work and retrieve
          the expected data. This can happen if you do not set up the
          entry in the innodb_memcache.containers
          table, or do not switch to that table by issuing a
          GET or SET request with
          the key
          @@table_id, or
          make a change to an existing entry in
          innodb_memcache.containers without
          restarting the MySQL server afterward. The free-form storage
          mechanism is flexible enough that your requests to store or
          retrieve a multi-column value like
          col1|col2|col3 will usually still work,
          even if the daemon is using the
          test.demo_test table which stores all the
          data within a single column.
        
          When defining your own InnoDB table for use
          with InnoDB memcached,
          and columns in your table are defined as NOT NULL, ensure that
          values are supplied for the NOT NULL columns when inserting a
          descriptor for the InnoDB table into the
          memcached containers table
          (innodb_memcache.containers). If your
          descriptor INSERT statement
          contains fewer delimited values than there are mapped columns,
          unfilled columns are set to NULL. Attempting to insert a NULL
          value into a NOT NULL column causes the
          INSERT to fail, which may only
          become evident after you reinitialize the
          InnoDB memcached plugin
          to apply changes to the containers table.
        
          If cas_column and
          expire_time_column of the
          innodb_memcached.containers table are set
          to NULL, the following error will be returned when attempting
          to load the memcached plugin:
        
InnoDB_Memcached: column 6 in the entry for config table 'containers' in
database 'innodb_memcache' has an invalid NULL value.
       
          The memcached plugin rejects usage of NULL
          in the cas_column and
          expire_time_column columns. Set the value
          of these columns to 0 if the columns are
          unused.
        
As the length of the memcached key and values increase, you encounter size and length limits at different points:
When the key exceeds 250 bytes in size, memcached operations return an error. This is currently a fixed limit within memcached.
              You might encounter InnoDB-related
              limits when the value exceeds 768 bytes in size, or 3072
              bytes in size, or 1/2 of the size specified by
              innodb_page_size. These
              limits primarily apply if you intend to create an index on
              the value column to run report-generating queries on that
              column from SQL. See Section 14.5.7, “Limits on InnoDB Tables”
              for details.
            
The maximum size for the combination of the key and the value is 1 MB.
          If you share configuration files across MySQL servers with
          different versions, using the latest configuration options for
          the memcached plugin could cause startup
          errors for older MySQL versions. To avoid compatibility
          problems, use the loose forms of these
          option names, for example
          loose-daemon_memcached_option='-c 64'
          instead of daemon_memcached_option='-c 64'.
        
There is no restriction or check in place to validate character set settings. memcached stores and retrieves keys and values in bytes and is therefore not character set sensitive. However, you must ensure that the memcached client and the MySQL table use the same character set.
    The following general guidelines apply to troubleshooting
    InnoDB problems:
When an operation fails or you suspect a bug, look at the MySQL server error log (see Section 5.2.2, “The Error Log”).
        If the failure is related to a
        deadlock, run with the
        innodb_print_all_deadlocks
        option enabled so that details about each
        InnoDB deadlock are printed to the MySQL
        server error log.
      
        Issues relating to the InnoDB data dictionary
        include failed CREATE TABLE
        statements (orphaned table files), inability to open
        .InnoDB files, and system cannot
        find the path specified errors. For information
        about these sorts of problems and errors, see
        Section 14.18.3, “Troubleshooting InnoDB Data Dictionary Operations”.
      
        When troubleshooting, it is usually best to run the MySQL server
        from the command prompt, rather than through
        mysqld_safe or as a Windows service. You can
        then see what mysqld prints to the console,
        and so have a better grasp of what is going on. On Windows,
        start mysqld with the
        --console option to direct the
        output to the console window.
      
        
        
        Enable the InnoDB Monitors to obtain
        information about a problem (see
        Section 14.14, “InnoDB Monitors”). If the problem is
        performance-related, or your server appears to be hung, you
        should enable the standard Monitor to print information about
        the internal state of InnoDB. If the problem
        is with locks, enable the Lock Monitor. If the problem is in
        creation of tables or other data dictionary operations, enable
        the Table Monitor to print the contents of the
        InnoDB internal data dictionary. To see
        tablespace information enable the Tablespace Monitor.
      
        InnoDB temporarily enables standard
        InnoDB Monitor output under the following
        conditions:
A long semaphore wait
            InnoDB cannot find free blocks in the
            buffer pool
          
Over 67% of the buffer pool is occupied by lock heaps or the adaptive hash index
        If you suspect that a table is corrupt, run
        CHECK TABLE on that table.
      The troubleshooting steps for InnoDB I/O
      problems depend on when the problem occurs: during startup of the
      MySQL server, or during normal operations when a DML or DDL
      statement fails due to problems at the file system level.
      If something goes wrong when InnoDB attempts to
      initialize its tablespace or its log files, delete all files
      created by InnoDB: all
      ibdata files and all
      ib_logfile files. If you already created some
      InnoDB tables, also delete the corresponding
      .frm files for these tables, and any
      .ibd files if you are using multiple
      tablespaces, from the MySQL database directories. Then try the
      InnoDB database creation again. For easiest
      troubleshooting, start the MySQL server from a command prompt so
      that you see what is happening.
      If InnoDB prints an operating system error
      during a file operation, usually the problem has one of the
      following solutions:
          Make sure the InnoDB data file directory
          and the InnoDB log directory exist.
        
Make sure mysqld has access rights to create files in those directories.
          Make sure mysqld can read the proper
          my.cnf or my.ini
          option file, so that it starts with the options that you
          specified.
        
Make sure the disk is not full and you are not exceeding any disk quota.
Make sure that the names you specify for subdirectories and data files do not clash.
          Double-check the syntax of the
          innodb_data_home_dir and
          innodb_data_file_path values.
          In particular, any MAX value in the
          innodb_data_file_path option
          is a hard limit, and exceeding that limit causes a fatal
          error.
      To investigate database page corruption, you might dump your
      tables from the database with
      SELECT ... INTO
      OUTFILE. Usually, most of the data obtained in this way
      is intact. Serious corruption might cause
      SELECT * FROM tbl_name statements or
      InnoDB background operations to crash or
      assert, or even cause InnoDB roll-forward
      recovery to crash. In such cases, you can use the
      innodb_force_recovery option to
      force the InnoDB storage engine to start up
      while preventing background operations from running, so that you
      can dump your tables. For example, you can add the following line
      to the [mysqld] section of your option file
      before restarting the server:
    
[mysqld]
innodb_force_recovery = 1
        Only set innodb_force_recovery
        to a value greater than 0 in an emergency situation, so that you
        can start InnoDB and dump your tables. Before
        doing so, ensure that you have a backup copy of your database in
        case you need to recreate it. Values of 4 or greater can
        permanently corrupt data files. Only use an
        innodb_force_recovery setting
        of 4 or greater on a production server instance after you have
        successfully tested the setting on a separate physical copy of
        your database. When forcing InnoDB recovery,
        you should always start with
        innodb_force_recovery=1 and
        only increase the value incrementally, as necessary.
      innodb_force_recovery is 0 by
      default (normal startup without forced recovery). The permissible
      nonzero values for
      innodb_force_recovery are 1 to 6.
      A larger value includes the functionality of lesser values. For
      example, a value of 3 includes all of the functionality of values
      1 and 2.
    
      If you are able to dump your tables with an
      innodb_force_recovery value of 3
      or less, then you are relatively safe that only some data on
      corrupt individual pages is lost. A value of 4 or greater is
      considered dangerous because data files can be permanently
      corrupted. A value of 6 is considered drastic because database
      pages are left in an obsolete state, which in turn may introduce
      more corruption into B-trees and other database structures.
    
      As a safety measure, InnoDB prevents
      INSERT,
      UPDATE, or
      DELETE operations when
      innodb_force_recovery is greater
      than 0. As of MySQL 5.6.15, an
      innodb_force_recovery setting of
      4 or greater places InnoDB in read-only mode.
          1
          (SRV_FORCE_IGNORE_CORRUPT)
        
          Lets the server run even if it detects a corrupt
          page. Tries to make
          SELECT * FROM tbl_name jump over
          corrupt index records and pages, which helps in dumping
          tables.
          2
          (SRV_FORCE_NO_BACKGROUND)
        
Prevents the master thread and any purge threads from running. If a crash would occur during the purge operation, this recovery value prevents it.
          3
          (SRV_FORCE_NO_TRX_UNDO)
        
Does not run transaction rollbacks after crash recovery.
          4
          (SRV_FORCE_NO_IBUF_MERGE)
        
          Prevents insert
          buffer merge operations. If they would cause a crash,
          does not do them. Does not calculate table
          statistics. This value
          can permanently corrupt data files. After using this value, be
          prepared to drop and recreate all secondary indexes. As of
          MySQL 5.6.15, sets InnoDB to read-only.
        
          5
          (SRV_FORCE_NO_UNDO_LOG_SCAN)
        
          Does not look at undo
          logs when starting the database:
          InnoDB treats even incomplete transactions
          as committed. This value can permanently corrupt data files.
          As of MySQL 5.6.15, sets InnoDB to
          read-only.
        
          6
          (SRV_FORCE_NO_LOG_REDO)
        
          Does not do the redo log
          roll-forward in connection with recovery. This value can
          permanently corrupt data files. Leaves database pages in an
          obsolete state, which in turn may introduce more corruption
          into B-trees and other database structures. As of MySQL
          5.6.15, sets InnoDB to read-only.
      You can SELECT from tables to dump
      them, or with an
      innodb_force_recovery value of 3
      or less you can DROP or
      CREATE tables. If you know that a given table
      is causing a crash on rollback, you can drop it. If you encounter
      a runaway rollback caused by a failing mass import or
      ALTER TABLE, you can kill the
      mysqld process and set
      innodb_force_recovery to
      3 to bring the database up without the
      rollback, and then DROP the table that is
      causing the runaway rollback.
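      For example, a minimal version of that sequence, where test.big_import is a hypothetical table stuck in a runaway rollback, begins with adding the setting to the configuration file:

[mysqld]
innodb_force_recovery = 3

      After restarting the server with this setting, drop the table, then remove the setting and restart again for normal operation:

mysql> DROP TABLE test.big_import;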
    
      If corruption within the table data prevents you from dumping the
      entire table contents, a query with an ORDER BY
      primary_key DESC clause might
      be able to dump the portion of the table after the corrupted part.
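      For example, the following illustrative statement dumps the readable tail of a hypothetical table t1 whose primary key column is id; the output file is written on the server host and must not already exist:

mysql> SELECT * INTO OUTFILE '/tmp/t1_tail.txt'
    -> FROM t1 ORDER BY id DESC;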
      If a high innodb_force_recovery
      value is required to start InnoDB, there may be
      corrupted data structures that could cause complex queries
      (queries containing WHERE, ORDER
      BY, or other clauses) to fail. In this case, you may
      only be able to run basic SELECT * FROM t
      queries.
      Information about table definitions is stored both in the
      .frm files, and in the InnoDB
      data dictionary. If
      you move .frm files around, or if the server
      crashes in the middle of a data dictionary operation, these
      sources of information can become inconsistent.
    
      If a data dictionary corruption or consistency issue prevents you
      from starting InnoDB, see
      Section 14.18.2, “Forcing InnoDB Recovery” for information about
      manual recovery.
      A symptom of an out-of-sync data dictionary is that a
      CREATE TABLE statement fails. If
      this occurs, look in the server's error log. If the log says that
      the table already exists inside the InnoDB
      internal data dictionary, you have an orphaned table inside the
      InnoDB tablespace files that has no
      corresponding .frm file. The error message
      looks like this:
    
InnoDB: Error: table test/parent already exists in InnoDB internal
InnoDB: data dictionary. Have you deleted the .frm file
InnoDB: and not used DROP TABLE? Have you used DROP DATABASE
InnoDB: for InnoDB tables in MySQL version <= 3.23.43?
InnoDB: See the Restrictions section of the InnoDB manual.
InnoDB: You can drop the orphaned table inside InnoDB by
InnoDB: creating an InnoDB table with the same name in another
InnoDB: database and moving the .frm file to the current database.
InnoDB: Then MySQL thinks the table exists, and DROP TABLE will
InnoDB: succeed.
      You can drop the orphaned table by following the instructions
      given in the error message. If you are still unable to use
      DROP TABLE successfully, the
      problem may be due to name completion in the
      mysql client. To work around this problem,
      start the mysql client with the
      --skip-auto-rehash
      option and try DROP TABLE again.
      (With name completion on, mysql tries to
      construct a list of table names, which fails when a problem such
      as just described exists.)
      Another symptom of an out-of-sync data dictionary is that MySQL
      prints an error that it cannot open a .InnoDB
      file:
    
ERROR 1016: Can't open file: 'child2.InnoDB'. (errno: 1)
In the error log you can find a message like this:
InnoDB: Cannot find table test/child2 from the internal data dictionary
InnoDB: of InnoDB though the .frm file for the table exists. Maybe you
InnoDB: have deleted and recreated InnoDB data files but have forgotten
InnoDB: to delete the corresponding .frm files of InnoDB tables?
      This means that there is an orphaned .frm
      file without a corresponding table inside
      InnoDB. You can drop the orphaned
      .frm file by deleting it manually.
      If MySQL crashes in the middle of an ALTER
      TABLE operation, you may be left with an orphaned
      intermediate table. Intermediate table names begin with
      “#sql-”. In your data directory you will see an
      #sql-*.ibd file and possibly an accompanying
      #sql-*.frm file. The intermediate table is
      also listed in Table
      Monitor output and referenced in InnoDB
      INFORMATION_SCHEMA tables.
    
      Removing an orphaned intermediate table requires a table format
      file (a .frm file) that matches the table
      schema defined in the #sql-*.ibd file (it
      must have the same columns and indexes). Depending on when the
      crash occurred during the ALTER TABLE
      operation, the orphaned #sql-*.ibd file could
      have a pre-ALTER or
      post-ALTER schema definition, and the data in
      the accompanying #sql-*.frm file (if present)
      may or may not match.
    
To remove the orphaned intermediate table, perform the following steps:
          Determine if the #sql-*.ibd file has a
          pre-ALTER or post-ALTER
          schema definition. You can view the columns and indexes of the
          intermediate table using the
          Table Monitor or by
          querying InnoDB
          INFORMATION_SCHEMA tables.
          INNODB_SYS_TABLES provides the
          TABLE_ID for the intermediate table, which
          you can use to retrieve column and index information from
          INNODB_SYS_COLUMNS, and
          INNODB_SYS_INDEXES.
        
          Once you have determined if the
          #sql-*.ibd file has a
          pre-ALTER or post-ALTER
          schema definition, create a matching
          #sql-*.frm file in a different database
          directory. For example, if an intermediate table has a
          post-ALTER schema definition, create an
          .frm file that matches the altered schema
          definition:
        
mysql> CREATE TABLE tmp LIKE employees.salaries;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE tmp DROP COLUMN to_date;
Query OK, 0 rows affected (0.06 sec)
Records: 0  Duplicates: 0  Warnings: 0
          Copy the .frm file to the database
          directory where the orphaned table is located and rename it to
          match the name of the #sql-*.ibd file:
        
shell> cp tmp.frm employees/#sql-ib87.frm
          Drop the intermediate table by issuing a DROP
          TABLE statement, prefixing the name of the table
          with #mysql50# and enclosing the table name
          in backticks. For example:
        
mysql> DROP TABLE `#mysql50##sql-ib87`;
Query OK, 0 rows affected (0.01 sec)
          The #mysql50# prefix tells MySQL to ignore
          file name safe encoding introduced in MySQL
          5.1. Enclosing the table name in backticks is required to
          perform SQL statements on table names with special characters
          such as “#”.
        
          If there is a leftover #sql-*.frm file,
          drop it. MySQL reports an “unknown table” error,
          which can be ignored.
        
mysql> DROP TABLE `#mysql50##sql-36ab_2`;
ERROR 1051 (42S02): Unknown table 'employees.#mysql50##sql-36ab_2'
      With innodb_file_per_table
      enabled, the following message might occur if the
      .frm or .ibd files (or both)
      are missing:
    
InnoDB: in InnoDB data dictionary has tablespace id N,
InnoDB: but tablespace with that id or name does not exist. Have
InnoDB: you deleted or moved .ibd files?
InnoDB: This may also be a table created with CREATE TEMPORARY TABLE
InnoDB: whose .ibd and .frm files MySQL automatically removed, but the
InnoDB: table still exists in the InnoDB internal data dictionary.
If this occurs, try the following procedure to resolve the problem:
          Create a matching .frm file in some other
          database directory and copy it to the database directory where
          the orphan table is located.
        
          Issue DROP TABLE for the
          original table. That should successfully drop the table and
          InnoDB should print a warning to the error
          log that the .ibd file was missing.
      Error handling in InnoDB is not always the same
      as specified in the SQL standard. According to the standard, any
      error during an SQL statement should cause rollback of that
      statement. InnoDB sometimes rolls back only
      part of the statement, or the whole transaction. The following
      items describe how InnoDB performs error
      handling:
          If you run out of file space in a
          tablespace, a MySQL
          Table is full error occurs and
          InnoDB rolls back the SQL statement.
        
          A transaction deadlock
          causes InnoDB to
          roll back the entire
          transaction. Retry the
          whole transaction when this happens.
        
          A lock wait timeout causes InnoDB to roll
          back only the single statement that was waiting for the lock
          and encountered the timeout. (To have the entire transaction
          roll back, start the server with the
          --innodb_rollback_on_timeout
          option.) Retry the statement if using the current behavior, or
          the entire transaction if using
          --innodb_rollback_on_timeout.
        
Both deadlocks and lock wait timeouts are normal on busy servers and it is necessary for applications to be aware that they may happen and handle them by retrying. You can make them less likely by doing as little work as possible between the first change to data during a transaction and the commit, so the locks are held for the shortest possible time and for the smallest possible number of rows. Sometimes splitting work between different transactions may be practical and helpful.
          When a transaction rollback occurs due to a deadlock or lock
          wait timeout, it cancels the effect of the statements within
          the transaction. But if the start-transaction statement was
          START
          TRANSACTION or
          BEGIN
          statement, rollback does not cancel that statement. Further
          SQL statements become part of the transaction until the
          occurrence of COMMIT,
          ROLLBACK, or
          some SQL statement that causes an implicit commit.
        
          A duplicate-key error rolls back the SQL statement, if you
          have not specified the IGNORE option in
          your statement.
        
          A row too long error rolls back the SQL
          statement.
        
          Other errors are mostly detected by the MySQL layer of code
          (above the InnoDB storage engine level),
          and they roll back the corresponding SQL statement. Locks are
          not released in a rollback of a single SQL statement.
      During implicit rollbacks, as well as during the execution of an
      explicit
      ROLLBACK SQL
      statement, SHOW PROCESSLIST
      displays Rolling back in the
      State column for the relevant connection.
      The following is a nonexhaustive list of common
      InnoDB-specific errors that you may encounter,
      with information about why each occurs and how to resolve the
      problem.
          1005 (ER_CANT_CREATE_TABLE)
        
          Cannot create table. If the error message refers to error 150,
          table creation failed because a
          foreign key
          constraint was not correctly formed. If the error
          message refers to error −1, table creation probably
          failed because the table includes a column name that matched
          the name of an internal InnoDB table.
        
          1016 (ER_CANT_OPEN_FILE)
        
          Cannot find the InnoDB table from the
          InnoDB data
          files, although the .frm file for
          the table exists. See
          Section 14.18.3, “Troubleshooting InnoDB Data Dictionary Operations”.
        
          1114 (ER_RECORD_FILE_FULL)
        
          InnoDB has run out of free space in the
          tablespace. Reconfigure the tablespace to add a new data file.
        
          1205 (ER_LOCK_WAIT_TIMEOUT)
        
          Lock wait timeout expired. The statement that waited too long
          was rolled back (not the
          entire transaction).
          You can increase the value of the
          innodb_lock_wait_timeout
          configuration option if SQL statements should wait longer for
          other transactions to complete, or decrease it if too many
          long-running transactions are causing
          locking problems and
          reducing concurrency
          on a busy system.
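          For example, the timeout can be adjusted dynamically at either global or session scope (values are in seconds; illustrative values shown):

mysql> SET GLOBAL innodb_lock_wait_timeout = 120;
mysql> SET SESSION innodb_lock_wait_timeout = 5;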
        
          1206 (ER_LOCK_TABLE_FULL)
        
          The total number of locks exceeds the amount of memory
          InnoDB devotes to managing locks. To avoid
          this error, increase the value of
          innodb_buffer_pool_size.
          Within an individual application, a workaround may be to break
          a large operation into smaller pieces. For example, if the
          error occurs for a large
          INSERT, perform several smaller
          INSERT operations.
        
          1213 (ER_LOCK_DEADLOCK)
        
          The transaction
          encountered a deadlock
          and was automatically rolled
          back so that your application could take corrective
          action. To recover from this error, run all the operations in
          this transaction again. A deadlock occurs when requests for
          locks arrive in inconsistent order between transactions. The
          transaction that was rolled back released all its locks, and
          the other transaction can now get all the locks it requested.
          Thus when you re-run the transaction that was rolled back, it
          might have to wait for other transactions to complete, but
          typically the deadlock does not recur. If you encounter
          frequent deadlocks, make the sequence of locking operations
          (LOCK TABLES, SELECT ... FOR
          UPDATE, and so on) consistent between the different
          transactions or applications that experience the issue. See
          Section 14.2.2.9, “How to Cope with Deadlocks” for details.
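          As an illustration (the account table and id values are
          hypothetical), two sessions that update the same pair of rows
          can deadlock if one locks row 3 then row 7 while the other
          locks row 7 then row 3. Acquiring both row locks up front, in
          the same statement and therefore in a consistent order in
          every session, avoids the cycle:

START TRANSACTION;
-- Lock both rows before modifying either of them.
SELECT * FROM account WHERE id IN (3, 7) FOR UPDATE;
UPDATE account SET balance = balance - 10 WHERE id = 3;
UPDATE account SET balance = balance + 10 WHERE id = 7;
COMMIT;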
        
          1216 (ER_NO_REFERENCED_ROW)
        
You are trying to add a row but there is no parent row, and a foreign key constraint fails. Add the parent row first.
          1217 (ER_ROW_IS_REFERENCED)
        
You are trying to delete a parent row that has children, and a foreign key constraint fails. Delete the children first.
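          For instance, with hypothetical parent and child tables
          linked by a foreign key on child.parent_id, rows must be
          added parent-first and removed child-first:

-- Adding rows: insert the parent before the child that references it.
INSERT INTO parent (id) VALUES (10);
INSERT INTO child (id, parent_id) VALUES (1, 10);

-- Removing rows: delete the children before their parent.
DELETE FROM child WHERE parent_id = 10;
DELETE FROM parent WHERE id = 10;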
          ERROR 1553 (HY000): Cannot drop index
          'fooIdx': needed in a foreign key
          constraint
This error message is reported when you attempt to drop the last index that can enforce a particular referential constraint.
          For optimal performance with DML statements,
          InnoDB requires an index to exist on
          foreign key columns,
          so that UPDATE and
          DELETE operations on a
          parent table can
          easily check whether corresponding rows exist in the
          child table. MySQL
          creates or drops such indexes automatically when needed, as a
          side-effect of CREATE TABLE,
          CREATE INDEX, and
          ALTER TABLE statements.
        
          When you drop an index, InnoDB checks
          whether the index is used to enforce a foreign key
          constraint. It is still OK to drop the index if another
          index exists that can be used to enforce the same constraint.
          InnoDB prevents you from dropping the last
          index that can enforce a particular referential constraint.
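          As a sketch (the table and index names below are
          hypothetical, apart from fooIdx taken from the error message
          above), you can create an equivalent index on the foreign key
          column before dropping the one that currently enforces the
          constraint, so the constraint is never left without a usable
          index:

ALTER TABLE child ADD INDEX idx_parent_id (parent_id);
ALTER TABLE child DROP INDEX fooIdx;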
To print the error message for an operating system error number, you can use one of the following options. The perror program is provided with the MySQL distribution.
$ perror 123
$ perl -MPOSIX -le 'print strerror 123'
$ python -c 'import os; print os.strerror(123)'
The following table provides a partial list of Linux system error codes.
| Number | Macro | Description | 
|---|---|---|
| 1 | EPERM | Operation not permitted | 
| 2 | ENOENT | No such file or directory | 
| 3 | ESRCH | No such process | 
| 4 | EINTR | Interrupted system call | 
| 5 | EIO | I/O error | 
| 6 | ENXIO | No such device or address | 
| 7 | E2BIG | Arg list too long | 
| 8 | ENOEXEC | Exec format error | 
| 9 | EBADF | Bad file number | 
| 10 | ECHILD | No child processes | 
| 11 | EAGAIN | Try again | 
| 12 | ENOMEM | Out of memory | 
| 13 | EACCES | Permission denied | 
| 14 | EFAULT | Bad address | 
| 15 | ENOTBLK | Block device required | 
| 16 | EBUSY | Device or resource busy | 
| 17 | EEXIST | File exists | 
| 18 | EXDEV | Cross-device link | 
| 19 | ENODEV | No such device | 
| 20 | ENOTDIR | Not a directory | 
| 21 | EISDIR | Is a directory | 
| 22 | EINVAL | Invalid argument | 
| 23 | ENFILE | File table overflow | 
| 24 | EMFILE | Too many open files | 
| 25 | ENOTTY | Inappropriate ioctl for device | 
| 26 | ETXTBSY | Text file busy | 
| 27 | EFBIG | File too large | 
| 28 | ENOSPC | No space left on device | 
| 29 | ESPIPE | File descriptor does not allow seeking | 
| 30 | EROFS | Read-only file system | 
| 31 | EMLINK | Too many links | 
The following table provides a list of some common Windows system error codes. For a complete list, see the Microsoft Web site.
| Number | Macro | Description | 
|---|---|---|
| 1 | ERROR_INVALID_FUNCTION | Incorrect function. | 
| 2 | ERROR_FILE_NOT_FOUND | The system cannot find the file specified. | 
| 3 | ERROR_PATH_NOT_FOUND | The system cannot find the path specified. | 
| 4 | ERROR_TOO_MANY_OPEN_FILES | The system cannot open the file. | 
| 5 | ERROR_ACCESS_DENIED | Access is denied. | 
| 6 | ERROR_INVALID_HANDLE | The handle is invalid. | 
| 7 | ERROR_ARENA_TRASHED | The storage control blocks were destroyed. | 
| 8 | ERROR_NOT_ENOUGH_MEMORY | Not enough storage is available to process this command. | 
| 9 | ERROR_INVALID_BLOCK | The storage control block address is invalid. | 
| 10 | ERROR_BAD_ENVIRONMENT | The environment is incorrect. | 
| 11 | ERROR_BAD_FORMAT | An attempt was made to load a program with an incorrect format. | 
| 12 | ERROR_INVALID_ACCESS | The access code is invalid. | 
| 13 | ERROR_INVALID_DATA | The data is invalid. | 
| 14 | ERROR_OUTOFMEMORY | Not enough storage is available to complete this operation. | 
| 15 | ERROR_INVALID_DRIVE | The system cannot find the drive specified. | 
| 16 | ERROR_CURRENT_DIRECTORY | The directory cannot be removed. | 
| 17 | ERROR_NOT_SAME_DEVICE | The system cannot move the file to a different disk drive. | 
| 18 | ERROR_NO_MORE_FILES | There are no more files. | 
| 19 | ERROR_WRITE_PROTECT | The media is write protected. | 
| 20 | ERROR_BAD_UNIT | The system cannot find the device specified. | 
| 21 | ERROR_NOT_READY | The device is not ready. | 
| 22 | ERROR_BAD_COMMAND | The device does not recognize the command. | 
| 23 | ERROR_CRC | Data error (cyclic redundancy check). | 
| 24 | ERROR_BAD_LENGTH | The program issued a command but the command length is incorrect. | 
| 25 | ERROR_SEEK | The drive cannot locate a specific area or track on the disk. | 
| 26 | ERROR_NOT_DOS_DISK | The specified disk or diskette cannot be accessed. | 
| 27 | ERROR_SECTOR_NOT_FOUND | The drive cannot find the sector requested. | 
| 28 | ERROR_OUT_OF_PAPER | The printer is out of paper. | 
| 29 | ERROR_WRITE_FAULT | The system cannot write to the specified device. | 
| 30 | ERROR_READ_FAULT | The system cannot read from the specified device. | 
| 31 | ERROR_GEN_FAILURE | A device attached to the system is not functioning. | 
| 32 | ERROR_SHARING_VIOLATION | The process cannot access the file because it is being used by another process. | 
| 33 | ERROR_LOCK_VIOLATION | The process cannot access the file because another process has locked a portion of the file. | 
| 34 | ERROR_WRONG_DISK | The wrong diskette is in the drive. Insert %2 (Volume Serial Number: %3) into drive %1. | 
| 36 | ERROR_SHARING_BUFFER_EXCEEDED | Too many files opened for sharing. | 
| 38 | ERROR_HANDLE_EOF | Reached the end of the file. | 
| 39 | ERROR_HANDLE_DISK_FULL | The disk is full. | 
| 87 | ERROR_INVALID_PARAMETER | The parameter is incorrect. | 
| 112 | ERROR_DISK_FULL | The disk is full. | 
| 123 | ERROR_INVALID_NAME | The file name, directory name, or volume label syntax is incorrect. | 
| 1450 | ERROR_NO_SYSTEM_RESOURCES | Insufficient system resources exist to complete the requested service. |