
4074 lines
102 KiB

10 years ago
MDEV-14407 Assertion failure during rollback

Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS, which is what was chosen in MDEV-12288, corresponding to reset_trx_id.

No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432, which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED.

reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h.

btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id.

btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page.

ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage.

row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id.

row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields().

row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.)

trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0.

trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it.

dict_index_t::is_primary(): Check if the index is the primary key.

PageConverter::adjust_cluster_record(): Fix MDEV-15249 (Crash in MVCC read after IMPORT TABLESPACE) by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file.

There are a few functional differences; most notably:
(1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary.
(2) Some error messages will report file names instead of numeric IDs.

There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.

mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.

fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.

fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.

dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.

dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.

dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.

fil_ibd_open(): Return the tablespace.

fil_space_t::set_imported(): Replaces fil_space_set_imported().

truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.

row_truncate_prepare(): Merge to its only caller.

row_drop_table_from_cache(): Assert that the table is persistent.

dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.

row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K

The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix brings the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details.

This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to an earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility for how to reset the FSP_SPACE_FLAGS in the files.

NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for a detailed analysis.

Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20.

Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL.

FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR.

Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance.

When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files.

Remove many FSP_FLAGS_SET_ macros.

dict_tf_verify_flags(): Remove. This was basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion.

fil_space_t::mark: Remove. This flag was not used for anything.

fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags.

fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts.

fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer.

fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed().

FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in FSP_SPACE_FLAGS but only in memory.

fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem() and innobase_start_or_create_for_mysql(), except if --innodb-read-only is active.

fsp_flags_is_valid(ulint): Reimplement from scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files.

fsp_flags_convert_from_101(ulint): Convert flags from the buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format.

fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101().

dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction.

IMPORT TABLESPACE fixes:

AbstractCallback::init(): Convert the flags.

FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy.

PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup to shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. 
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix is bringing the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details. This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility how to reset the FSP_SPACE_FLAGS in the files. NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for detailed analysis. Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20. Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL. FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these will include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR. Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance. When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros. dict_tf_verify_flags(): Remove. 
This is basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion. fil_space_t::mark: Remove. This flag was not used for anything. fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags. fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts. fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer. fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed(). FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory. fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), innobase_start_or_create_for_mysql() except if --innodb-read-only is active. fsp_flags_is_valid(ulint): Reimplement from the scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files. fsp_flags_convert_from_101(ulint): Convert flags from buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format. fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101(). dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction. IMPORT TABLESPACE fixes: AbstractCallback::init(): Convert the flags. FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. 
Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy. PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix is bringing the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details. This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility how to reset the FSP_SPACE_FLAGS in the files. NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for detailed analysis. Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20. Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL. FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these will include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR. Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance. When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros. dict_tf_verify_flags(): Remove. 
This is basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion. fil_space_t::mark: Remove. This flag was not used for anything. fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags. fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts. fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer. fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed(). FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory. fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), innobase_start_or_create_for_mysql() except if --innodb-read-only is active. fsp_flags_is_valid(ulint): Reimplement from the scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files. fsp_flags_convert_from_101(ulint): Convert flags from buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format. fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101(). dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction. IMPORT TABLESPACE fixes: AbstractCallback::init(): Convert the flags. FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. 
Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy. PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup to shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. 
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup to shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. 
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K

The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix brings the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details.

This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to an earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility for how to reset the FSP_SPACE_FLAGS in the files.

NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for a detailed analysis.

Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20.

Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL.

FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR.

Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance.

When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files.

Remove many FSP_FLAGS_SET_ macros.

dict_tf_verify_flags(): Remove. This basically only duplicated the logic of dict_tf_to_fsp_flags(), used in a debug assertion.

fil_space_t::mark: Remove. This flag was not used for anything.

fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags.

fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts.

fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer.

fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed().

FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory.

fsp_flags_try_adjust(): New function to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), and innobase_start_or_create_for_mysql(), except when --innodb-read-only is active.

fsp_flags_is_valid(ulint): Reimplement from scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files.

fsp_flags_convert_from_101(ulint): Convert flags from the buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format.

fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101().

dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction.

IMPORT TABLESPACE fixes:

AbstractCallback::init(): Convert the flags.

FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy.

PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE and a missing or corrupted file.

There are a few functional differences; most notably:
(1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary.
(2) Some error messages will report file names instead of numeric IDs.

There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.

mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.

fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.

fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.

dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.

dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.

dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.

fil_ibd_open(): Return the tablespace.

fil_space_t::set_imported(): Replaces fil_space_set_imported().

truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.

row_truncate_prepare(): Merge into its only caller.

row_drop_table_from_cache(): Assert that the table is persistent.

dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.

row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-14407 Assertion failure during rollback

Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS, which is what was chosen in MDEV-12288, corresponding to reset_trx_id.

No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432, which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED.

reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h.

btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id.

btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page.

ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage.

row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id.

row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields().

row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.)

trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0.

trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it.

dict_index_t::is_primary(): Check if the index is the primary key.

PageConverter::adjust_cluster_record(): Fix MDEV-15249 (Crash in MVCC read after IMPORT TABLESPACE) by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
MDEV-11369 Instant ADD COLUMN for InnoDB

For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation.

ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.

Some usability limitations will be addressed in subsequent work:
MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used, or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX.

(2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields, introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.)

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN operations can use TRX_UNDO_UPD_EXIST_REC.

This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field are changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record.

len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.

dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.

dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value().

dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column.

dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name().

dict_index_t::n_core_fields: The original number of fields. For secondary indexes, and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.

dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).

dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not yet initialized from the clustered index root page.

dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value().

dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN.

dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back.

dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant().

dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN.

dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN.

prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. Dictionary objects will only be added to the cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to the cache.

dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and, if needed, set can_be_evicted.

dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point).

dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns().

pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns().

row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns().

create_index_dict(): Replaces row_merge_create_index_graph().

innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs.

btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary.

dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables.

dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE.

dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant().

row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE.

commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.)

PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This repurposes the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.

page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.

page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION.

page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.

page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION.

rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields.

rec_offs_make_valid(): Add the parameter 'leaf'.

rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present.

dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple().

rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields.

cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records.

trx_t::in_rollback: Make the field available in non-debug builds.

trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written.

btr_index_rec_validate(): Account for instant ADD COLUMN.

row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index.

row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case.

dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted.

btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed.

row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low().

rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them.

rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields.

btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'.

row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t.

rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'.

REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record.

rec_comp_status_t: An enum of the status bit values.

rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
8 years ago
MDEV-11369 Instant ADD COLUMN for InnoDB For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables. This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation. ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN. Some usability limitations will be addressed in subsequent work: MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE The format of the clustered index (PRIMARY KEY) is changed as follows: (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX. (2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. 
(If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked. (3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert. (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.) We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC. This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. 
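Point (3) above can be illustrated with a minimal sketch: when a clustered index leaf record physically stores fewer than n_fields fields, the missing trailing fields are resolved from the 'default row'. The structure and function names here are purely illustrative stand-ins, not the actual InnoDB API.

```cpp
// Illustrative sketch only: resolve a field of a clustered-index leaf
// record that may have been trimmed after instant ADD COLUMN.
#include <cstddef>
#include <string>
#include <vector>

struct sketch_index {
  size_t n_core_fields;              // fields before any instant ADD COLUMN
  std::vector<std::string> defaults; // values from the 'default row'
  size_t n_fields() const { return defaults.size(); }
};

// rec holds the fields physically present in the record, where
// n_core_fields <= rec.size() <= n_fields().
std::string field_value(const sketch_index& index,
                        const std::vector<std::string>& rec, size_t i)
{
  if (i < rec.size())
    return rec[i];          // the field is stored in the record
  return index.defaults[i]; // missing: fall back to the 'default row'
}
```

Only fields at positions >= n_core_fields can ever be missing, which is why the 'default row' values of the 'core' fields are to be ignored.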
The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field is changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively. When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index. UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record. len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL. dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT. dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value(). dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column. dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name(). dict_index_t::n_core_fields: The original number of fields. For secondary indexes and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields. dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable). 
dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page. dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value(). dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN. dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back. dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant(). dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN. dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN. prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. Dictionary objects will only be added to cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to cache. dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and if needed, set can_be_evicted. dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point). dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns(). pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns(). row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns(). create_index_dict(): Replaces row_merge_create_index_graph(). 
innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs. btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary. dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables. dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE. dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant(). row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE. commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.) PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This is repurposing the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B. page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT. page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION. page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION. page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION. rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields. rec_offs_make_valid(): Add the parameter 'leaf'. rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. 
Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present. dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple(). rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields. cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records. trx_t::in_rollback: Make the field available in non-debug builds. trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written. btr_index_rec_validate(): Account for instant ADD COLUMN. row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index. row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case. dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted. btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed. row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low(). 
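The PAGE_INSTANT encoding described earlier (13 bits carved out of the former 16-bit PAGE_DIRECTION field, whose low 3 bits remain in use for the direction) can be sketched like this. The helper names are modeled on the accessors listed in the commit message, but the exact bit layout is an assumption for illustration.

```cpp
#include <cstdint>

// Sketch: pack dict_index_t::n_core_fields into the upper 13 bits of the
// former 16-bit PAGE_DIRECTION field, preserving the low 3 direction bits.
constexpr unsigned PAGE_DIRECTION_BITS = 3;

constexpr uint16_t page_set_instant_sketch(uint16_t field,
                                           uint16_t n_core_fields)
{
  // n_core_fields must fit in 13 bits (< 8192).
  return uint16_t((n_core_fields << PAGE_DIRECTION_BITS)
                  | (field & ((1U << PAGE_DIRECTION_BITS) - 1)));
}

constexpr uint16_t page_get_instant_sketch(uint16_t field)
{
  return uint16_t(field >> PAGE_DIRECTION_BITS);
}
```

Round-tripping a value through the two accessors returns the original n_core_fields while the 3 direction bits are untouched.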
rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them. rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields. btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'. btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'. row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t. rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'. REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record. rec_comp_status_t: An enum of the status bit values. rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
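The REC_INFO_DEFAULT_ROW constant above is a plain bitwise OR of the info-bits flag and the new status value. The following sketch spells that out; the numeric values are assumptions chosen for illustration, not quoted from the source tree.

```cpp
// Sketch of rec_comp_status_t and the combined info_bits constant;
// the numeric values are assumed for illustration.
enum rec_comp_status_t {
  REC_STATUS_ORDINARY = 0,
  REC_STATUS_NODE_PTR = 1,
  REC_STATUS_INFIMUM = 2,
  REC_STATUS_SUPREMUM = 3,
  REC_STATUS_COLUMNS_ADDED = 4
};

constexpr unsigned REC_INFO_MIN_REC_FLAG = 0x10; // assumed value
constexpr unsigned REC_INFO_DEFAULT_ROW =
    REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED;
```

Combining the two bits lets a single test distinguish the 'default row' record from both ordinary user records and an ordinary minimum record.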
8 years ago
Merge Google encryption commit 195158e9889365dc3298f8c1f3bcaa745992f27f Author: Minli Zhu <minliz@google.com> Date: Mon Nov 25 11:05:55 2013 -0800 Innodb redo log encryption/decryption. Use start lsn of a log block as part of AES CTR counter. Record key version with each checkpoint. Internally key version 0 means no encryption. Tests done (see test_innodb_log_encryption.sh for detail): - Verify flag innodb_encrypt_log on or off, combined with various key versions passed through CLI, and dynamically set after startup, will not corrupt database. This includes tests from being unencrypted to encrypted, and encrypted to unencrypted. - Verify start-up with no redo logs succeeds. - Verify fresh start-up succeeds. Change-Id: I4ce4c2afdf3076be2fce90ebbc2a7ce01184b612 commit c1b97273659f07866758c25f4a56f680a1fbad24 Author: Jonas Oreland <jonaso@google.com> Date: Tue Dec 3 18:47:27 2013 +0100 encryption of aria data & index files this patch implements encryption of aria data & index files. this is implemented as 1) add read/write hooks (renamed from callbacks) that do encrypt/decrypt (also add pre_read and post_write hooks) 2) modify page headers for data/index to contain key version (making the data-page header size different for with/without encryption) 3) modify index page 0 to contain IV (and crypt header) 4) AES CTR crypt functions 5) counter block is implemented using combination of page no, lsn and table specific id NOTE: 1) log files are not encrypted; this is not needed if aria is only used for internal temporary tables and they are not transactional (i.e. not logged) 2) all encrypted tables are using PAGE_CHECKSUM (crc); normal internal temporary tables are (currently) not CHECKSUM:ed 3) This patch adds insert-order semantics to aria block_format. The default behaviour of aria block-format is best-fit, meaning that rows get allocated to pages trying to fill the pages as much as possible. 
However, certain sql constructs materialize a temporary result in tmp-tables, and expect that a table scan will later return the rows in the same order they were inserted. This implementation of insert-order is only enabled when explicitly requested by the sql-layer. CHANGES: 1) found a bug in ma_write that made the code try to abort a record that was never written; unsure why this is not exposed Change-Id: Ia82bbaa92e2c0629c08693c5add2f56b815c0509 commit 89dc1ab651fe0205d55b4eb588f62df550aa65fc Author: Jonas Oreland <jonaso@google.com> Date: Mon Feb 17 08:04:50 2014 -0800 Implement encryption of innodb datafiles. Pages are encrypted before being written to disk and decrypted when read from disk. Each page except the first page (page 0) in a tablespace is encrypted. Page 0 is unencrypted and contains the IV for the tablespace. FIL_PAGE_FILE_FLUSH_LSN on each page (except page 0) is used to store a 32-bit key version, so that multiple keys can be active in a tablespace simultaneously. The other 32 bits of the FIL_PAGE_FILE_FLUSH_LSN field contain a checksum that is computed after encryption. This checksum is used by innochecksum and when restoring from the double-write-buffer. The encryption is performed using AES CTR. Monitoring of encryption is enabled using the new IS-table INNODB_TABLESPACES_ENCRYPTION. In addition to that, new status variables innodb_encryption_rotation_{ pages_read_from_cache, pages_read_from_disk, pages_modified, pages_flushed } have been added. The following tunables are introduced: - innodb_encrypt_tables - innodb_encryption_threads - innodb_encryption_rotate_key_age - innodb_encryption_rotation_iops Change-Id: I8f651795a30b52e71b16d6bc9cb7559be349d0b2 commit a17eef2f6948e58219c9e26fc35633d6fd4de1de Author: Andrew Ford <andrewford@google.com> Date: Thu Jan 2 15:43:09 2014 -0800 Key management skeleton with debug hooks. 
Change-Id: Ifd6aa3743d7ea291c70083f433a059c439aed866 commit 68a399838ad72264fd61b3dc67fecd29bbdb0af1 Author: Andrew Ford <andrewford@google.com> Date: Mon Oct 28 16:27:44 2013 -0700 Add AES-128 CTR and GCM encryption classes. Change-Id: I116305eced2a233db15306bc2ef5b9d398d1a3a2
11 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix brings the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details. This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility for how to reset the FSP_SPACE_FLAGS in the files. NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for a detailed analysis. Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20. Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL. FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these will include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR. Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance. When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros. dict_tf_verify_flags(): Remove. 
This is basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion. fil_space_t::mark: Remove. This flag was not used for anything. fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags. fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts. fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer. fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed(). FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory. fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), innobase_start_or_create_for_mysql() except if --innodb-read-only is active. fsp_flags_is_valid(ulint): Reimplement from scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files. fsp_flags_convert_from_101(ulint): Convert flags from the buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format. fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101(). dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction. IMPORT TABLESPACE fixes: AbstractCallback::init(): Convert the flags. FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. 
Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy. PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
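The split between on-disk FSP_SPACE_FLAGS and memory-only flags can be sketched as below. PAGE_COMPRESSION at bit position 16 follows the commit message; the positions of the memory-only flags and the helper name fsp_flags_on_disk are assumptions made up for this illustration.

```cpp
#include <cstdint>

// Sketch (bit positions partly assumed): memory-only attributes live in
// fil_space_t::flags but must never reach FSP_SPACE_FLAGS on disk.
constexpr uint32_t FSP_FLAGS_POS_PAGE_COMPRESSION = 16; // per commit message

// Hypothetical positions for the memory-only flags:
constexpr uint32_t FSP_FLAGS_MEM_DATA_DIR          = 1U << 25;
constexpr uint32_t FSP_FLAGS_MEM_ATOMIC_WRITES     = 3U << 26;
constexpr uint32_t FSP_FLAGS_MEM_COMPRESSION_LEVEL = 15U << 28;

constexpr uint32_t FSP_FLAGS_MEM_MASK =
    FSP_FLAGS_MEM_DATA_DIR | FSP_FLAGS_MEM_ATOMIC_WRITES
    | FSP_FLAGS_MEM_COMPRESSION_LEVEL;

// What would actually be written to FSP_SPACE_FLAGS in page 0:
constexpr uint32_t fsp_flags_on_disk(uint32_t mem_flags)
{
  return mem_flags & ~FSP_FLAGS_MEM_MASK;
}
```

Because the memory-only bits are masked out before writing, PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR have to be re-derived from dict_table_t::flags (or from innodb_compression_level during crash recovery) whenever the tablespace is opened.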
9 years ago
MDEV-13328 ALTER TABLE…DISCARD TABLESPACE takes a lot of time With a big buffer pool that contains many data pages, DISCARD TABLESPACE took a long time, because it would scan the entire buffer pool to remove any pages that belong to the tablespace; this was especially wasteful when the table to discard was empty. The minimum amount of work that DISCARD TABLESPACE must do is to remove the pages of the to-be-discarded table from the buf_pool->flush_list, because any writes to the data file must be prevented before the file is deleted. If DISCARD TABLESPACE does not evict the pages from the buffer pool, then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD, not-yet-evicted pages from being mistaken for pages of the imported tablespace. It would not be a useful fix to simply move the buffer pool scan to the IMPORT TABLESPACE step. What we can do is to actively evict those pages that could be mistaken for imported pages. In this way, when importing a small table into a big buffer pool, the import should still run relatively fast. Import bypasses the buffer pool when reading pages for the adjustment phase. In the adjustment phase, if a page exists in the buffer pool, we could replace it with the page from the imported file. Unfortunately I did not get this to work properly, so instead we will simply evict any matching page from the buffer pool. buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode where the requested page will be evicted if it is found. There must be no unwritten changes for the page. buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write to file is desired, and use a separate parameter bool drop_ahi. buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Replace buf_remove_t. buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove. PageConverter::m_mtr: A dummy mini-transaction buffer. PageConverter::PageConverter(): Complete the member initialization list.
PageConverter::operator()(): Evict any 'shadow' pages from the buffer pool so that pre-existing (garbage) pages cannot be mistaken for pages that exist in the being-imported file. row_discard_tablespace(): Remove a bogus comment that seems to refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
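The BUF_EVICT_IF_IN_POOL mode described above can be sketched as a toy model. This is not InnoDB's buf_page_get_gen(); the class shape and names are illustrative assumptions that only capture the stated contract (evict the page if cached, and never a page with unwritten changes):

```python
# Toy model of the BUF_EVICT_IF_IN_POOL contract (illustrative only).
class BufferPool:
    def __init__(self):
        self.pages = {}      # (space_id, page_no) -> page payload
        self.dirty = set()   # pages with unwritten changes (flush_list)

    def evict_if_in_pool(self, page_id):
        """Evict page_id if it is cached; return True if it was evicted.

        A page with unwritten changes must never be evicted this way:
        DISCARD TABLESPACE is responsible for removing such pages from
        the flush list before the data file is deleted.
        """
        if page_id not in self.pages:
            return False
        assert page_id not in self.dirty, "cannot evict a dirty page"
        del self.pages[page_id]
        return True
```

During IMPORT, this eviction prevents a stale pre-DISCARD page from shadowing the page that exists in the imported file, without scanning the whole pool.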
8 years ago
MDEV-12253: Buffer pool blocks are accessed after they have been freed The problem was that bpage was referenced after it had already been freed from the LRU. Fixed by adding a new variable 'encrypted' that is passed down to buf_page_check_corrupt() and used in buf_page_get_gen() to stop processing the page read. This patch should also address the following test failures and bugs: MDEV-12419: IMPORT should not look up tablespace in PageConverter::validate(). This is now removed. MDEV-10099: encryption.innodb_onlinealter_encryption fails sporadically in buildbot MDEV-11420: encryption.innodb_encryption-page-compression failed in buildbot MDEV-11222: encryption.encrypt_and_grep failed in buildbot on P8 Removed dict_table_t::is_encrypted and dict_table_t::ibd_file_missing and replaced these with dict_table_t::file_unreadable. The table's ibd file is considered missing if fil_get_space(space_id) returns NULL, and encrypted if it does not. Removed the dict_table_t::is_corrupted field. Ported the FilSpace class from 10.2 and used it in buf_page_check_corrupt(), buf_page_decrypt_after_read(), buf_page_encrypt_before_write(), buf_dblwr_process(), buf_read_page(), dict_stats_save_defrag_stats(). Added test cases where an encrypted page could be read while doing redo log crash recovery. Also added a test case for row-compressed BLOBs. btr_cur_open_at_index_side_func(), btr_cur_open_at_rnd_pos_func(): Avoid referencing a block that is NULL. buf_page_get_zip(): Issue an error if the page read fails. buf_page_get_gen(): Use dberr_t for error detection, and do not reference bpage after we have freed it. buf_mark_space_corrupt(): Remove bpage from the LRU also when it is encrypted. buf_page_check_corrupt(): @return DB_SUCCESS if the page has been read and is not corrupted; DB_PAGE_CORRUPTED if the page is corrupted based on the checksum check; DB_DECRYPTION_FAILED if the post-encryption checksum matches but the normal page checksum does not match after decryption. In the read case only DB_SUCCESS is possible. buf_page_io_complete(): Use dberr_t for error handling. 
buf_flush_write_block_low(), buf_read_ahead_random(), buf_read_page_async(), buf_read_ahead_linear(), buf_read_ibuf_merge_pages(), buf_read_recv_pages(), fil_aio_wait(): Issue an error if the page read fails. btr_pcur_move_to_next_page(): Do not reference the page if it is NULL. Introduced dict_table_t::is_readable() and dict_index_t::is_readable(), which return true if the tablespace exists, pages read from the tablespace are not corrupted, and page decryption did not fail. Removed buf_page_t::key_version. After page decryption the key version is not removed from the page frame. For unencrypted pages, the old key_version is removed in buf_page_encrypt_before_write(). dict_stats_update_transient_for_index(), dict_stats_update_transient(): Do not continue if table decryption failed or the table is corrupted. dict0stats.cc: Introduced a dict_stats_report_error() function to avoid code duplication. fil_parse_write_crypt_data(): Check that the key read from the redo log entry is found in the encryption plugin; if it is not, refuse to start. PageConverter::validate(): Removed access to fil_space_t, as the tablespace is not available during import. Fixed the error code in the innodb.innodb test. Merged the test cases innodb-bad-key-change5 and innodb-bad-key-shutdown into innodb-bad-key-change2. Removed the innodb-bad-key-change5 test. Reduced unnecessary complexity in some long-running tests. Removed the fil_inc_pending_ops(), fil_decr_pending_ops(), fil_get_first_space(), fil_get_next_space(), fil_get_first_space_safe(), fil_get_next_space_safe() functions. fil_space_verify_crypt_checksum(): Fixed a bug found using ASAN where the FIL_PAGE_END_LSN_OLD_CHECKSUM field was incorrectly accessed for row-compressed tables. Fixed an out-of-page-frame bug for row-compressed tables in fil_space_verify_crypt_checksum(), found using ASAN: an incorrect function was called for compressed tables. Added new tests for discard, rename table and drop (we should allow them even when page decryption fails). ALTER TABLE…RENAME is not allowed. 
Added a test for restart with innodb-force-recovery=1 when a page read during redo recovery cannot be decrypted. Added a test for a corrupted table where both the page data and FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION are corrupted. Adjusted the test case innodb_bug14147491 so that it no longer expects a crash; instead, the table is mostly unusable. fil0fil.h: fil_space_acquire_low() is not a visible function; fil_space_acquire() and fil_space_acquire_silent() are inline functions. The FilSpace class uses fil_space_acquire_low() directly. recv_apply_hashed_log_recs() does not return anything.
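The three-way return contract documented for buf_page_check_corrupt() above can be sketched as a small dispatch. This is an illustrative model of the stated contract, not the InnoDB function; the parameter names are assumptions:

```python
# Sketch of the documented buf_page_check_corrupt() return contract.
# Inputs are assumptions: whether the decrypted page passes the normal
# checksum, and whether the post-encryption checksum matched.
DB_SUCCESS, DB_PAGE_CORRUPTED, DB_DECRYPTION_FAILED = 0, 39, 175  # codes illustrative

def check_corrupt(normal_checksum_ok, post_encryption_checksum_ok):
    if normal_checksum_ok:
        # Page has been read and is not corrupted.
        return DB_SUCCESS
    if post_encryption_checksum_ok:
        # The encrypted image is intact, but decryption produced a page
        # that fails the normal checksum: most likely a wrong key.
        return DB_DECRYPTION_FAILED
    # The page itself fails the checksum check.
    return DB_PAGE_CORRUPTED
```

Distinguishing DB_DECRYPTION_FAILED from DB_PAGE_CORRUPTED is what lets the caller report a key problem instead of marking the tablespace as physically corrupted.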
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. 
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
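The split between dict_table_t::space (a direct object reference) and dict_table_t::space_id (a numeric fallback for DISCARDed, missing or corrupted files) can be sketched as follows. The class shapes are illustrative assumptions, not the InnoDB definitions:

```python
# Illustrative model of MDEV-12266: a direct fil_space_t reference plus a
# numeric space_id for the corner cases where no tablespace object exists.
class FilSpace:
    """Stand-in for fil_space_t: always resident in the fil_system cache."""
    def __init__(self, space_id, name):
        self.id = space_id
        self.name = name   # data file name

class DictTable:
    """Stand-in for dict_table_t."""
    def __init__(self, space_id, space=None):
        self.space_id = space_id  # always available, numeric
        self.space = space        # FilSpace or None (discarded/missing file)

    def describe(self):
        # Error messages can now report file names instead of numeric IDs,
        # falling back to the ID when there is no tablespace.
        return self.space.name if self.space else "space %d" % self.space_id
```

Because the fil_space_t objects stay in main memory between startup and shutdown, the pointer never dangles except through DDL, which also updates the dictionary cache.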
8 years ago
MDEV-12873 InnoDB SYS_TABLES.TYPE incompatibility for PAGE_COMPRESSED=YES in MariaDB 10.2.2 to 10.2.6 Remove the SHARED_SPACE flag that was erroneously introduced in MariaDB 10.2.2, and shift the SYS_TABLES.TYPE flags back to where they were before MariaDB 10.2.2. While doing this, ensure that tables created with affected MariaDB versions can be loaded, and also ensure that tables created with MySQL 5.7 using the TABLESPACE attribute cannot be loaded. MariaDB 10.2.2 picked the SHARED_SPACE flag from MySQL 5.7, shifting the MariaDB 10.1 flags PAGE_COMPRESSION, PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES by one bit. The SHARED_SPACE flag would always be written as 0 by MariaDB, because MariaDB does not support CREATE TABLESPACE or CREATE TABLE...TABLESPACE for InnoDB. So, instead of the bits AALLLLCxxxxxxx we would have AALLLLC0xxxxxxx if the table was created with MariaDB 10.2.2 to 10.2.6. (AA=ATOMIC_WRITES, LLLL=PAGE_COMPRESSION_LEVEL, C=PAGE_COMPRESSED, xxxxxxx=7 bits that were not moved.) PAGE_COMPRESSED=NO implies LLLLC=00000. That is not a problem. If someone created a table in MariaDB 10.2.2 or 10.2.3 with the attribute ATOMIC_WRITES=OFF (value 2; AA=10) and without PAGE_COMPRESSED=YES or PAGE_COMPRESSION_LEVEL, the table should be rejected. We ignore this problem, because it should be unlikely for anyone to specify ATOMIC_WRITES=OFF, and because 10.2.2 and 10.2.3 were not mature releases. The value ATOMIC_WRITES=ON (1) would be interpreted as ATOMIC_WRITES=OFF, but starting with MariaDB 10.2.4 the ATOMIC_WRITES attribute is ignored. PAGE_COMPRESSED=YES requires that PAGE_COMPRESSION_LEVEL be between 1 and 9 and that ROW_FORMAT be COMPACT or DYNAMIC. Thus, the affected wrong bit pattern in SYS_TABLES.TYPE is of the form AALLLL10DB00001 where D signals the presence of a DATA DIRECTORY attribute and B is 1 for ROW_FORMAT=DYNAMIC and 0 for ROW_FORMAT=COMPACT. We must interpret this bit pattern as AALLLL1DB00001 (discarding the extraneous 0 bit). 
dict_sys_tables_rec_read(): Adjust the affected bit pattern when reading the SYS_TABLES.TYPE column. In case of invalid flags, report both SYS_TABLES.TYPE (after possible adjustment) and SYS_TABLES.MIX_LEN. dict_load_table_one(): Replace an unreachable condition on !dict_tf2_is_valid() with a debug assertion. The flags will already have been validated by dict_sys_tables_rec_read(); if that validation fails, dict_load_table_low() will have failed. fil_ibd_create(): Shorten an error message about a file pre-existing. Datafile::validate_to_dd(): Clarify an error message about tablespace flags mismatch. ha_innobase::open(): Remove an unnecessary warning message. dict_tf_is_valid(): Simplify the logic and make it stricter. Validate the values of PAGE_COMPRESSION. Remove error log output; let the callers handle that. DICT_TF_BITS: Remove ATOMIC_WRITES, PAGE_ENCRYPTION, PAGE_ENCRYPTION_KEY. The ATOMIC_WRITES is ignored once the SYS_TABLES.TYPE has been validated; there is no need to store it in dict_table_t::flags. The PAGE_ENCRYPTION and PAGE_ENCRYPTION_KEY are unused since MariaDB 10.1.4 (the GA release was 10.1.8). DICT_TF_BIT_MASK: Remove (unused). FSP_FLAGS_MEM_ATOMIC_WRITES: Remove (the flags are never read). row_import_read_v1(): Display an error if dict_tf_is_valid() fails.
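The bit-pattern adjustment described above (AALLLL10DB00001 read back as AALLLL1DB00001) amounts to dropping the extraneous SHARED_SPACE zero bit and shifting the bits above it down by one. The sketch below models that transformation; the bit positions are derived from the pattern in the message, not copied from the source:

```python
# Sketch of the SYS_TABLES.TYPE adjustment for the affected pattern
# AALLLL10DB00001 (bit 7 is the extraneous SHARED_SPACE zero;
# C/LLLL/AA sit one bit too high, starting at bit 8). Positions are
# assumptions inferred from the pattern described in the commit message.
def adjust_sys_tables_type(wrong_type):
    low = wrong_type & 0x7F          # bits 0..6: the 7 bits that never moved
    high = (wrong_type >> 8) << 7    # drop bit 7, move C, LLLL, AA down by one
    return high | low
```

This sketch only handles a value already known to match the affected pattern; the real dict_sys_tables_rec_read() first has to decide whether the adjustment applies at all.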
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE and a missing or corrupted file.

There are a few functional differences; most notably:
(1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary.
(2) Some error messages will report file names instead of numeric IDs.

There are still many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.

mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.

fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.

fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.

dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.

dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.

dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.

fil_ibd_open(): Return the tablespace.

fil_space_t::set_imported(): Replaces fil_space_set_imported().

truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.

row_truncate_prepare(): Merge into its only caller.

row_drop_table_from_cache(): Assert that the table is persistent.

dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.

row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-12219 Discard temporary undo logs at transaction commit

Starting with MySQL 5.7, temporary tables in InnoDB are handled differently from persistent tables. Because temporary tables are private to a connection, concurrency control and multi-versioning (MVCC) are not applicable. For performance reasons, purge is disabled as well. Rollback is supported for temporary tables; that is why we have the temporary undo logs in the first place.

Because MVCC and purge are disabled for temporary tables, we should discard all temporary undo logs already at transaction commit, just like we discard the persistent insert_undo logs. Before this change, update_undo logs were being preserved.

trx_temp_undo_t: A wrapper for temporary undo logs, comprising a rollback segment and a single temporary undo log.

trx_rsegs_t::m_noredo: Use trx_temp_undo_t. (Instead of insert_undo, update_undo, there will be a single undo.)

trx_is_noredo_rseg_updated(), trx_is_rseg_assigned(): Remove.

trx_undo_add_page(): Remove the parameter undo_ptr. Acquire and release the rollback segment mutex inside the function.

trx_undo_free_last_page(): Remove the parameter trx.

trx_undo_truncate_end(): Remove the parameter trx, and add the parameter is_temp. Clean up the code a bit.

trx_undo_assign_undo(): Split the parameter undo_ptr into rseg, undo.

trx_undo_commit_cleanup(): Renamed from trx_undo_insert_cleanup(). Replace the parameter undo_ptr with undo. This will discard the temporary undo or insert_undo log at commit/rollback.

trx_purge_add_update_undo_to_history(), trx_undo_update_cleanup(): Remove 3 parameters. Always operate on the persistent update_undo.

trx_serialise(): Renamed from trx_serialisation_number_get().

trx_write_serialisation_history(): Simplify the code flow. If there are no persistent changes, do not update MONITOR_TRX_COMMIT_UNDO.

trx_commit_in_memory(): Simplify the logic, and add assertions.

trx_undo_page_report_modify(): Keep a direct reference to the persistent update_undo log.

trx_undo_report_row_operation(): Simplify some code. Always assign TRX_UNDO_INSERT for temporary undo logs.

trx_prepare_low(): Keep only one parameter. Prepare all 3 undo logs.

trx_roll_try_truncate(): Remove the parameter undo_ptr. Try to truncate all 3 undo logs of the transaction.

trx_roll_pop_top_rec_of_trx_low(): Remove.

trx_roll_pop_top_rec_of_trx(): Remove the redundant parameter trx->roll_limit. Clear roll_limit when exhausting the undo logs. Consider all 3 undo logs at once, prioritizing the persistent undo logs.

row_undo(): Minor cleanup. Let trx_roll_pop_top_rec_of_trx() reset the trx->roll_limit.
9 years ago
MDEV-12288 Reset DB_TRX_ID when the history is removed, to speed up MVCC

Let InnoDB purge reset DB_TRX_ID,DB_ROLL_PTR when the history is removed.

[TODO: It appears that the resetting is not taking place as often as it could be. We should test that a simple INSERT should eventually cause row_purge_reset_trx_id() to be invoked unless DROP TABLE is invoked soon enough.]

The InnoDB clustered index record system columns DB_TRX_ID,DB_ROLL_PTR are used by multi-versioning. After the history is no longer needed, these columns can safely be reset to 0 and 1<<55 (to indicate a fresh insert). When a reader sees 0 in the DB_TRX_ID column, it can instantly determine that the record is present in the read view. There is no need to acquire the transaction system mutex to check if the transaction exists, because writes can never be conducted by a transaction whose ID is 0.

The persistent InnoDB undo log used to be split into two parts: insert_undo and update_undo. The insert_undo log was discarded at transaction commit or rollback, and the update_undo log was processed by the purge subsystem. As part of this change, we will only generate a single undo log for new transactions, and the purge subsystem will reset the DB_TRX_ID whenever a clustered index record is touched. That is, all persistent undo log will be preserved at transaction commit or rollback, to be removed by purge.

The InnoDB redo log format is changed in two ways: we remove the redo log record type MLOG_UNDO_HDR_REUSE, and we introduce the MLOG_ZIP_WRITE_TRX_ID record for updating the DB_TRX_ID,DB_ROLL_PTR in a ROW_FORMAT=COMPRESSED table.

This also changes the format of persistent InnoDB data files: undo log and clustered index leaf page records. It will still be possible via import and export to exchange data files with earlier versions of MariaDB. The change to clustered index leaf page records is simple: we allow DB_TRX_ID to be 0.

When it comes to the undo log, we must be able to upgrade from earlier MariaDB versions after a clean shutdown (no redo log to apply). While it would be nice to perform a slow shutdown (innodb_fast_shutdown=0) before an upgrade, to empty the undo logs, we cannot assume that this has been done. So, separate insert_undo logs may exist for recovered uncommitted transactions. These transactions may be automatically rolled back, or they may be in XA PREPARE state, in which case InnoDB will preserve the transaction until an explicit XA COMMIT or XA ROLLBACK.

Upgrade has been tested by starting up MariaDB 10.2 with ./mysql-test-run --manual-gdb innodb.read_only_recovery and then starting up this patched server with and without --innodb-read-only.

trx_undo_ptr_t::undo: Renamed from update_undo.

trx_undo_ptr_t::old_insert: Renamed from insert_undo.

trx_rseg_t::undo_list: Renamed from update_undo_list.

trx_rseg_t::undo_cached: Merged from update_undo_cached and insert_undo_cached.

trx_rseg_t::old_insert_list: Renamed from insert_undo_list.

row_purge_reset_trx_id(): New function to reset the columns. This will be called for all undo processing in purge that does not remove the clustered index record.

trx_undo_update_rec_get_update(): Allow trx_id=0 when copying the old DB_TRX_ID of the record to the undo log.

ReadView::changes_visible(): Allow id==0. (Return true for it. This is what speeds up the MVCC.)

row_vers_impl_x_locked_low(), row_vers_build_for_semi_consistent_read(): Implement a fast path for DB_TRX_ID=0.

Always initialize the TRX_UNDO_PAGE_TYPE to 0. Remove undo->type.

MLOG_UNDO_HDR_REUSE: Remove. This changes the redo log format!

innobase_start_or_create_for_mysql(): Set srv_undo_sources before starting any transactions.

The parsing of the MLOG_ZIP_WRITE_TRX_ID record was successfully tested by running the following:
./mtr --parallel=auto --mysqld=--debug=d,ib_log innodb_zip.bug56680
grep MLOG_ZIP_WRITE_TRX_ID var/*/log/mysqld.1.err
8 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup to shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. 
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file.

There are a few functional differences; most notably:
(1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary.
(2) Some error messages will report file names instead of numeric IDs.

There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.

mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.

fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.

fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.

dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter.

dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.

dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.

fil_ibd_open(): Return the tablespace.

fil_space_t::set_imported(): Replaces fil_space_set_imported().

truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.

row_truncate_prepare(): Merge to its only caller.

row_drop_table_from_cache(): Assert that the table is persistent.

dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.

row_import_update_discarded_flag(): Remove a constant parameter.
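The core of the change, a direct tablespace pointer with a numeric ID kept for the tablespace-less corner cases, can be sketched with a minimal standalone model. The struct layouts below are illustrative stand-ins, not the real InnoDB definitions:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Simplified stand-in for InnoDB's fil_space_t (a cached tablespace).
struct fil_space_t {
    uint32_t    id;    // numeric tablespace ID
    std::string name;  // data file name
};

// Sketch of the MDEV-12266 shape: dict_table_t keeps a direct pointer
// to the tablespace object, plus a numeric space_id for the corner
// cases (DISCARD TABLESPACE, missing or corrupted file) where no
// fil_space_t exists.
struct dict_table_t {
    fil_space_t* space;     // direct pointer; nullptr if discarded/missing
    uint32_t     space_id;  // remains valid even without a tablespace

    bool is_readable() const { return space != nullptr; }
};
```

With this split, error messages can still report the numeric `space_id` (or the file name, when the pointer is available) after the `fil_space_t` has gone away.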
8 years ago
MDEV-12253: Buffer pool blocks are accessed after they have been freed

The problem was that bpage was referenced after it had already been freed from the LRU. Fixed by adding a new variable 'encrypted' that is passed down to buf_page_check_corrupt() and used in buf_page_get_gen() to stop processing the page read.

This patch should also address the following test failures and bugs:
MDEV-12419: IMPORT should not look up tablespace in PageConverter::validate(). This is now removed.
MDEV-10099: encryption.innodb_onlinealter_encryption fails sporadically in buildbot
MDEV-11420: encryption.innodb_encryption-page-compression failed in buildbot
MDEV-11222: encryption.encrypt_and_grep failed in buildbot on P8

Removed dict_table_t::is_encrypted and dict_table_t::ibd_file_missing and replaced these with dict_table_t::file_unreadable. The table's .ibd file is missing if fil_get_space(space_id) returns NULL, and encrypted if not. Removed the dict_table_t::is_corrupted field.

Ported the FilSpace class from 10.2 and used it in buf_page_check_corrupt(), buf_page_decrypt_after_read(), buf_page_encrypt_before_write(), buf_dblwr_process(), buf_read_page(), dict_stats_save_defrag_stats().

Added test cases where an encrypted page could be read while doing redo log crash recovery. Also added a test case for row-compressed BLOBs.

btr_cur_open_at_index_side_func(), btr_cur_open_at_rnd_pos_func(): Avoid referencing a block that is NULL.

buf_page_get_zip(): Issue an error if the page read fails.

buf_page_get_gen(): Use dberr_t for error detection and do not reference bpage after we have freed it.

buf_mark_space_corrupt(): Remove bpage from the LRU also when it is encrypted.

buf_page_check_corrupt(): @return DB_SUCCESS if the page has been read and is not corrupted; DB_PAGE_CORRUPTED if the page is corrupted based on the checksum check; DB_DECRYPTION_FAILED if the post-encryption checksum matches but after decryption the normal page checksum does not match. In the read case only DB_SUCCESS is possible.

buf_page_io_complete(): Use dberr_t for error handling.

buf_flush_write_block_low(), buf_read_ahead_random(), buf_read_page_async(), buf_read_ahead_linear(), buf_read_ibuf_merge_pages(), buf_read_recv_pages(), fil_aio_wait(): Issue an error if the page read fails.

btr_pcur_move_to_next_page(): Do not reference the page if it is NULL.

Introduced dict_table_t::is_readable() and dict_index_t::is_readable(), which return true if the tablespace exists and pages read from the tablespace are not corrupted and page decryption did not fail.

Removed buf_page_t::key_version. After page decryption the key version is not removed from the page frame. For unencrypted pages, the old key_version is removed at buf_page_encrypt_before_write().

dict_stats_update_transient_for_index(), dict_stats_update_transient(): Do not continue if table decryption failed or the table is corrupted.

dict0stats.cc: Introduced a dict_stats_report_error function to avoid code duplication.

fil_parse_write_crypt_data(): Check that the key read from the redo log entry is found by the encryption plugin, and if it is not, refuse to start.

PageConverter::validate(): Removed access to fil_space_t, as the tablespace is not available during import.

Fixed the error code in the innodb.innodb test. Merged the test cases innodb-bad-key-change5 and innodb-bad-key-shutdown into innodb-bad-key-change2; removed the innodb-bad-key-change5 test. Decreased unnecessary complexity in some long-lasting tests.

Removed the fil_inc_pending_ops(), fil_decr_pending_ops(), fil_get_first_space(), fil_get_next_space(), fil_get_first_space_safe(), fil_get_next_space_safe() functions.

fil_space_verify_crypt_checksum(): Fixed a bug found using ASAN where the FIL_PAGE_END_LSN_OLD_CHECKSUM field was incorrectly accessed for row-compressed tables. Also fixed an out-of-page-frame bug for row-compressed tables, likewise found using ASAN: an incorrect function was called for compressed tables.

Added new tests for DISCARD, RENAME TABLE and DROP (we should allow them even when page decryption fails). ALTER TABLE ... RENAME is not allowed.

Added a test for restart with innodb-force-recovery=1 when a page read during redo recovery cannot be decrypted. Added a test for a corrupted table where both the page data and FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION are corrupted.

Adjusted the test case innodb_bug14147491 so that it no longer expects a crash; instead the table is just mostly unusable.

fil0fil.h: fil_space_acquire_low() is not a visible function; fil_space_acquire() and fil_space_acquire_silent() are inline functions. The FilSpace class uses fil_space_acquire_low() directly.

recv_apply_hashed_log_recs() does not return anything.
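The three-way return value described for buf_page_check_corrupt() can be sketched as follows. This is a simplified model: the enum values mirror the dberr_t codes named above, but the struct and the checksum predicates are illustrative stand-ins, not the real InnoDB routines:

```cpp
#include <cassert>

// Simplified stand-ins for the InnoDB error codes named in the patch.
enum dberr_t {
    DB_SUCCESS,
    DB_PAGE_CORRUPTED,    // checksum-based corruption check failed
    DB_DECRYPTION_FAILED  // post-encryption checksum OK, but the page
                          // does not check out after decryption
};

// Hypothetical summary of the checks performed on a page after read.
struct page_state_t {
    bool encrypted;          // page came from an encrypted tablespace
    bool crypt_checksum_ok;  // post-encryption checksum matched
    bool plain_checksum_ok;  // normal page checksum matched (after decrypt)
};

// Sketch of the decision order: report a decryption failure only when
// the encrypted page itself looked intact on disk; otherwise fall back
// to plain corruption.
dberr_t check_page(const page_state_t& p) {
    if (p.encrypted && p.crypt_checksum_ok && !p.plain_checksum_ok) {
        return DB_DECRYPTION_FAILED;
    }
    if (!p.plain_checksum_ok) {
        return DB_PAGE_CORRUPTED;
    }
    return DB_SUCCESS;
}
```

Distinguishing DB_DECRYPTION_FAILED from DB_PAGE_CORRUPTED is what lets the caller refuse the read (e.g. with a "wrong key" message) instead of marking the tablespace corrupted.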
9 years ago
MDEV-6076 Persistent AUTO_INCREMENT for InnoDB

This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with the notable difference that the file format changes are limited to repurposing a previously unused data field in B-tree pages.

For persistent InnoDB tables, write the last used AUTO_INCREMENT value to the root page of the clustered index, in the previously unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC. Unlike some other previously unused InnoDB data fields, this one was actually always zero-initialized, at least since MySQL 3.23.49.

The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the root page. The SX latch will allow concurrent read access to the root page. (The field PAGE_ROOT_AUTO_INC will only be read on the first-time call to ha_innobase::open() from the SQL layer. The PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so read/write races are not possible.)

During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level function btr_cur_search_to_nth_level(), adding no extra page access. [Adaptive hash index lookup will be disabled during INSERT.]

If some rare UPDATE modifies an AUTO_INCREMENT column, the PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in ha_innobase::update_row().

When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC field.

During ALTER TABLE, the initial AUTO_INCREMENT value will be copied from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will update PAGE_ROOT_AUTO_INC in real time.

innodb_col_no(): Determine the dict_table_t::cols[] element index corresponding to a Field of a non-virtual column. (The MySQL 5.7 implementation of virtual columns breaks the 1:1 relationship between Field::field_index and dict_table_t::cols[]. Virtual columns are omitted from dict_table_t::cols[]. Therefore, we must translate the field_index of AUTO_INCREMENT columns into an index of dict_table_t::cols[].)

Upgrade from old data files: By default, the AUTO_INCREMENT sequence in old data files would appear to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain the value 0 in each clustered index page. In new data files, PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain any AUTO_INCREMENT column. For backward compatibility, we use the old method of SELECT MAX(auto_increment_column) for initializing the sequence.

btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format data file.

btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc() that will resort to reading MAX(auto_increment_column) for data files that did not use AUTO_INCREMENT yet. It was manually tested that during the execution of innodb.autoinc_persist the compatibility logic is not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty clustered index root pages).

initialize_auto_increment(): Replaces ha_innobase::innobase_initialize_autoinc(). This initializes the AUTO_INCREMENT metadata. Only called from ha_innobase::open().

ha_innobase::info_low(): Do not try to lazily initialize dict_table_t::autoinc. It must already have been initialized by ha_innobase::open() or ha_innobase::create().

Note: The adjustments to class ha_innopart were not tested, because the source code (native InnoDB partitioning) is not being compiled.
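At the file-format level, the repurposed field is just an 8-byte big-endian integer in the clustered index root page. A minimal sketch, under simplified assumptions: the helpers reimplement the semantics of InnoDB's mach_write_to_8()/mach_read_from_8() byte-order utilities, and the offset constant is illustrative (chosen as FIL_PAGE_DATA + PAGE_MAX_TRX_ID in the classic page layout), not a definitive statement of the on-disk format:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative byte offset of the repurposed PAGE_MAX_TRX_ID /
// PAGE_ROOT_AUTO_INC field within the page frame.
static const size_t PAGE_ROOT_AUTO_INC = 56;

// Big-endian 8-byte write/read, mimicking InnoDB's mach_write_to_8()
// and mach_read_from_8().
void mach_write_to_8(uint8_t* b, uint64_t v) {
    for (int i = 7; i >= 0; i--) { b[i] = uint8_t(v); v >>= 8; }
}
uint64_t mach_read_from_8(const uint8_t* b) {
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) { v = (v << 8) | b[i]; }
    return v;
}

// Persist the last used AUTO_INCREMENT value in the root page frame,
// and read it back (as on the first ha_innobase::open()).
void page_set_autoinc(uint8_t* root_page, uint64_t autoinc) {
    mach_write_to_8(root_page + PAGE_ROOT_AUTO_INC, autoinc);
}
uint64_t page_get_autoinc(const uint8_t* root_page) {
    return mach_read_from_8(root_page + PAGE_ROOT_AUTO_INC);
}
```

Because the field was always zero-initialized in old data files, a value of 0 read here is indistinguishable from "never used", which is exactly why the SELECT MAX(auto_increment_column) fallback described above is needed.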
  1. /*****************************************************************************
  2. Copyright (c) 2012, 2016, Oracle and/or its affiliates. All Rights Reserved.
  3. Copyright (c) 2015, 2018, MariaDB Corporation.
  4. This program is free software; you can redistribute it and/or modify it under
  5. the terms of the GNU General Public License as published by the Free Software
  6. Foundation; version 2 of the License.
  7. This program is distributed in the hope that it will be useful, but WITHOUT
  8. ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
  9. FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
  10. You should have received a copy of the GNU General Public License along with
  11. this program; if not, write to the Free Software Foundation, Inc.,
  12. 51 Franklin Street, Suite 500, Boston, MA 02110-1335 USA
  13. *****************************************************************************/
  14. /**************************************************//**
  15. @file row/row0import.cc
  16. Import a tablespace to a running instance.
  17. Created 2012-02-08 by Sunny Bains.
  18. *******************************************************/
  19. #include "ha_prototypes.h"
  20. #include "row0import.h"
  21. #include "btr0pcur.h"
  22. #include "que0que.h"
  23. #include "dict0boot.h"
  24. #include "ibuf0ibuf.h"
  25. #include "pars0pars.h"
  26. #include "row0sel.h"
  27. #include "row0mysql.h"
  28. #include "srv0start.h"
  29. #include "row0quiesce.h"
  30. #include "fil0pagecompress.h"
  31. #include "trx0undo.h"
  32. #include "ut0new.h"
  33. #include <vector>
  34. #ifdef HAVE_MY_AES_H
  35. #include <my_aes.h>
  36. #endif
  37. /** The size of the buffer to use for IO.
  38. @param n physical page size
  39. @return number of pages */
  40. #define IO_BUFFER_SIZE(n) ((1024 * 1024) / n)
  41. /** For gathering stats on records during phase I */
  42. struct row_stats_t {
  43. ulint m_n_deleted; /*!< Number of deleted records
  44. found in the index */
  45. ulint m_n_purged; /*!< Number of records purged
  46. optimisatically */
  47. ulint m_n_rows; /*!< Number of rows */
  48. ulint m_n_purge_failed; /*!< Number of deleted rows
  49. that could not be purged */
  50. };
  51. /** Index information required by IMPORT. */
  52. struct row_index_t {
  53. index_id_t m_id; /*!< Index id of the table
  54. in the exporting server */
  55. byte* m_name; /*!< Index name */
  56. ulint m_space; /*!< Space where it is placed */
  57. ulint m_page_no; /*!< Root page number */
  58. ulint m_type; /*!< Index type */
  59. ulint m_trx_id_offset; /*!< Relevant only for clustered
  60. indexes, offset of transaction
  61. id system column */
  62. ulint m_n_user_defined_cols; /*!< User defined columns */
  63. ulint m_n_uniq; /*!< Number of columns that can
  64. uniquely identify the row */
  65. ulint m_n_nullable; /*!< Number of nullable
  66. columns */
  67. ulint m_n_fields; /*!< Total number of fields */
  68. dict_field_t* m_fields; /*!< Index fields */
  69. const dict_index_t*
  70. m_srv_index; /*!< Index instance in the
  71. importing server */
  72. row_stats_t m_stats; /*!< Statistics gathered during
  73. the import phase */
  74. };
/** Meta data required by IMPORT. */
struct row_import {
	row_import() UNIV_NOTHROW
		:
		m_table(),
		m_version(),
		m_hostname(),
		m_table_name(),
		m_autoinc(),
		m_page_size(0, 0, false),
		m_flags(),
		m_n_cols(),
		m_cols(),
		m_col_names(),
		m_n_indexes(),
		m_indexes(),
		m_missing(true) {}

	~row_import() UNIV_NOTHROW;

	/** Find the index entry in the indexes array.
	@param name index name
	@return instance if found else 0. */
	row_index_t* get_index(const char* name) const UNIV_NOTHROW;

	/** Get the number of rows in the index.
	@param name index name
	@return number of rows (doesn't include delete marked rows). */
	ulint get_n_rows(const char* name) const UNIV_NOTHROW;

	/** Find the ordinal value of the column name in the cfg table columns.
	@param name of column to look for.
	@return ULINT_UNDEFINED if not found. */
	ulint find_col(const char* name) const UNIV_NOTHROW;

	/** Get the number of rows for which purge failed during the
	convert phase.
	@param name index name
	@return number of rows for which purge failed. */
	ulint get_n_purge_failed(const char* name) const UNIV_NOTHROW;

	/** Check if the index is clean, i.e. has no delete-marked records.
	@param name index name
	@return true if the index needs to be purged. */
	bool requires_purge(const char* name) const UNIV_NOTHROW
	{
		return(get_n_purge_failed(name) > 0);
	}

	/** Set the index root <space, pageno> using the index name */
	void set_root_by_name() UNIV_NOTHROW;

	/** Set the index root <space, pageno> using a heuristic
	@return DB_SUCCESS or error code */
	dberr_t set_root_by_heuristic() UNIV_NOTHROW;

	/** Check if the index schema that was read from the .cfg file
	matches the in memory index definition.
	Note: It will update row_import_t::m_srv_index to map the meta-data
	read from the .cfg file to the server index instance.
	@return DB_SUCCESS or error code. */
	dberr_t match_index_columns(
		THD*			thd,
		const dict_index_t*	index) UNIV_NOTHROW;

	/** Check if the table schema that was read from the .cfg file
	matches the in memory table definition.
	@param thd MySQL session variable
	@return DB_SUCCESS or error code. */
	dberr_t match_table_columns(
		THD*			thd) UNIV_NOTHROW;

	/** Check if the table (and index) schema that was read from the
	.cfg file matches the in memory table definition.
	@param thd MySQL session variable
	@return DB_SUCCESS or error code. */
	dberr_t match_schema(
		THD*			thd) UNIV_NOTHROW;

	dict_table_t*	m_table;	/*!< Table instance */

	ulint		m_version;	/*!< Version of config file */

	byte*		m_hostname;	/*!< Hostname where the
					tablespace was exported */

	byte*		m_table_name;	/*!< Exporting instance table
					name */

	ib_uint64_t	m_autoinc;	/*!< Next autoinc value */

	page_size_t	m_page_size;	/*!< Tablespace page size */

	ulint		m_flags;	/*!< Table flags */

	ulint		m_n_cols;	/*!< Number of columns in the
					meta-data file */

	dict_col_t*	m_cols;		/*!< Column data */

	byte**		m_col_names;	/*!< Column names, we store the
					column names separately because
					there is no field to store the
					value in dict_col_t */

	ulint		m_n_indexes;	/*!< Number of indexes,
					including clustered index */

	row_index_t*	m_indexes;	/*!< Index meta data */

	bool		m_missing;	/*!< true if a .cfg file was
					not found or was not readable */
};
/** Use the page cursor to iterate over records in a block. */
class RecIterator {
public:
	/** Default constructor */
	RecIterator() UNIV_NOTHROW
	{
		memset(&m_cur, 0x0, sizeof(m_cur));
	}

	/** Position the cursor on the first user record. */
	void open(buf_block_t* block) UNIV_NOTHROW
	{
		page_cur_set_before_first(block, &m_cur);

		if (!end()) {
			next();
		}
	}

	/** Move to the next record. */
	void next() UNIV_NOTHROW
	{
		page_cur_move_to_next(&m_cur);
	}

	/**
	@return the current record */
	rec_t* current() UNIV_NOTHROW
	{
		ut_ad(!end());
		return(page_cur_get_rec(&m_cur));
	}

	/**
	@return true if cursor is at the end */
	bool end() UNIV_NOTHROW
	{
		return(page_cur_is_after_last(&m_cur) == TRUE);
	}

	/** Remove the current record
	@return true on success */
	bool remove(
		const dict_index_t*	index,
		page_zip_des_t*		page_zip,
		ulint*			offsets) UNIV_NOTHROW
	{
		/* We can't end up with an empty page unless it is root. */
		if (page_get_n_recs(m_cur.block->frame) <= 1) {
			return(false);
		}

		return(page_delete_rec(index, &m_cur, page_zip, offsets));
	}

private:
	page_cur_t	m_cur;
};
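
/* Typical use of RecIterator (illustrative sketch only; "block" is
assumed to be an index page that the caller has read from the file,
and the loop body is a placeholder):

	RecIterator	it;

	it.open(block);

	while (!it.end()) {
		rec_t*	rec = it.current();
		// ... inspect or convert rec here ...
		it.next();
	}
*/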
/** Class that purges delete-marked records from indexes, both secondary
and clustered. It does a pessimistic delete. This should only be done if we
couldn't purge the delete-marked records during Phase I. */
class IndexPurge {
public:
	/** Constructor
	@param trx the user transaction covering the import tablespace
	@param index to be imported */
	IndexPurge(
		trx_t*		trx,
		dict_index_t*	index) UNIV_NOTHROW
		:
		m_trx(trx),
		m_index(index),
		m_n_rows(0)
	{
		ib::info() << "Phase II - Purge records from index "
			<< index->name;
	}

	/** Destructor */
	~IndexPurge() UNIV_NOTHROW { }

	/** Purge delete marked records.
	@return DB_SUCCESS or error code. */
	dberr_t	garbage_collect() UNIV_NOTHROW;

	/** The number of records that are not delete marked.
	@return total records in the index after purge */
	ulint get_n_rows() const UNIV_NOTHROW
	{
		return(m_n_rows);
	}

private:
	/** Begin import, position the cursor on the first record. */
	void open() UNIV_NOTHROW;

	/** Close the persistent cursor and commit the mini-transaction. */
	void close() UNIV_NOTHROW;

	/** Position the cursor on the next record.
	@return DB_SUCCESS or error code */
	dberr_t	next() UNIV_NOTHROW;

	/** Store the persistent cursor position and reopen the
	B-tree cursor in BTR_MODIFY_TREE mode, because the
	tree structure may be changed during a pessimistic delete. */
	void purge_pessimistic_delete() UNIV_NOTHROW;

	/** Purge delete-marked records. */
	void purge() UNIV_NOTHROW;

protected:
	// Disable copying
	IndexPurge();
	IndexPurge(const IndexPurge&);
	IndexPurge &operator=(const IndexPurge&);

private:
	trx_t*		m_trx;		/*!< User transaction */
	mtr_t		m_mtr;		/*!< Mini-transaction */
	btr_pcur_t	m_pcur;		/*!< Persistent cursor */
	dict_index_t*	m_index;	/*!< Index to be processed */
	ulint		m_n_rows;	/*!< Records in index */
};
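
/* Typical use of IndexPurge during Phase II (illustrative sketch only;
error handling omitted; "trx" and "index" are assumed to come from the
surrounding import context):

	IndexPurge	purge(trx, index);

	if (purge.garbage_collect() == DB_SUCCESS) {
		// number of rows remaining after delete-marked
		// records have been removed
		ulint	n_rows = purge.get_n_rows();
	}
*/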
/** Functor that is called for each physical page that is read from the
tablespace file. */
class AbstractCallback
{
public:
	/** Constructor
	@param trx covering transaction
	@param space_id space id of the tablespace */
	AbstractCallback(trx_t* trx, ulint space_id)
		:
		m_page_size(0, 0, false),
		m_trx(trx),
		m_space(space_id),
		m_xdes(),
		m_xdes_page_no(ULINT_UNDEFINED),
		m_space_flags(ULINT_UNDEFINED) UNIV_NOTHROW { }

	/** Free any extent descriptor instance */
	virtual ~AbstractCallback()
	{
		UT_DELETE_ARRAY(m_xdes);
	}

	/** Determine the page size to use for traversing the tablespace
	@param file_size size of the tablespace file in bytes
	@param block contents of the first page in the tablespace file.
	@retval DB_SUCCESS or error code. */
	virtual dberr_t init(
		os_offset_t		file_size,
		const buf_block_t*	block) UNIV_NOTHROW;

	/** @return true if compressed table. */
	bool is_compressed_table() const UNIV_NOTHROW
	{
		return(get_page_size().is_compressed());
	}

	/** @return the tablespace flags */
	ulint get_space_flags() const
	{
		return(m_space_flags);
	}

	/**
	Set the name of the physical file and the file handle that is used
	to open it for the file that is being iterated over.
	@param filename the physical name of the tablespace file
	@param file OS file handle */
	void set_file(const char* filename, pfs_os_file_t file) UNIV_NOTHROW
	{
		m_file = file;
		m_filepath = filename;
	}

	const page_size_t& get_page_size() const { return m_page_size; }

	const char* filename() const { return m_filepath; }

	/**
	Called for every page in the tablespace. If the page was not
	updated then its state must be set to BUF_PAGE_NOT_USED. For
	compressed tables the page descriptor memory will be at offset:
	block->frame + srv_page_size;
	@param offset physical offset within the file
	@param block block read from file, note it is not from the buffer pool
	@retval DB_SUCCESS or error code. */
	virtual dberr_t operator()(
		os_offset_t	offset,
		buf_block_t*	block) UNIV_NOTHROW = 0;

	/** @return the tablespace identifier */
	ulint get_space_id() const { return m_space; }

	bool is_interrupted() const { return trx_is_interrupted(m_trx); }

	/**
	Get the data page depending on the table type, compressed or not.
	@param block block read from disk
	@retval the buffer frame */
	static byte* get_frame(const buf_block_t* block)
	{
		return block->page.zip.data
			? block->page.zip.data : block->frame;
	}

protected:
	/** Get the physical offset of the extent descriptor within the page.
	@param page_no page number of the extent descriptor
	@param page contents of the page containing the extent descriptor.
	@return the start of the xdes array in a page */
	const xdes_t* xdes(
		ulint		page_no,
		const page_t*	page) const UNIV_NOTHROW
	{
		ulint	offset;

		offset = xdes_calc_descriptor_index(get_page_size(), page_no);

		return(page + XDES_ARR_OFFSET + XDES_SIZE * offset);
	}

	/** Set the current page directory (xdes). If the extent descriptor is
	marked as free then free the current extent descriptor and set it to
	0. This implies that all pages that are covered by this extent
	descriptor are also freed.
	@param page_no offset of page within the file
	@param page page contents
	@return DB_SUCCESS or error code. */
	dberr_t	set_current_xdes(
		ulint		page_no,
		const page_t*	page) UNIV_NOTHROW
	{
		m_xdes_page_no = page_no;

		UT_DELETE_ARRAY(m_xdes);
		m_xdes = NULL;

		ulint		state;
		const xdes_t*	xdesc = page + XDES_ARR_OFFSET;

		state = mach_read_ulint(xdesc + XDES_STATE, MLOG_4BYTES);

		if (state != XDES_FREE) {

			m_xdes = UT_NEW_ARRAY_NOKEY(xdes_t,
						    m_page_size.physical());

			/* Trigger OOM */
			DBUG_EXECUTE_IF(
				"ib_import_OOM_13",
				UT_DELETE_ARRAY(m_xdes);
				m_xdes = NULL;
			);

			if (m_xdes == NULL) {
				return(DB_OUT_OF_MEMORY);
			}

			memcpy(m_xdes, page, m_page_size.physical());
		}

		return(DB_SUCCESS);
	}

	/** Check if the page is marked as free in the extent descriptor.
	@param page_no page number to check in the extent descriptor.
	@return true if the page is marked as free */
	bool is_free(ulint page_no) const UNIV_NOTHROW
	{
		ut_a(xdes_calc_descriptor_page(get_page_size(), page_no)
		     == m_xdes_page_no);

		if (m_xdes != 0) {
			const xdes_t*	xdesc = xdes(page_no, m_xdes);
			ulint		pos = page_no % FSP_EXTENT_SIZE;

			return(xdes_get_bit(xdesc, XDES_FREE_BIT, pos));
		}

		/* If the current xdes was free, the page must be free. */
		return(true);
	}

protected:
	/** The tablespace page size. */
	page_size_t		m_page_size;

	/** File handle to the tablespace */
	pfs_os_file_t		m_file;

	/** Physical file path. */
	const char*		m_filepath;

	/** Covering transaction. */
	trx_t*			m_trx;

	/** Space id of the file being iterated over. */
	ulint			m_space;

	/** Minimum page number for which the free list has not been
	initialized: the pages >= this limit are, by definition, free;
	note that in a single-table tablespace where size < 64 pages,
	this number is 64, i.e., we have initialized the space about
	the first extent, but have not physically allocated those pages
	to the file. @see FSP_LIMIT. */
	ulint			m_free_limit;

	/** Current size of the space in pages */
	ulint			m_size;

	/** Current extent descriptor page */
	xdes_t*			m_xdes;

	/** Physical page offset in the file of the extent descriptor */
	ulint			m_xdes_page_no;

	/** Flags value read from the header page */
	ulint			m_space_flags;
};
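
/* The callbacks above are driven by the tablespace file iterator,
roughly as follows (illustrative sketch only; the actual driver and
its page I/O are elsewhere in this file):

	dberr_t	err = callback.init(file_size, first_block);

	for (os_offset_t offset = 0;
	     err == DB_SUCCESS && offset < file_size;
	     offset += callback.get_page_size().physical()) {
		// read the page at "offset" into "block", then:
		err = callback(offset, block);
	}
*/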
/** Determine the page size to use for traversing the tablespace
@param file_size size of the tablespace file in bytes
@param block contents of the first page in the tablespace file.
@retval DB_SUCCESS or error code. */
dberr_t
AbstractCallback::init(
	os_offset_t		file_size,
	const buf_block_t*	block) UNIV_NOTHROW
{
	const page_t*	page = block->frame;

	m_space_flags = fsp_header_get_flags(page);
	if (!fsp_flags_is_valid(m_space_flags, true)) {
		ulint cflags = fsp_flags_convert_from_101(m_space_flags);
		if (cflags == ULINT_UNDEFINED) {
			ib::error() << "Invalid FSP_SPACE_FLAGS="
				<< ib::hex(m_space_flags);
			return(DB_CORRUPTION);
		}
		m_space_flags = cflags;
	}

	/* Clear the DATA_DIR flag, which is basically garbage. */
	m_space_flags &= ~(1U << FSP_FLAGS_POS_RESERVED);
	m_page_size.copy_from(page_size_t(m_space_flags));

	if (!is_compressed_table() && !m_page_size.equals_to(univ_page_size)) {

		ib::error() << "Page size " << m_page_size.physical()
			<< " of ibd file is not the same as the server page"
			" size " << srv_page_size;

		return(DB_CORRUPTION);

	} else if (file_size % m_page_size.physical() != 0) {

		ib::error() << "File size " << file_size << " is not a"
			" multiple of the page size "
			<< m_page_size.physical();

		return(DB_CORRUPTION);
	}

	m_size = mach_read_from_4(page + FSP_SIZE);
	m_free_limit = mach_read_from_4(page + FSP_FREE_LIMIT);
	if (m_space == ULINT_UNDEFINED) {
		m_space = mach_read_from_4(FSP_HEADER_OFFSET + FSP_SPACE_ID
					   + page);
	}

	return set_current_xdes(0, page);
}
/**
Try and determine the index root pages by checking if the next/prev
pointers are both FIL_NULL. We need to ensure that we skip deleted pages. */
struct FetchIndexRootPages : public AbstractCallback {

	/** Index information gathered from the .ibd file. */
	struct Index {

		Index(index_id_t id, ulint page_no)
			:
			m_id(id),
			m_page_no(page_no) { }

		index_id_t	m_id;		/*!< Index id */
		ulint		m_page_no;	/*!< Root page number */
	};

	typedef std::vector<Index, ut_allocator<Index> >	Indexes;

	/** Constructor
	@param table table definition in the server
	@param trx covering (user) transaction */
	FetchIndexRootPages(const dict_table_t* table, trx_t* trx)
		:
		AbstractCallback(trx, ULINT_UNDEFINED),
		m_table(table) UNIV_NOTHROW { }

	/** Destructor */
	virtual ~FetchIndexRootPages() UNIV_NOTHROW { }

	/** Called for each block as it is read from the file.
	@param offset physical offset in the file
	@param block block to convert, it is not from the buffer pool.
	@retval DB_SUCCESS or error code. */
	virtual dberr_t operator() (
		os_offset_t	offset,
		buf_block_t*	block) UNIV_NOTHROW;

	/** Update the import configuration that will be used to import
	the tablespace. */
	dberr_t build_row_import(row_import* cfg) const UNIV_NOTHROW;

	/** Table definition in server. */
	const dict_table_t*	m_table;

	/** Index information */
	Indexes			m_indexes;
};
/** Called for each block as it is read from the file. Check index pages to
determine the exact row format. We can't get that from the tablespace
header flags alone.

@param offset physical offset in the file
@param block block to convert, it is not from the buffer pool.
@retval DB_SUCCESS or error code. */
dberr_t
FetchIndexRootPages::operator() (
	os_offset_t	offset,
	buf_block_t*	block) UNIV_NOTHROW
{
	if (is_interrupted()) return DB_INTERRUPTED;

	const page_t*	page = get_frame(block);

	ulint	page_type = fil_page_get_type(page);

	if (block->page.id.page_no() * m_page_size.physical() != offset) {

		ib::error() << "Page offset doesn't match file offset:"
			" page offset: " << block->page.id.page_no()
			<< ", file offset: "
			<< (offset / m_page_size.physical());

		return DB_CORRUPTION;
	} else if (page_type == FIL_PAGE_TYPE_XDES) {
		return set_current_xdes(block->page.id.page_no(), page);
	} else if (fil_page_index_page_check(page)
		   && !is_free(block->page.id.page_no())
		   && page_is_root(page)) {

		index_id_t	id = btr_page_get_index_id(page);

		m_indexes.push_back(Index(id, block->page.id.page_no()));

		if (m_indexes.size() == 1) {
			/* Check that the tablespace flags match
			the table flags. */
			ulint expected = dict_tf_to_fsp_flags(m_table->flags);
			if (!fsp_flags_match(expected, m_space_flags)) {
				ib_errf(m_trx->mysql_thd, IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Expected FSP_SPACE_FLAGS=0x%x, .ibd "
					"file contains 0x%x.",
					unsigned(expected),
					unsigned(m_space_flags));
				return(DB_CORRUPTION);
			}
		}
	}

	return DB_SUCCESS;
}
/**
Update the import configuration that will be used to import the tablespace.
@return error code or DB_SUCCESS */
dberr_t
FetchIndexRootPages::build_row_import(row_import* cfg) const UNIV_NOTHROW
{
	Indexes::const_iterator end = m_indexes.end();

	ut_a(cfg->m_table == m_table);
	cfg->m_page_size.copy_from(m_page_size);
	cfg->m_n_indexes = m_indexes.size();

	if (cfg->m_n_indexes == 0) {

		ib::error() << "No B+Tree found in tablespace";

		return(DB_CORRUPTION);
	}

	cfg->m_indexes = UT_NEW_ARRAY_NOKEY(row_index_t, cfg->m_n_indexes);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_11",
		UT_DELETE_ARRAY(cfg->m_indexes);
		cfg->m_indexes = NULL;
	);

	if (cfg->m_indexes == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	memset(cfg->m_indexes, 0x0, sizeof(*cfg->m_indexes) * cfg->m_n_indexes);

	row_index_t*	cfg_index = cfg->m_indexes;

	for (Indexes::const_iterator it = m_indexes.begin();
	     it != end;
	     ++it, ++cfg_index) {

		char	name[BUFSIZ];

		snprintf(name, sizeof(name), "index" IB_ID_FMT, it->m_id);

		ulint	len = strlen(name) + 1;

		cfg_index->m_name = UT_NEW_ARRAY_NOKEY(byte, len);

		/* Trigger OOM */
		DBUG_EXECUTE_IF(
			"ib_import_OOM_12",
			UT_DELETE_ARRAY(cfg_index->m_name);
			cfg_index->m_name = NULL;
		);

		if (cfg_index->m_name == NULL) {
			return(DB_OUT_OF_MEMORY);
		}

		memcpy(cfg_index->m_name, name, len);

		cfg_index->m_id = it->m_id;

		cfg_index->m_space = m_space;

		cfg_index->m_page_no = it->m_page_no;
	}

	return(DB_SUCCESS);
}
/* Functor that is called for each physical page that is read from the
tablespace file.

  1. Check each page for corruption.

  2. Update the space id and LSN on every page
     * For the header page
       - Validate the flags
       - Update the LSN

  3. On B-tree pages
     * Set the index id
     * Update the max trx id
     * In a clustered index, update the system columns
     * In a clustered index, update the BLOB ptr, set the space id
     * Purge delete-marked records, but only if they can be easily
       removed from the page
     * Keep a counter of number of rows, i.e. non-delete-marked rows
     * Keep a counter of number of delete-marked rows
     * Keep a counter of number of purge failures
     * If a page is stamped with an index id that isn't in the .cfg file
       we assume it is deleted and the page can be ignored.

  4. Set the page state to dirty so that it will be written to disk.
*/
class PageConverter : public AbstractCallback {
public:
	/** Constructor
	@param cfg config of table being imported.
	@param space_id tablespace identifier
	@param trx transaction covering the import */
	PageConverter(row_import* cfg, ulint space_id, trx_t* trx)
		:
		AbstractCallback(trx, space_id),
		m_cfg(cfg),
		m_index(cfg->m_indexes),
		m_current_lsn(log_get_lsn()),
		m_page_zip_ptr(0),
		m_rec_iter(),
		m_offsets_(), m_offsets(m_offsets_),
		m_heap(0),
		m_cluster_index(dict_table_get_first_index(cfg->m_table))
	{
		ut_ad(m_current_lsn);
		rec_offs_init(m_offsets_);
	}

	virtual ~PageConverter() UNIV_NOTHROW
	{
		if (m_heap != 0) {
			mem_heap_free(m_heap);
		}
	}

	/** Called for each block as it is read from the file.
	@param offset physical offset in the file
	@param block block to convert, it is not from the buffer pool.
	@retval DB_SUCCESS or error code. */
	virtual dberr_t operator() (
		os_offset_t	offset,
		buf_block_t*	block) UNIV_NOTHROW;

private:
	/** Update the page, set the space id, max trx id and index id.
	@param block block read from file
	@param page_type type of the page
	@retval DB_SUCCESS or error code */
	dberr_t update_page(
		buf_block_t*	block,
		ulint&		page_type) UNIV_NOTHROW;

	/** Update the space, index id, trx id.
	@param block block to convert
	@return DB_SUCCESS or error code */
	dberr_t	update_index_page(buf_block_t* block) UNIV_NOTHROW;

	/** Update the BLOB references and write UNDO log entries for
	rows that can't be purged optimistically.
	@param block block to update
	@retval DB_SUCCESS or error code */
	dberr_t	update_records(buf_block_t* block) UNIV_NOTHROW;

	/** Validate the space flags and update tablespace header page.
	@param block block read from file, not from the buffer pool.
	@retval DB_SUCCESS or error code */
	dberr_t	update_header(buf_block_t* block) UNIV_NOTHROW;

	/** Adjust the BLOB reference for a single column that is
	externally stored
	@param rec record to update
	@param offsets column offsets for the record
	@param i column ordinal value
	@return DB_SUCCESS or error code */
	dberr_t	adjust_cluster_index_blob_column(
		rec_t*		rec,
		const ulint*	offsets,
		ulint		i) UNIV_NOTHROW;

	/** Adjusts the BLOB reference in the clustered index row for all
	externally stored columns.
	@param rec record to update
	@param offsets column offsets for the record
	@return DB_SUCCESS or error code */
	dberr_t	adjust_cluster_index_blob_columns(
		rec_t*		rec,
		const ulint*	offsets) UNIV_NOTHROW;

	/** In the clustered index, adjust the BLOB pointers as needed.
	Also update the BLOB reference, write the new space id.
	@param rec record to update
	@param offsets column offsets for the record
	@return DB_SUCCESS or error code */
	dberr_t	adjust_cluster_index_blob_ref(
		rec_t*		rec,
		const ulint*	offsets) UNIV_NOTHROW;

	/** Purge delete-marked records, only if it is possible to do
	so without re-organising the B+tree.
	@param offsets current row offsets.
	@retval true if purged */
	bool	purge(const ulint* offsets) UNIV_NOTHROW;

	/** Adjust the BLOB references and sys fields for the current record.
	@param index the index being converted
	@param rec record to update
	@param offsets column offsets for the record
	@return DB_SUCCESS or error code. */
	dberr_t	adjust_cluster_record(
		const dict_index_t*	index,
		rec_t*			rec,
		const ulint*		offsets) UNIV_NOTHROW;

	/** Find an index with the matching id.
	@return row_index_t* instance or 0 */
	row_index_t* find_index(index_id_t id) UNIV_NOTHROW
	{
		row_index_t*	index = &m_cfg->m_indexes[0];

		for (ulint i = 0; i < m_cfg->m_n_indexes; ++i, ++index) {
			if (id == index->m_id) {
				return(index);
			}
		}

		return(0);
	}

private:
	/** Config for table that is being imported. */
	row_import*		m_cfg;

	/** Current index whose pages are being imported */
	row_index_t*		m_index;

	/** Current system LSN */
	lsn_t			m_current_lsn;

	/** Alias for m_page_zip, only set for compressed pages. */
	page_zip_des_t*		m_page_zip_ptr;

	/** Iterator over records in a block */
	RecIterator		m_rec_iter;

	/** Record offsets */
	ulint			m_offsets_[REC_OFFS_NORMAL_SIZE];

	/** Pointer to m_offsets_ */
	ulint*			m_offsets;

	/** Memory heap for the record offsets */
	mem_heap_t*		m_heap;

	/** Clustered index instance */
	dict_index_t*		m_cluster_index;
};
/**
row_import destructor. */
row_import::~row_import() UNIV_NOTHROW
{
	for (ulint i = 0; m_indexes != 0 && i < m_n_indexes; ++i) {
		UT_DELETE_ARRAY(m_indexes[i].m_name);

		if (m_indexes[i].m_fields == NULL) {
			continue;
		}

		dict_field_t*	fields = m_indexes[i].m_fields;
		ulint		n_fields = m_indexes[i].m_n_fields;

		for (ulint j = 0; j < n_fields; ++j) {
			UT_DELETE_ARRAY(const_cast<char*>(fields[j].name()));
		}

		UT_DELETE_ARRAY(fields);
	}

	for (ulint i = 0; m_col_names != 0 && i < m_n_cols; ++i) {
		UT_DELETE_ARRAY(m_col_names[i]);
	}

	UT_DELETE_ARRAY(m_cols);
	UT_DELETE_ARRAY(m_indexes);
	UT_DELETE_ARRAY(m_col_names);
	UT_DELETE_ARRAY(m_table_name);
	UT_DELETE_ARRAY(m_hostname);
}

/** Find the index entry in the indexes array.
@param name index name
@return instance if found else 0. */
row_index_t*
row_import::get_index(
	const char*	name) const UNIV_NOTHROW
{
	for (ulint i = 0; i < m_n_indexes; ++i) {
		const char*	index_name;
		row_index_t*	index = &m_indexes[i];

		index_name = reinterpret_cast<const char*>(index->m_name);

		if (strcmp(index_name, name) == 0) {

			return(index);
		}
	}

	return(0);
}
/** Get the number of rows in the index.
@param name index name
@return number of rows (doesn't include delete marked rows). */
ulint
row_import::get_n_rows(
	const char*	name) const UNIV_NOTHROW
{
	const row_index_t*	index = get_index(name);

	/* By this stage the index must exist in the meta-data. */
	ut_a(index != 0);

	return(index->m_stats.m_n_rows);
}

/** Get the number of rows for which purge failed during the convert phase.
@param name index name
@return number of rows for which purge failed. */
ulint
row_import::get_n_purge_failed(
	const char*	name) const UNIV_NOTHROW
{
	const row_index_t*	index = get_index(name);

	/* By this stage the index must exist in the meta-data. */
	ut_a(index != 0);

	return(index->m_stats.m_n_purge_failed);
}
/** Find the ordinal value of the column name in the cfg table columns.
@param name of column to look for.
@return ULINT_UNDEFINED if not found. */
ulint
row_import::find_col(
	const char*	name) const UNIV_NOTHROW
{
	for (ulint i = 0; i < m_n_cols; ++i) {
		const char*	col_name;

		col_name = reinterpret_cast<const char*>(m_col_names[i]);

		if (strcmp(col_name, name) == 0) {
			return(i);
		}
	}

	return(ULINT_UNDEFINED);
}
/**
Check if the index schema that was read from the .cfg file matches the
in memory index definition.
@return DB_SUCCESS or error code. */
dberr_t
row_import::match_index_columns(
	THD*			thd,
	const dict_index_t*	index) UNIV_NOTHROW
{
	row_index_t*	cfg_index;
	dberr_t		err = DB_SUCCESS;

	cfg_index = get_index(index->name);

	if (cfg_index == 0) {
		ib_errf(thd, IB_LOG_LEVEL_ERROR,
			ER_TABLE_SCHEMA_MISMATCH,
			"Index %s not found in tablespace meta-data file.",
			index->name());

		return(DB_ERROR);
	}

	if (cfg_index->m_n_fields != index->n_fields) {

		ib_errf(thd, IB_LOG_LEVEL_ERROR,
			ER_TABLE_SCHEMA_MISMATCH,
			"Index field count %u doesn't match"
			" tablespace metadata file value " ULINTPF,
			index->n_fields, cfg_index->m_n_fields);

		return(DB_ERROR);
	}

	cfg_index->m_srv_index = index;

	const dict_field_t*	field = index->fields;
	const dict_field_t*	cfg_field = cfg_index->m_fields;

	for (ulint i = 0; i < index->n_fields; ++i, ++field, ++cfg_field) {

		if (strcmp(field->name(), cfg_field->name()) != 0) {
			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_TABLE_SCHEMA_MISMATCH,
				"Index field name %s doesn't match"
				" tablespace metadata field name %s"
				" for field position " ULINTPF,
				field->name(), cfg_field->name(), i);

			err = DB_ERROR;
		}

		if (cfg_field->prefix_len != field->prefix_len) {
			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_TABLE_SCHEMA_MISMATCH,
				"Index %s field %s prefix len %u"
				" doesn't match metadata file value %u",
				index->name(), field->name(),
				field->prefix_len, cfg_field->prefix_len);

			err = DB_ERROR;
		}

		if (cfg_field->fixed_len != field->fixed_len) {
			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_TABLE_SCHEMA_MISMATCH,
				"Index %s field %s fixed len %u"
				" doesn't match metadata file value %u",
				index->name(), field->name(),
				field->fixed_len,
				cfg_field->fixed_len);

			err = DB_ERROR;
		}
	}

	return(err);
}
/** Check if the table schema that was read from the .cfg file matches the
in memory table definition.
@param thd MySQL session variable
@return DB_SUCCESS or error code. */
dberr_t
row_import::match_table_columns(
	THD*	thd) UNIV_NOTHROW
{
	dberr_t			err = DB_SUCCESS;
	const dict_col_t*	col = m_table->cols;

	for (ulint i = 0; i < m_table->n_cols; ++i, ++col) {

		const char*	col_name;
		ulint		cfg_col_index;

		col_name = dict_table_get_col_name(
			m_table, dict_col_get_no(col));

		cfg_col_index = find_col(col_name);

		if (cfg_col_index == ULINT_UNDEFINED) {

			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_TABLE_SCHEMA_MISMATCH,
				"Column %s not found in tablespace.",
				col_name);

			err = DB_ERROR;
		} else if (cfg_col_index != col->ind) {

			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_TABLE_SCHEMA_MISMATCH,
				"Column %s ordinal value mismatch, it's at %u"
				" in the table and " ULINTPF
				" in the tablespace meta-data file",
				col_name, col->ind, cfg_col_index);

			err = DB_ERROR;
		} else {
			const dict_col_t*	cfg_col;

			cfg_col = &m_cols[cfg_col_index];
			ut_a(cfg_col->ind == cfg_col_index);

			if (cfg_col->prtype != col->prtype) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s precise type mismatch.",
					col_name);
				err = DB_ERROR;
			}

			if (cfg_col->mtype != col->mtype) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s main type mismatch.",
					col_name);
				err = DB_ERROR;
			}

			if (cfg_col->len != col->len) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s length mismatch.",
					col_name);
				err = DB_ERROR;
			}

			if (cfg_col->mbminlen != col->mbminlen
			    || cfg_col->mbmaxlen != col->mbmaxlen) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s multi-byte len mismatch.",
					col_name);
				err = DB_ERROR;
			}

			if (cfg_col->ind != col->ind) {
				err = DB_ERROR;
			}

			if (cfg_col->ord_part != col->ord_part) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s ordering mismatch.",
					col_name);
				err = DB_ERROR;
			}

			if (cfg_col->max_prefix != col->max_prefix) {
				ib_errf(thd,
					IB_LOG_LEVEL_ERROR,
					ER_TABLE_SCHEMA_MISMATCH,
					"Column %s max prefix mismatch.",
					col_name);
				err = DB_ERROR;
			}
		}
	}

	return(err);
}
/** Check if the table (and index) schema that was read from the .cfg file
matches the in memory table definition.
@param thd MySQL session variable
@return DB_SUCCESS or error code. */
dberr_t
row_import::match_schema(
	THD*		thd) UNIV_NOTHROW
{
	/* Do some simple checks. */

	if ((m_table->flags ^ m_flags) & ~DICT_TF_MASK_DATA_DIR) {
		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
			"Table flags don't match, server table has 0x%x"
			" and the meta-data file has 0x" ULINTPFx,
			m_table->flags, m_flags);
		return(DB_ERROR);
	} else if (m_table->n_cols != m_n_cols) {
		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
			"Number of columns don't match, table has %u"
			" columns but the tablespace meta-data file has "
			ULINTPF " columns",
			m_table->n_cols, m_n_cols);
		return(DB_ERROR);
	} else if (UT_LIST_GET_LEN(m_table->indexes) != m_n_indexes) {

		/* If the number of indexes doesn't match, it is better
		to abort the IMPORT. It is easy for the user to create a
		table matching the IMPORT definition. */

		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
			"Number of indexes don't match, table has " ULINTPF
			" indexes but the tablespace meta-data file has "
			ULINTPF " indexes",
			UT_LIST_GET_LEN(m_table->indexes), m_n_indexes);
		return(DB_ERROR);
	}

	dberr_t	err = match_table_columns(thd);

	if (err != DB_SUCCESS) {
		return(err);
	}

	/* Check if the index definitions match. */

	const dict_index_t*	index;

	for (index = UT_LIST_GET_FIRST(m_table->indexes);
	     index != 0;
	     index = UT_LIST_GET_NEXT(indexes, index)) {

		dberr_t	index_err;

		index_err = match_index_columns(thd, index);

		if (index_err != DB_SUCCESS) {
			err = index_err;
		}
	}

	return(err);
}
/**
Set the index root <space, pageno>, using index name. */
void
row_import::set_root_by_name() UNIV_NOTHROW
{
	row_index_t*	cfg_index = m_indexes;

	for (ulint i = 0; i < m_n_indexes; ++i, ++cfg_index) {
		dict_index_t*	index;
		const char*	index_name;

		index_name = reinterpret_cast<const char*>(cfg_index->m_name);

		index = dict_table_get_index_on_name(m_table, index_name);

		/* We've already checked that it exists. */
		ut_a(index != 0);

		index->page = cfg_index->m_page_no;
	}
}
/**
Set the index root <space, pageno>, using a heuristic.
@return DB_SUCCESS or error code */
dberr_t
row_import::set_root_by_heuristic() UNIV_NOTHROW
{
	row_index_t*	cfg_index = m_indexes;

	ut_a(m_n_indexes > 0);

	// TODO: For now use brute force, based on ordinality

	if (UT_LIST_GET_LEN(m_table->indexes) != m_n_indexes) {

		ib::warn() << "Table " << m_table->name << " should have "
			<< UT_LIST_GET_LEN(m_table->indexes) << " indexes but"
			" the tablespace has " << m_n_indexes << " indexes";
	}

	dict_mutex_enter_for_mysql();

	ulint	i = 0;
	dberr_t	err = DB_SUCCESS;

	for (dict_index_t* index = UT_LIST_GET_FIRST(m_table->indexes);
	     index != 0;
	     index = UT_LIST_GET_NEXT(indexes, index)) {

		if (index->type & DICT_FTS) {
			index->type |= DICT_CORRUPT;
			ib::warn() << "Skipping FTS index: " << index->name;
		} else if (i < m_n_indexes) {

			UT_DELETE_ARRAY(cfg_index[i].m_name);

			ulint	len = strlen(index->name) + 1;

			cfg_index[i].m_name = UT_NEW_ARRAY_NOKEY(byte, len);

			/* Trigger OOM */
			DBUG_EXECUTE_IF(
				"ib_import_OOM_14",
				UT_DELETE_ARRAY(cfg_index[i].m_name);
				cfg_index[i].m_name = NULL;
			);

			if (cfg_index[i].m_name == NULL) {
				err = DB_OUT_OF_MEMORY;
				break;
			}

			memcpy(cfg_index[i].m_name, index->name, len);

			cfg_index[i].m_srv_index = index;

			index->page = cfg_index[i].m_page_no;

			++i;
		}
	}

	dict_mutex_exit_for_mysql();

	return(err);
}
/**
Purge delete marked records.
@return DB_SUCCESS or error code. */
dberr_t
IndexPurge::garbage_collect() UNIV_NOTHROW
{
	dberr_t	err;
	ibool	comp = dict_table_is_comp(m_index->table);

	/* Open the persistent cursor and start the mini-transaction. */

	open();

	while ((err = next()) == DB_SUCCESS) {

		rec_t*	rec = btr_pcur_get_rec(&m_pcur);
		ibool	deleted = rec_get_deleted_flag(rec, comp);

		if (!deleted) {
			++m_n_rows;
		} else {
			purge();
		}
	}

	/* Close the persistent cursor and commit the mini-transaction. */

	close();

	return(err == DB_END_OF_INDEX ? DB_SUCCESS : err);
}
/**
Begin import, position the cursor on the first record. */
void
IndexPurge::open() UNIV_NOTHROW
{
	mtr_start(&m_mtr);

	mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);

	btr_pcur_open_at_index_side(
		true, m_index, BTR_MODIFY_LEAF, &m_pcur, true, 0, &m_mtr);
	btr_pcur_move_to_next_user_rec(&m_pcur, &m_mtr);
	if (rec_is_default_row(btr_pcur_get_rec(&m_pcur), m_index)) {
		ut_ad(btr_pcur_is_on_user_rec(&m_pcur));
		/* Skip the 'default row' pseudo-record. */
	} else {
		btr_pcur_move_to_prev_on_page(&m_pcur);
	}
}
/**
Close the persistent cursor and commit the mini-transaction. */
void
IndexPurge::close() UNIV_NOTHROW
{
	btr_pcur_close(&m_pcur);
	mtr_commit(&m_mtr);
}
/**
Position the cursor on the next record.
@return DB_SUCCESS or error code */
dberr_t
IndexPurge::next() UNIV_NOTHROW
{
	btr_pcur_move_to_next_on_page(&m_pcur);

	/* When switching pages, commit the mini-transaction
	in order to release the latch on the old page. */

	if (!btr_pcur_is_after_last_on_page(&m_pcur)) {
		return(DB_SUCCESS);
	} else if (trx_is_interrupted(m_trx)) {
		/* Check after every page because the check
		is expensive. */
		return(DB_INTERRUPTED);
	}

	btr_pcur_store_position(&m_pcur, &m_mtr);
	mtr_commit(&m_mtr);

	mtr_start(&m_mtr);

	mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);

	btr_pcur_restore_position(BTR_MODIFY_LEAF, &m_pcur, &m_mtr);

	if (!btr_pcur_move_to_next_user_rec(&m_pcur, &m_mtr)) {

		return(DB_END_OF_INDEX);
	}

	return(DB_SUCCESS);
}
/**
Store the persistent cursor position and reopen the
B-tree cursor in BTR_MODIFY_TREE mode, because the
tree structure may be changed during a pessimistic delete. */
void
IndexPurge::purge_pessimistic_delete() UNIV_NOTHROW
{
	dberr_t	err;

	btr_pcur_restore_position(BTR_MODIFY_TREE | BTR_LATCH_FOR_DELETE,
				  &m_pcur, &m_mtr);

	ut_ad(rec_get_deleted_flag(
		      btr_pcur_get_rec(&m_pcur),
		      dict_table_is_comp(m_index->table)));

	btr_cur_pessimistic_delete(
		&err, FALSE, btr_pcur_get_btr_cur(&m_pcur), 0, false, &m_mtr);

	ut_a(err == DB_SUCCESS);

	/* Reopen the B-tree cursor in BTR_MODIFY_LEAF mode */
	mtr_commit(&m_mtr);
}
/**
Purge delete-marked records. */
void
IndexPurge::purge() UNIV_NOTHROW
{
	btr_pcur_store_position(&m_pcur, &m_mtr);

	purge_pessimistic_delete();

	mtr_start(&m_mtr);

	mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);

	btr_pcur_restore_position(BTR_MODIFY_LEAF, &m_pcur, &m_mtr);
}
/** Adjust the BLOB reference for a single column that is externally stored
@param rec record to update
@param offsets column offsets for the record
@param i column ordinal value
@return DB_SUCCESS or error code */
inline
dberr_t
PageConverter::adjust_cluster_index_blob_column(
	rec_t*		rec,
	const ulint*	offsets,
	ulint		i) UNIV_NOTHROW
{
	ulint	len;
	byte*	field;

	field = rec_get_nth_field(rec, offsets, i, &len);

	DBUG_EXECUTE_IF("ib_import_trigger_corruption_2",
			len = BTR_EXTERN_FIELD_REF_SIZE - 1;);

	if (len < BTR_EXTERN_FIELD_REF_SIZE) {

		ib_errf(m_trx->mysql_thd, IB_LOG_LEVEL_ERROR,
			ER_INNODB_INDEX_CORRUPT,
			"Externally stored column(" ULINTPF
			") has a reference length of " ULINTPF
			" in the cluster index %s",
			i, len, m_cluster_index->name());

		return(DB_CORRUPTION);
	}

	field += len - (BTR_EXTERN_FIELD_REF_SIZE - BTR_EXTERN_SPACE_ID);

	mach_write_to_4(field, get_space_id());

	if (m_page_zip_ptr) {
		page_zip_write_blob_ptr(
			m_page_zip_ptr, rec, m_cluster_index, offsets, i, 0);
	}

	return(DB_SUCCESS);
}
/** Adjusts the BLOB reference in the clustered index row for all externally
stored columns.
@param rec record to update
@param offsets column offsets for the record
@return DB_SUCCESS or error code */
inline
dberr_t
PageConverter::adjust_cluster_index_blob_columns(
	rec_t*		rec,
	const ulint*	offsets) UNIV_NOTHROW
{
	ut_ad(rec_offs_any_extern(offsets));

	/* Adjust the space_id in the BLOB pointers. */

	for (ulint i = 0; i < rec_offs_n_fields(offsets); ++i) {

		/* Only if the column is stored "externally". */

		if (rec_offs_nth_extern(offsets, i)) {
			dberr_t	err;

			err = adjust_cluster_index_blob_column(rec, offsets, i);

			if (err != DB_SUCCESS) {
				return(err);
			}
		}
	}

	return(DB_SUCCESS);
}
/** In the clustered index, adjust the BLOB pointers as needed.
The BLOB references are updated to contain the new space id.
@param rec record to update
@param offsets column offsets for the record
@return DB_SUCCESS or error code */
inline
dberr_t
PageConverter::adjust_cluster_index_blob_ref(
	rec_t*		rec,
	const ulint*	offsets) UNIV_NOTHROW
{
	if (rec_offs_any_extern(offsets)) {
		dberr_t	err;

		err = adjust_cluster_index_blob_columns(rec, offsets);

		if (err != DB_SUCCESS) {
			return(err);
		}
	}

	return(DB_SUCCESS);
}
/** Purge delete-marked records, only if it is possible to do so without
re-organising the B+tree.
@param offsets current row offsets.
@return true if purge succeeded */
inline
bool
PageConverter::purge(const ulint* offsets) UNIV_NOTHROW
{
	const dict_index_t*	index = m_index->m_srv_index;

	/* We can't have a page that is empty and not root. */
	if (m_rec_iter.remove(index, m_page_zip_ptr, m_offsets)) {

		++m_index->m_stats.m_n_purged;

		return(true);
	} else {
		++m_index->m_stats.m_n_purge_failed;
	}

	return(false);
}
/** Adjust the BLOB references and sys fields for the current record.
@param rec record to update
@param offsets column offsets for the record
@return DB_SUCCESS or error code. */
inline
dberr_t
PageConverter::adjust_cluster_record(
	const dict_index_t*	index,
	rec_t*			rec,
	const ulint*		offsets) UNIV_NOTHROW
{
	dberr_t	err;

	if ((err = adjust_cluster_index_blob_ref(rec, offsets)) == DB_SUCCESS) {
		/* Reset DB_TRX_ID and DB_ROLL_PTR. Normally, these fields
		are only written in conjunction with other changes to the
		record. */
		ulint	trx_id_pos = m_cluster_index->n_uniq
			? m_cluster_index->n_uniq : 1;
		if (m_page_zip_ptr) {
			page_zip_write_trx_id_and_roll_ptr(
				m_page_zip_ptr, rec, m_offsets, trx_id_pos,
				0, roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS,
				NULL);
		} else {
			ulint	len;
			byte*	ptr = rec_get_nth_field(
				rec, m_offsets, trx_id_pos, &len);
			ut_ad(len == DATA_TRX_ID_LEN);
			memcpy(ptr, reset_trx_id, sizeof reset_trx_id);
		}
	}

	return(err);
}
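// Illustration only (hedged; not part of this file): the reset_trx_id
// array copied by the memcpy() in adjust_cluster_record() above is
// assumed to be defined in data0type.cc as 6 zero bytes for DB_TRX_ID
// followed by a 7-byte DB_ROLL_PTR whose most significant bit (the
// insert flag, 1 << ROLL_PTR_INSERT_FLAG_POS) is set. Because
// DB_TRX_ID and DB_ROLL_PTR are adjacent in a clustered index record,
// a single memcpy() of sizeof reset_trx_id resets both system columns,
// matching the values that page_zip_write_trx_id_and_roll_ptr() writes
// in the compressed-page branch:
//
//	const byte reset_trx_id[DATA_TRX_ID_LEN + DATA_ROLL_PTR_LEN] = {
//		0, 0, 0, 0, 0, 0,	// DB_TRX_ID = 0
//		0x80, 0, 0, 0, 0, 0, 0	// DB_ROLL_PTR = 1 << 55
//	};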
/** Update the BLOB references and write UNDO log entries for
rows that can't be purged optimistically.
@param block block to update
@retval DB_SUCCESS or error code */
inline
dberr_t
PageConverter::update_records(
	buf_block_t*	block) UNIV_NOTHROW
{
	ibool	comp = dict_table_is_comp(m_cfg->m_table);
	bool	clust_index = m_index->m_srv_index == m_cluster_index;

	/* This will also position the cursor on the first user record. */

	m_rec_iter.open(block);

	if (!page_is_leaf(block->frame)) {
		return DB_SUCCESS;
	}

	while (!m_rec_iter.end()) {
		rec_t*	rec = m_rec_iter.current();
		ibool	deleted = rec_get_deleted_flag(rec, comp);

		/* For the clustered index we have to adjust the BLOB
		reference and the system fields irrespective of the
		delete marked flag. The adjustment of delete marked
		cluster records is required for purge to work later. */

		if (deleted || clust_index) {
			m_offsets = rec_get_offsets(
				rec, m_index->m_srv_index, m_offsets, true,
				ULINT_UNDEFINED, &m_heap);
		}

		if (clust_index) {

			dberr_t err = adjust_cluster_record(
				m_index->m_srv_index, rec, m_offsets);

			if (err != DB_SUCCESS) {
				return(err);
			}
		}

		/* If it is a delete marked record then try an
		optimistic delete. */

		if (deleted) {
			/* A successful purge will move the cursor to the
			next record. */

			if (!purge(m_offsets)) {
				m_rec_iter.next();
			}

			++m_index->m_stats.m_n_deleted;
		} else {
			++m_index->m_stats.m_n_rows;
			m_rec_iter.next();
		}
	}

	return(DB_SUCCESS);
}
/** Update the space, index id, trx id.
@return DB_SUCCESS or error code */
inline
dberr_t
PageConverter::update_index_page(
	buf_block_t*	block) UNIV_NOTHROW
{
	index_id_t	id;
	buf_frame_t*	page = block->frame;

	if (is_free(block->page.id.page_no())) {
		return(DB_SUCCESS);
	} else if ((id = btr_page_get_index_id(page)) != m_index->m_id) {

		row_index_t*	index = find_index(id);

		if (index == 0) {
			ib::error() << "Page for tablespace " << m_space
				<< " is index page with id " << id
				<< " but that index is not found from"
				<< " configuration file. Current index name "
				<< m_index->m_name << " and id "
				<< m_index->m_id;
			m_index = 0;
			return(DB_CORRUPTION);
		}

		/* Update current index */
		m_index = index;
	}

	/* If the .cfg file is missing and there is an index mismatch
	then ignore the error. */
	if (m_cfg->m_missing && (m_index == 0 || m_index->m_srv_index == 0)) {
		return(DB_SUCCESS);
	}

#ifdef UNIV_ZIP_DEBUG
	ut_a(!is_compressed_table()
	     || page_zip_validate(m_page_zip_ptr, page, m_index->m_srv_index));
#endif /* UNIV_ZIP_DEBUG */

	/* This has to be written to uncompressed index header. Set it to
	the current index id. */
	btr_page_set_index_id(
		page, m_page_zip_ptr, m_index->m_srv_index->id, 0);

	if (dict_index_is_clust(m_index->m_srv_index)) {
		if (page_is_root(page)) {
			/* Preserve the PAGE_ROOT_AUTO_INC. */
			if (m_index->m_srv_index->table->supports_instant()
			    && btr_cur_instant_root_init(
				    const_cast<dict_index_t*>(
					    m_index->m_srv_index),
				    page)) {
				return(DB_CORRUPTION);
			}
		} else {
			/* Clear PAGE_MAX_TRX_ID so that it can be
			used for other purposes in the future. IMPORT
			in MySQL 5.6, 5.7 and MariaDB 10.0 and 10.1
			would set the field to the transaction ID even
			on clustered index pages. */
			page_set_max_trx_id(block, m_page_zip_ptr, 0, NULL);
		}
	} else {
		/* Set PAGE_MAX_TRX_ID on secondary index leaf pages,
		and clear it on non-leaf pages. */
		page_set_max_trx_id(block, m_page_zip_ptr,
				    page_is_leaf(page) ? m_trx->id : 0, NULL);
	}

	if (page_is_empty(page)) {

		/* Only a root page can be empty. */
		if (!page_is_root(page)) {
			// TODO: We should relax this and skip secondary
			// indexes. Mark them as corrupt because they can
			// always be rebuilt.
			return(DB_CORRUPTION);
		}

		return(DB_SUCCESS);
	}

	return(update_records(block));
}
/** Validate the space flags and update tablespace header page.
@param block block read from file, not from the buffer pool.
@retval DB_SUCCESS or error code */
inline
dberr_t
PageConverter::update_header(
	buf_block_t*	block) UNIV_NOTHROW
{
	/* Check for valid header */
	switch (fsp_header_get_space_id(get_frame(block))) {
	case 0:
		return(DB_CORRUPTION);
	case ULINT_UNDEFINED:
		ib::warn() << "Space id check in the header failed: ignored";
	}

	mach_write_to_8(
		get_frame(block) + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION,
		m_current_lsn);

	/* Write back the adjusted flags. */
	mach_write_to_4(FSP_HEADER_OFFSET + FSP_SPACE_FLAGS
			+ get_frame(block), m_space_flags);

	/* Write space_id to the tablespace header, page 0. */
	mach_write_to_4(
		get_frame(block) + FSP_HEADER_OFFSET + FSP_SPACE_ID,
		get_space_id());

	/* This is on every page in the tablespace. */
	mach_write_to_4(
		get_frame(block) + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID,
		get_space_id());

	return(DB_SUCCESS);
}
/** Update the page, set the space id, max trx id and index id.
@param block block read from file
@retval DB_SUCCESS or error code */
inline
dberr_t
PageConverter::update_page(
	buf_block_t*	block,
	ulint&		page_type) UNIV_NOTHROW
{
	dberr_t		err = DB_SUCCESS;

	ut_ad(!block->page.zip.data == !is_compressed_table());

	if (block->page.zip.data) {
		m_page_zip_ptr = &block->page.zip;
	} else {
		ut_ad(!m_page_zip_ptr);
	}

	switch (page_type = fil_page_get_type(get_frame(block))) {
	case FIL_PAGE_TYPE_FSP_HDR:
		ut_a(block->page.id.page_no() == 0);
		/* Work directly on the uncompressed page headers. */
		return(update_header(block));

	case FIL_PAGE_INDEX:
	case FIL_PAGE_RTREE:
		/* We need to decompress the contents into block->frame
		before we can do anything with Btree pages. */

		if (is_compressed_table() && !buf_zip_decompress(block, TRUE)) {
			return(DB_CORRUPTION);
		}
		/* fall through */
	case FIL_PAGE_TYPE_INSTANT:
		/* This is on every page in the tablespace. */
		mach_write_to_4(
			get_frame(block)
			+ FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID, get_space_id());

		/* Only update the Btree nodes. */
		return(update_index_page(block));

	case FIL_PAGE_TYPE_SYS:
		/* This is page 0 in the system tablespace. */
		return(DB_CORRUPTION);

	case FIL_PAGE_TYPE_XDES:
		err = set_current_xdes(
			block->page.id.page_no(), get_frame(block));
		/* fall through */
	case FIL_PAGE_INODE:
	case FIL_PAGE_TYPE_TRX_SYS:
	case FIL_PAGE_IBUF_FREE_LIST:
	case FIL_PAGE_TYPE_ALLOCATED:
	case FIL_PAGE_IBUF_BITMAP:
	case FIL_PAGE_TYPE_BLOB:
	case FIL_PAGE_TYPE_ZBLOB:
	case FIL_PAGE_TYPE_ZBLOB2:

		/* Work directly on the uncompressed page headers. */
		/* This is on every page in the tablespace. */
		mach_write_to_4(
			get_frame(block)
			+ FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID, get_space_id());

		return(err);
	}

	ib::warn() << "Unknown page type (" << page_type << ")";

	return(DB_CORRUPTION);
}
/** Called for every page in the tablespace. If the page was not
updated then its state must be set to BUF_PAGE_NOT_USED.
@param block block read from file, note it is not from the buffer pool
@retval DB_SUCCESS or error code. */
dberr_t
PageConverter::operator() (os_offset_t, buf_block_t* block) UNIV_NOTHROW
{
	/* If we already had an old page with matching number
	in the buffer pool, evict it now, because
	we no longer evict the pages on DISCARD TABLESPACE. */
	buf_page_get_gen(block->page.id, get_page_size(),
			 RW_NO_LATCH, NULL, BUF_EVICT_IF_IN_POOL,
			 __FILE__, __LINE__, NULL, NULL);

	ulint	page_type;

	dberr_t	err = update_page(block, page_type);
	if (err != DB_SUCCESS) return err;

	if (!block->page.zip.data) {
		buf_flush_init_for_writing(
			NULL, block->frame, NULL, m_current_lsn);
	} else if (fil_page_type_is_index(page_type)) {
		buf_flush_init_for_writing(
			NULL, block->page.zip.data, &block->page.zip,
			m_current_lsn);
	} else {
		/* Calculate and update the checksum of non-index
		pages for ROW_FORMAT=COMPRESSED tables. */
		buf_flush_update_zip_checksum(
			block->page.zip.data, get_page_size().physical(),
			m_current_lsn);
	}

	return DB_SUCCESS;
}
/*****************************************************************//**
Clean up after import tablespace failure. This function will acquire
the dictionary latches on behalf of the transaction if the transaction
hasn't already acquired them. */
static	MY_ATTRIBUTE((nonnull))
void
row_import_discard_changes(
/*=======================*/
	row_prebuilt_t*	prebuilt,	/*!< in/out: prebuilt from handler */
	trx_t*		trx,		/*!< in/out: transaction for import */
	dberr_t		err)		/*!< in: error code */
{
	dict_table_t*	table = prebuilt->table;

	ut_a(err != DB_SUCCESS);

	prebuilt->trx->error_info = NULL;

	ib::info() << "Discarding tablespace of table "
		<< prebuilt->table->name
		<< ": " << ut_strerr(err);

	if (trx->dict_operation_lock_mode != RW_X_LATCH) {
		ut_a(trx->dict_operation_lock_mode == 0);
		row_mysql_lock_data_dictionary(trx);
	}

	ut_a(trx->dict_operation_lock_mode == RW_X_LATCH);

	/* Since we only update the index root page numbers on disk
	after a successful import, the table will not be loadable.
	However, we need to ensure that the in memory root page numbers
	are reset to "NULL". */

	for (dict_index_t* index = UT_LIST_GET_FIRST(table->indexes);
	     index != 0;
	     index = UT_LIST_GET_NEXT(indexes, index)) {

		index->page = FIL_NULL;
	}

	table->file_unreadable = true;
	if (table->space) {
		fil_close_tablespace(trx, table->space->id);
		table->space = NULL;
	}
}
/*****************************************************************//**
Clean up after import tablespace. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_cleanup(
/*===============*/
	row_prebuilt_t*	prebuilt,	/*!< in/out: prebuilt from handler */
	trx_t*		trx,		/*!< in/out: transaction for import */
	dberr_t		err)		/*!< in: error code */
{
	ut_a(prebuilt->trx != trx);

	if (err != DB_SUCCESS) {
		row_import_discard_changes(prebuilt, trx, err);
	}

	ut_a(trx->dict_operation_lock_mode == RW_X_LATCH);

	DBUG_EXECUTE_IF("ib_import_before_commit_crash", DBUG_SUICIDE(););

	trx_commit_for_mysql(trx);

	row_mysql_unlock_data_dictionary(trx);

	trx_free(trx);

	prebuilt->trx->op_info = "";

	DBUG_EXECUTE_IF("ib_import_before_checkpoint_crash", DBUG_SUICIDE(););

	log_make_checkpoint_at(LSN_MAX, TRUE);

	return(err);
}
/*****************************************************************//**
Report error during tablespace import. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_error(
/*=============*/
	row_prebuilt_t*	prebuilt,	/*!< in/out: prebuilt from handler */
	trx_t*		trx,		/*!< in/out: transaction for import */
	dberr_t		err)		/*!< in: error code */
{
	if (!trx_is_interrupted(trx)) {
		char	table_name[MAX_FULL_NAME_LEN + 1];

		innobase_format_name(
			table_name, sizeof(table_name),
			prebuilt->table->name.m_name);

		ib_senderrf(
			trx->mysql_thd, IB_LOG_LEVEL_WARN,
			ER_INNODB_IMPORT_ERROR,
			table_name, (ulong) err, ut_strerr(err));
	}

	return(row_import_cleanup(prebuilt, trx, err));
}
/*****************************************************************//**
Adjust the root page index node and leaf node segment headers, update
with the new space id. For all the table's secondary indexes.
@return error code */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_adjust_root_pages_of_secondary_indexes(
/*==============================================*/
	row_prebuilt_t*		prebuilt,	/*!< in/out: prebuilt from
						handler */
	trx_t*			trx,		/*!< in: transaction used for
						the import */
	dict_table_t*		table,		/*!< in: table the indexes
						belong to */
	const row_import&	cfg)		/*!< Import context */
{
	dict_index_t*		index;
	ulint			n_rows_in_table;
	dberr_t			err = DB_SUCCESS;

	/* Skip the clustered index. */
	index = dict_table_get_first_index(table);

	n_rows_in_table = cfg.get_n_rows(index->name);

	DBUG_EXECUTE_IF("ib_import_sec_rec_count_mismatch_failure",
			n_rows_in_table++;);

	/* Adjust the root pages of the secondary indexes only. */
	while ((index = dict_table_get_next_index(index)) != NULL) {
		ut_a(!dict_index_is_clust(index));

		if (!(index->type & DICT_CORRUPT)
		    && index->page != FIL_NULL) {

			/* Update the Btree segment headers for index node and
			leaf nodes in the root page. Set the new space id. */

			err = btr_root_adjust_on_import(index);
		} else {
			ib::warn() << "Skip adjustment of root pages for"
				" index " << index->name << ".";

			err = DB_CORRUPTION;
		}

		if (err != DB_SUCCESS) {

			if (index->type & DICT_CLUSTERED) {
				break;
			}

			ib_errf(trx->mysql_thd,
				IB_LOG_LEVEL_WARN,
				ER_INNODB_INDEX_CORRUPT,
				"Index %s not found or corrupt,"
				" you should recreate this index.",
				index->name());

			/* Do not bail out, so that the data
			can be recovered. */

			err = DB_SUCCESS;
			index->type |= DICT_CORRUPT;
			continue;
		}

		/* If we failed to purge any records in the index then
		do it the hard way.

		TODO: We can do this in the first pass by generating UNDO log
		records for the failed rows. */

		if (!cfg.requires_purge(index->name)) {
			continue;
		}

		IndexPurge	purge(trx, index);

		trx->op_info = "secondary: purge delete marked records";

		err = purge.garbage_collect();

		trx->op_info = "";

		if (err != DB_SUCCESS) {
			break;
		} else if (purge.get_n_rows() != n_rows_in_table) {

			ib_errf(trx->mysql_thd,
				IB_LOG_LEVEL_WARN,
				ER_INNODB_INDEX_CORRUPT,
				"Index '%s' contains " ULINTPF " entries, "
				"should be " ULINTPF ", you should recreate "
				"this index.", index->name(),
				purge.get_n_rows(), n_rows_in_table);

			index->type |= DICT_CORRUPT;

			/* Do not bail out, so that the data
			can be recovered. */

			err = DB_SUCCESS;
		}
	}

	return(err);
}
  1744. /*****************************************************************//**
  1745. Ensure that dict_sys->row_id exceeds SELECT MAX(DB_ROW_ID).
  1746. @return error code */
  1747. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  1748. dberr_t
  1749. row_import_set_sys_max_row_id(
  1750. /*==========================*/
  1751. row_prebuilt_t* prebuilt, /*!< in/out: prebuilt from
  1752. handler */
  1753. const dict_table_t* table) /*!< in: table to import */
  1754. {
  1755. dberr_t err;
  1756. const rec_t* rec;
  1757. mtr_t mtr;
	btr_pcur_t	pcur;
	row_id_t	row_id	= 0;
	dict_index_t*	index;

	index = dict_table_get_first_index(table);
	ut_a(dict_index_is_clust(index));

	mtr_start(&mtr);

	mtr_set_log_mode(&mtr, MTR_LOG_NO_REDO);

	btr_pcur_open_at_index_side(
		false,		// High end
		index,
		BTR_SEARCH_LEAF,
		&pcur,
		true,		// Init cursor
		0,		// Leaf level
		&mtr);

	btr_pcur_move_to_prev_on_page(&pcur);
	rec = btr_pcur_get_rec(&pcur);

	/* Check for empty table. */
	if (page_rec_is_infimum(rec)) {
		/* The table is empty. */
		err = DB_SUCCESS;
	} else if (rec_is_default_row(rec, index)) {
		/* The clustered index contains the 'default row',
		that is, the table is empty. */
		err = DB_SUCCESS;
	} else {
		ulint		len;
		const byte*	field;
		mem_heap_t*	heap = NULL;
		ulint		offsets_[1 + REC_OFFS_HEADER_SIZE];
		ulint*		offsets;

		rec_offs_init(offsets_);

		offsets = rec_get_offsets(
			rec, index, offsets_, true, ULINT_UNDEFINED, &heap);

		field = rec_get_nth_field(
			rec, offsets,
			dict_index_get_sys_col_pos(index, DATA_ROW_ID),
			&len);

		if (len == DATA_ROW_ID_LEN) {
			row_id = mach_read_from_6(field);
			err = DB_SUCCESS;
		} else {
			err = DB_CORRUPTION;
		}

		if (heap != NULL) {
			mem_heap_free(heap);
		}
	}

	btr_pcur_close(&pcur);
	mtr_commit(&mtr);

	DBUG_EXECUTE_IF("ib_import_set_max_rowid_failure",
			err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		ib_errf(prebuilt->trx->mysql_thd,
			IB_LOG_LEVEL_WARN,
			ER_INNODB_INDEX_CORRUPT,
			"Index `%s` corruption detected, invalid DB_ROW_ID"
			" in index.", index->name());

		return(err);

	} else if (row_id > 0) {
		/* Update the system row id if the imported index row id is
		greater than the max system row id. */

		mutex_enter(&dict_sys->mutex);

		if (row_id >= dict_sys->row_id) {
			dict_sys->row_id = row_id + 1;
			dict_hdr_flush_row_id();
		}

		mutex_exit(&dict_sys->mutex);
	}

	return(DB_SUCCESS);
}
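The row-id bump at the end of the function above advances the global counter only when the imported table contains a DB_ROW_ID at or beyond the current value. A minimal standalone sketch of that rule (the function name `bump_row_id` is invented for illustration; it is not part of InnoDB):

```cpp
#include <cstdint>

// Sketch of the max-row-id update: the system counter moves forward
// only when the imported maximum reaches or passes it, so that the
// next DB_ROW_ID handed out is strictly greater than any imported one.
static uint64_t bump_row_id(uint64_t sys_row_id, uint64_t imported_max)
{
	if (imported_max >= sys_row_id) {
		return imported_max + 1;	// next DB_ROW_ID to assign
	}
	return sys_row_id;			// counter already ahead
}
```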
/*****************************************************************//**
Read a string from the meta data file.
@return DB_SUCCESS or error code. */
static
dberr_t
row_import_cfg_read_string(
/*=======================*/
	FILE*		file,		/*!< in/out: file to read from */
	byte*		ptr,		/*!< out: string to read */
	ulint		max_len)	/*!< in: maximum length of the output
					buffer in bytes */
{
	DBUG_EXECUTE_IF("ib_import_string_read_error",
			errno = EINVAL; return(DB_IO_ERROR););

	ulint		len = 0;

	while (!feof(file)) {
		int	ch = fgetc(file);

		if (ch == EOF) {
			break;
		} else if (ch != 0) {
			if (len < max_len) {
				ptr[len++] = ch;
			} else {
				break;
			}
		/* max_len includes the NUL byte */
		} else if (len != max_len - 1) {
			break;
		} else {
			ptr[len] = 0;
			return(DB_SUCCESS);
		}
	}

	errno = EINVAL;

	return(DB_IO_ERROR);
}
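The reader above only succeeds when the NUL terminator arrives exactly where the declared length (which includes the NUL) says it should. A self-contained sketch of the same contract, with an invented name `read_cfg_string` standing in for the InnoDB function:

```cpp
#include <cstdio>
#include <cstring>

// Standalone analogue of row_import_cfg_read_string(): read bytes
// until a NUL; succeed only when exactly max_len - 1 bytes precede
// the terminator (the declared length counts the NUL byte itself).
static bool read_cfg_string(FILE* file, char* out, size_t max_len)
{
	size_t	len = 0;
	int	ch;

	while ((ch = fgetc(file)) != EOF) {
		if (ch != 0) {
			if (len < max_len) {
				out[len++] = (char) ch;
			} else {
				break;	// overlong: no room for the NUL
			}
		} else if (len == max_len - 1) {
			out[len] = 0;	// terminator exactly where expected
			return true;
		} else {
			break;		// NUL arrived too early
		}
	}

	return false;
}
```

A string that is shorter or longer than its declared length is rejected, which is how the import code detects a corrupt .cfg file early.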
/*********************************************************************//**
Read the meta data (index user fields) config file.
@return DB_SUCCESS or error code. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_cfg_read_index_fields(
/*=============================*/
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in/out: session */
	row_index_t*	index,	/*!< in/out: index being read in */
	row_import*	cfg)	/*!< in/out: meta-data read */
{
	byte		row[sizeof(ib_uint32_t) * 3];
	ulint		n_fields = index->m_n_fields;

	index->m_fields = UT_NEW_ARRAY_NOKEY(dict_field_t, n_fields);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_4",
		UT_DELETE_ARRAY(index->m_fields);
		index->m_fields = NULL;
	);

	if (index->m_fields == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	dict_field_t*	field = index->m_fields;

	memset(field, 0x0, sizeof(*field) * n_fields);

	for (ulint i = 0; i < n_fields; ++i, ++field) {
		byte*		ptr = row;

		/* Trigger EOF */
		DBUG_EXECUTE_IF("ib_import_io_read_error_1",
				(void) fseek(file, 0L, SEEK_END););

		if (fread(row, 1, sizeof(row), file) != sizeof(row)) {

			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno),
				"while reading index fields.");

			return(DB_IO_ERROR);
		}

		field->prefix_len = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		field->fixed_len = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		/* Include the NUL byte in the length. */
		ulint	len = mach_read_from_4(ptr);

		byte*	name = UT_NEW_ARRAY_NOKEY(byte, len);

		/* Trigger OOM */
		DBUG_EXECUTE_IF(
			"ib_import_OOM_5",
			UT_DELETE_ARRAY(name);
			name = NULL;
		);

		if (name == NULL) {
			return(DB_OUT_OF_MEMORY);
		}

		field->name = reinterpret_cast<const char*>(name);

		dberr_t	err = row_import_cfg_read_string(file, name, len);

		if (err != DB_SUCCESS) {

			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno),
				"while parsing index field name.");

			return(err);
		}
	}

	return(DB_SUCCESS);
}
/*****************************************************************//**
Read the index names and root page numbers of the indexes and set the values.
Row format [root_page_no, len of str, str ... ]
@return DB_SUCCESS or error code. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_read_index_data(
/*=======================*/
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in: session */
	row_import*	cfg)	/*!< in/out: meta-data read */
{
	byte*		ptr;
	row_index_t*	cfg_index;
	byte		row[sizeof(index_id_t) + sizeof(ib_uint32_t) * 9];

	/* FIXME: What is the max value? */
	ut_a(cfg->m_n_indexes > 0);
	ut_a(cfg->m_n_indexes < 1024);

	cfg->m_indexes = UT_NEW_ARRAY_NOKEY(row_index_t, cfg->m_n_indexes);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_6",
		UT_DELETE_ARRAY(cfg->m_indexes);
		cfg->m_indexes = NULL;
	);

	if (cfg->m_indexes == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	memset(cfg->m_indexes, 0x0, sizeof(*cfg->m_indexes) * cfg->m_n_indexes);

	cfg_index = cfg->m_indexes;

	for (ulint i = 0; i < cfg->m_n_indexes; ++i, ++cfg_index) {
		/* Trigger EOF */
		DBUG_EXECUTE_IF("ib_import_io_read_error_2",
				(void) fseek(file, 0L, SEEK_END););

		/* Read the index data. */
		size_t	n_bytes = fread(row, 1, sizeof(row), file);

		/* Trigger EOF */
		DBUG_EXECUTE_IF("ib_import_io_read_error",
				(void) fseek(file, 0L, SEEK_END););

		if (n_bytes != sizeof(row)) {
			char	msg[BUFSIZ];

			snprintf(msg, sizeof(msg),
				 "while reading index meta-data, expected"
				 " to read " ULINTPF
				 " bytes but read only " ULINTPF " bytes",
				 sizeof(row), n_bytes);

			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno), msg);

			ib::error() << "IO Error: " << msg;

			return(DB_IO_ERROR);
		}

		ptr = row;

		cfg_index->m_id = mach_read_from_8(ptr);
		ptr += sizeof(index_id_t);

		cfg_index->m_space = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_page_no = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_type = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_trx_id_offset = mach_read_from_4(ptr);
		if (cfg_index->m_trx_id_offset != mach_read_from_4(ptr)) {
			ut_ad(0);
			/* Overflow. Pretend that the clustered index
			has a variable-length PRIMARY KEY. */
			cfg_index->m_trx_id_offset = 0;
		}
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_n_user_defined_cols = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_n_uniq = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_n_nullable = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		cfg_index->m_n_fields = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		/* The NUL byte is included in the name length. */
		ulint	len = mach_read_from_4(ptr);

		if (len > OS_FILE_MAX_PATH) {
			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_INNODB_INDEX_CORRUPT,
				"Index name length (" ULINTPF ") is too long, "
				"the meta-data is corrupt", len);

			return(DB_CORRUPTION);
		}

		cfg_index->m_name = UT_NEW_ARRAY_NOKEY(byte, len);

		/* Trigger OOM */
		DBUG_EXECUTE_IF(
			"ib_import_OOM_7",
			UT_DELETE_ARRAY(cfg_index->m_name);
			cfg_index->m_name = NULL;
		);

		if (cfg_index->m_name == NULL) {
			return(DB_OUT_OF_MEMORY);
		}

		dberr_t	err;

		err = row_import_cfg_read_string(file, cfg_index->m_name, len);

		if (err != DB_SUCCESS) {

			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno),
				"while parsing index name.");

			return(err);
		}

		err = row_import_cfg_read_index_fields(
			file, thd, cfg_index, cfg);

		if (err != DB_SUCCESS) {
			return(err);
		}
	}

	return(DB_SUCCESS);
}
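All of the fixed-size fields above are stored big-endian, decoded with InnoDB's `mach_read_from_4()` and `mach_read_from_8()`. A minimal sketch of those readers (the names `be_read_4`/`be_read_8` are invented here; they are not the real declarations):

```cpp
#include <cstdint>

// Big-endian readers mirroring mach_read_from_4()/mach_read_from_8():
// the most significant byte is stored first in the .cfg file.
static uint32_t be_read_4(const unsigned char* p)
{
	return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
	     | uint32_t(p[2]) << 8 | uint32_t(p[3]);
}

static uint64_t be_read_8(const unsigned char* p)
{
	// An 8-byte value is simply two big-endian 4-byte halves.
	return uint64_t(be_read_4(p)) << 32 | be_read_4(p + 4);
}
```

Decoding the index record is then a matter of walking a pointer across the buffer in `sizeof(ib_uint32_t)` steps, exactly as the loop above does.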
/*****************************************************************//**
Set the index root page number for v1 format.
@return DB_SUCCESS or error code. */
static
dberr_t
row_import_read_indexes(
/*====================*/
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in: session */
	row_import*	cfg)	/*!< in/out: meta-data read */
{
	byte		row[sizeof(ib_uint32_t)];

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_3",
			(void) fseek(file, 0L, SEEK_END););

	/* Read the number of indexes. */
	if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading number of indexes.");

		return(DB_IO_ERROR);
	}

	cfg->m_n_indexes = mach_read_from_4(row);

	if (cfg->m_n_indexes == 0) {
		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			"Number of indexes in meta-data file is 0");

		return(DB_CORRUPTION);

	} else if (cfg->m_n_indexes > 1024) {
		/* FIXME: What is the upper limit? */
		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			"Number of indexes in meta-data file is too high: "
			ULINTPF, cfg->m_n_indexes);
		cfg->m_n_indexes = 0;

		return(DB_CORRUPTION);
	}

	return(row_import_read_index_data(file, thd, cfg));
}
/*********************************************************************//**
Read the meta data (table columns) config file. Deserialise the contents of
dict_col_t structure, along with the column name. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_read_columns(
/*====================*/
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in/out: session */
	row_import*	cfg)	/*!< in/out: meta-data read */
{
	dict_col_t*	col;
	byte		row[sizeof(ib_uint32_t) * 8];

	/* FIXME: What should the upper limit be? */
	ut_a(cfg->m_n_cols > 0);
	ut_a(cfg->m_n_cols < 1024);

	cfg->m_cols = UT_NEW_ARRAY_NOKEY(dict_col_t, cfg->m_n_cols);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_8",
		UT_DELETE_ARRAY(cfg->m_cols);
		cfg->m_cols = NULL;
	);

	if (cfg->m_cols == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	cfg->m_col_names = UT_NEW_ARRAY_NOKEY(byte*, cfg->m_n_cols);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_9",
		UT_DELETE_ARRAY(cfg->m_col_names);
		cfg->m_col_names = NULL;
	);

	if (cfg->m_col_names == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	memset(cfg->m_cols, 0x0, sizeof(*cfg->m_cols) * cfg->m_n_cols);
	memset(cfg->m_col_names, 0x0,
	       sizeof(*cfg->m_col_names) * cfg->m_n_cols);

	col = cfg->m_cols;

	for (ulint i = 0; i < cfg->m_n_cols; ++i, ++col) {
		byte*		ptr = row;

		/* Trigger EOF */
		DBUG_EXECUTE_IF("ib_import_io_read_error_4",
				(void) fseek(file, 0L, SEEK_END););

		if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno),
				"while reading table column meta-data.");

			return(DB_IO_ERROR);
		}

		col->prtype = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		col->mtype = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		col->len = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		ulint	mbminmaxlen = mach_read_from_4(ptr);
		col->mbmaxlen = mbminmaxlen / 5;
		col->mbminlen = mbminmaxlen % 5;
		ptr += sizeof(ib_uint32_t);

		col->ind = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		col->ord_part = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		col->max_prefix = mach_read_from_4(ptr);
		ptr += sizeof(ib_uint32_t);

		/* Read in the column name as [len, byte array]. The len
		includes the NUL byte. */
		ulint	len = mach_read_from_4(ptr);

		/* FIXME: What is the maximum column name length? */
		if (len == 0 || len > 128) {
			ib_errf(thd, IB_LOG_LEVEL_ERROR,
				ER_IO_READ_ERROR,
				"Column name length " ULINTPF ", is invalid",
				len);

			return(DB_CORRUPTION);
		}

		cfg->m_col_names[i] = UT_NEW_ARRAY_NOKEY(byte, len);

		/* Trigger OOM */
		DBUG_EXECUTE_IF(
			"ib_import_OOM_10",
			UT_DELETE_ARRAY(cfg->m_col_names[i]);
			cfg->m_col_names[i] = NULL;
		);

		if (cfg->m_col_names[i] == NULL) {
			return(DB_OUT_OF_MEMORY);
		}

		dberr_t	err;

		err = row_import_cfg_read_string(
			file, cfg->m_col_names[i], len);

		if (err != DB_SUCCESS) {

			ib_senderrf(
				thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
				(ulong) errno, strerror(errno),
				"while parsing table column name.");

			return(err);
		}
	}

	return(DB_SUCCESS);
}
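The `mbminmaxlen` field read above packs the character-set byte widths of a column into a single value, decoded with `/ 5` and `% 5`. A minimal sketch of the pair encoding as used in that loop (the helper names `mb_encode`/`mb_decode` are invented for illustration):

```cpp
#include <cstdint>

// The .cfg file stores the column's multi-byte width pair as one
// value: encoded = mbmaxlen * 5 + mbminlen. Since mbminlen < 5,
// the pair round-trips losslessly.
static uint32_t mb_encode(uint32_t mbmaxlen, uint32_t mbminlen)
{
	return mbmaxlen * 5 + mbminlen;
}

static void mb_decode(uint32_t v, uint32_t* mbmaxlen, uint32_t* mbminlen)
{
	*mbmaxlen = v / 5;	// as decoded in row_import_read_columns()
	*mbminlen = v % 5;
}
```

For example, a utf8mb4 column (1 to 4 bytes per character) would be stored as `4 * 5 + 1 = 21`.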
/*****************************************************************//**
Read the contents of the <tablespace>.cfg file.
@return DB_SUCCESS or error code. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_read_v1(
/*===============*/
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in: session */
	row_import*	cfg)	/*!< out: meta data */
{
	byte		value[sizeof(ib_uint32_t)];

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_5",
			(void) fseek(file, 0L, SEEK_END););

	/* Read the hostname where the tablespace was exported. */
	if (fread(value, 1, sizeof(value), file) != sizeof(value)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading meta-data export hostname length.");

		return(DB_IO_ERROR);
	}

	ulint	len = mach_read_from_4(value);

	/* NUL byte is part of name length. */
	cfg->m_hostname = UT_NEW_ARRAY_NOKEY(byte, len);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_1",
		UT_DELETE_ARRAY(cfg->m_hostname);
		cfg->m_hostname = NULL;
	);

	if (cfg->m_hostname == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	dberr_t	err = row_import_cfg_read_string(file, cfg->m_hostname, len);

	if (err != DB_SUCCESS) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while parsing export hostname.");

		return(err);
	}

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_6",
			(void) fseek(file, 0L, SEEK_END););

	/* Read the table name of the tablespace that was exported. */
	if (fread(value, 1, sizeof(value), file) != sizeof(value)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading meta-data table name length.");

		return(DB_IO_ERROR);
	}

	len = mach_read_from_4(value);

	/* NUL byte is part of name length. */
	cfg->m_table_name = UT_NEW_ARRAY_NOKEY(byte, len);

	/* Trigger OOM */
	DBUG_EXECUTE_IF(
		"ib_import_OOM_2",
		UT_DELETE_ARRAY(cfg->m_table_name);
		cfg->m_table_name = NULL;
	);

	if (cfg->m_table_name == NULL) {
		return(DB_OUT_OF_MEMORY);
	}

	err = row_import_cfg_read_string(file, cfg->m_table_name, len);

	if (err != DB_SUCCESS) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while parsing table name.");

		return(err);
	}

	ib::info() << "Importing tablespace for table '" << cfg->m_table_name
		<< "' that was exported from host '" << cfg->m_hostname << "'";

	byte	row[sizeof(ib_uint32_t) * 3];

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_7",
			(void) fseek(file, 0L, SEEK_END););

	/* Read the autoinc value. */
	if (fread(row, 1, sizeof(ib_uint64_t), file) != sizeof(ib_uint64_t)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading autoinc value.");

		return(DB_IO_ERROR);
	}

	cfg->m_autoinc = mach_read_from_8(row);

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_8",
			(void) fseek(file, 0L, SEEK_END););

	/* Read the tablespace page size. */
	if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading meta-data header.");

		return(DB_IO_ERROR);
	}

	byte*		ptr = row;

	const ulint	logical_page_size = mach_read_from_4(ptr);
	ptr += sizeof(ib_uint32_t);

	if (logical_page_size != srv_page_size) {

		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
			"Tablespace to be imported has a different"
			" page size than this server. Server page size"
			" is %lu, whereas tablespace page size"
			" is " ULINTPF,
			srv_page_size,
			logical_page_size);

		return(DB_ERROR);
	}

	cfg->m_flags = mach_read_from_4(ptr);
	ptr += sizeof(ib_uint32_t);

	cfg->m_page_size.copy_from(dict_tf_get_page_size(cfg->m_flags));

	ut_a(logical_page_size == cfg->m_page_size.logical());

	cfg->m_n_cols = mach_read_from_4(ptr);

	if (!dict_tf_is_valid(cfg->m_flags)) {
		ib_errf(thd, IB_LOG_LEVEL_ERROR,
			ER_TABLE_SCHEMA_MISMATCH,
			"Invalid table flags: " ULINTPF, cfg->m_flags);

		return(DB_CORRUPTION);
	}

	err = row_import_read_columns(file, thd, cfg);

	if (err == DB_SUCCESS) {
		err = row_import_read_indexes(file, thd, cfg);
	}

	return(err);
}
/**
Read the contents of the <tablespace>.cfg file.
@return DB_SUCCESS or error code. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_read_meta_data(
/*======================*/
	dict_table_t*	table,	/*!< in: table */
	FILE*		file,	/*!< in: file to read from */
	THD*		thd,	/*!< in: session */
	row_import&	cfg)	/*!< out: contents of the .cfg file */
{
	byte		row[sizeof(ib_uint32_t)];

	/* Trigger EOF */
	DBUG_EXECUTE_IF("ib_import_io_read_error_9",
			(void) fseek(file, 0L, SEEK_END););

	if (fread(&row, 1, sizeof(row), file) != sizeof(row)) {
		ib_senderrf(
			thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno),
			"while reading meta-data version.");

		return(DB_IO_ERROR);
	}

	cfg.m_version = mach_read_from_4(row);

	/* Check the version number. */
	switch (cfg.m_version) {
	case IB_EXPORT_CFG_VERSION_V1:
		return(row_import_read_v1(file, thd, &cfg));
	default:
		ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
			"Unsupported meta-data version number (" ULINTPF "), "
			"file ignored", cfg.m_version);
	}

	return(DB_ERROR);
}
/**
Read the contents of the <tablename>.cfg file.
@return DB_SUCCESS or error code. */
static	MY_ATTRIBUTE((nonnull, warn_unused_result))
dberr_t
row_import_read_cfg(
/*================*/
	dict_table_t*	table,	/*!< in: table */
	THD*		thd,	/*!< in: session */
	row_import&	cfg)	/*!< out: contents of the .cfg file */
{
	dberr_t		err;
	char		name[OS_FILE_MAX_PATH];

	cfg.m_table = table;

	srv_get_meta_data_filename(table, name, sizeof(name));

	FILE*	file = fopen(name, "rb");

	if (file == NULL) {
		char	msg[BUFSIZ];

		snprintf(msg, sizeof(msg),
			 "Error opening '%s', will attempt to import"
			 " without schema verification", name);

		ib_senderrf(
			thd, IB_LOG_LEVEL_WARN, ER_IO_READ_ERROR,
			(ulong) errno, strerror(errno), msg);

		cfg.m_missing = true;

		err = DB_FAIL;
	} else {

		cfg.m_missing = false;

		err = row_import_read_meta_data(table, file, thd, cfg);
		fclose(file);
	}

	return(err);
}
/*****************************************************************//**
Update the <space, root page> of a table's indexes from the values
in the data dictionary.
@return DB_SUCCESS or error code */
dberr_t
row_import_update_index_root(
/*=========================*/
	trx_t*			trx,		/*!< in/out: transaction that
						covers the update */
	const dict_table_t*	table,		/*!< in: Table for which we want
						to set the root page_no */
	bool			reset,		/*!< in: if true then set to
						FIL_NULL */
	bool			dict_locked)	/*!< in: set to true if the
						caller already owns the
						dict_sys_t::mutex. */
{
	const dict_index_t*	index;
	que_t*			graph = 0;
	dberr_t			err = DB_SUCCESS;

	ut_ad(reset || table->space->id == table->space_id);

	static const char	sql[] = {
		"PROCEDURE UPDATE_INDEX_ROOT() IS\n"
		"BEGIN\n"
		"UPDATE SYS_INDEXES\n"
		"SET SPACE = :space,\n"
		" PAGE_NO = :page,\n"
		" TYPE = :type\n"
		"WHERE TABLE_ID = :table_id AND ID = :index_id;\n"
		"END;\n"};

	if (!dict_locked) {
		mutex_enter(&dict_sys->mutex);
	}

	for (index = dict_table_get_first_index(table);
	     index != 0;
	     index = dict_table_get_next_index(index)) {

		pars_info_t*	info;
		ib_uint32_t	page;
		ib_uint32_t	space;
		ib_uint32_t	type;
		index_id_t	index_id;
		table_id_t	table_id;

		info = (graph != 0) ? graph->info : pars_info_create();

		mach_write_to_4(
			reinterpret_cast<byte*>(&type),
			index->type);

		mach_write_to_4(
			reinterpret_cast<byte*>(&page),
			reset ? FIL_NULL : index->page);

		mach_write_to_4(
			reinterpret_cast<byte*>(&space),
			reset ? FIL_NULL : index->table->space_id);

		mach_write_to_8(
			reinterpret_cast<byte*>(&index_id),
			index->id);

		mach_write_to_8(
			reinterpret_cast<byte*>(&table_id),
			table->id);

		/* If we set the corrupt bit during the IMPORT phase then
		we need to update the system tables. */
		pars_info_bind_int4_literal(info, "type", &type);
		pars_info_bind_int4_literal(info, "space", &space);
		pars_info_bind_int4_literal(info, "page", &page);
		pars_info_bind_ull_literal(info, "index_id", &index_id);
		pars_info_bind_ull_literal(info, "table_id", &table_id);

		if (graph == 0) {
			graph = pars_sql(info, sql);
			ut_a(graph);
			graph->trx = trx;
		}

		que_thr_t*	thr;

		graph->fork_type = QUE_FORK_MYSQL_INTERFACE;

		ut_a(thr = que_fork_start_command(graph));

		que_run_threads(thr);

		DBUG_EXECUTE_IF("ib_import_internal_error",
				trx->error_state = DB_ERROR;);

		err = trx->error_state;

		if (err != DB_SUCCESS) {
			ib_errf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
				ER_INTERNAL_ERROR,
				"While updating the <space, root page"
				" number> of index %s - %s",
				index->name(), ut_strerr(err));

			break;
		}
	}

	que_graph_free(graph);

	if (!dict_locked) {
		mutex_exit(&dict_sys->mutex);
	}

	return(err);
}
/** Callback arg for row_import_set_discarded. */
struct discard_t {
	ib_uint32_t	flags2;		/*!< Value read from column */
	bool		state;		/*!< New state of the flag */
	ulint		n_recs;		/*!< Number of recs processed */
};

/******************************************************************//**
Fetch callback that sets or unsets the DISCARDED tablespace flag in
SYS_TABLES. The flag is stored in the MIX_LEN column.
@return FALSE if all OK */
static
ibool
row_import_set_discarded(
/*=====================*/
	void*		row,		/*!< in: sel_node_t* */
	void*		user_arg)	/*!< in: bool set/unset flag */
{
	sel_node_t*	node = static_cast<sel_node_t*>(row);
	discard_t*	discard = static_cast<discard_t*>(user_arg);
	dfield_t*	dfield = que_node_get_val(node->select_list);
	dtype_t*	type = dfield_get_type(dfield);
	ulint		len = dfield_get_len(dfield);

	ut_a(dtype_get_mtype(type) == DATA_INT);
	ut_a(len == sizeof(ib_uint32_t));

	ulint	flags2 = mach_read_from_4(
		static_cast<byte*>(dfield_get_data(dfield)));

	if (discard->state) {
		flags2 |= DICT_TF2_DISCARDED;
	} else {
		flags2 &= ~DICT_TF2_DISCARDED;
	}

	mach_write_to_4(reinterpret_cast<byte*>(&discard->flags2), flags2);

	++discard->n_recs;

	/* There should be at most one matching record. */
	ut_a(discard->n_recs == 1);

	return(FALSE);
}

/** Update the DICT_TF2_DISCARDED flag in SYS_TABLES.MIX_LEN.
@param[in,out]	trx		dictionary transaction
@param[in]	table_id	table identifier
@param[in]	discarded	whether to set or clear the flag
@return DB_SUCCESS or error code */
dberr_t row_import_update_discarded_flag(trx_t* trx, table_id_t table_id,
					 bool discarded)
{
	pars_info_t*		info;
	discard_t		discard;

	static const char	sql[] =
		"PROCEDURE UPDATE_DISCARDED_FLAG() IS\n"
		"DECLARE FUNCTION my_func;\n"
		"DECLARE CURSOR c IS\n"
		" SELECT MIX_LEN"
		" FROM SYS_TABLES"
		" WHERE ID = :table_id FOR UPDATE;"
		"\n"
		"BEGIN\n"
		"OPEN c;\n"
		"WHILE 1 = 1 LOOP\n"
		" FETCH c INTO my_func();\n"
		" IF c % NOTFOUND THEN\n"
		" EXIT;\n"
		" END IF;\n"
		"END LOOP;\n"
		"UPDATE SYS_TABLES"
		" SET MIX_LEN = :flags2"
		" WHERE ID = :table_id;\n"
		"CLOSE c;\n"
		"END;\n";

	discard.n_recs = 0;
	discard.state = discarded;
	discard.flags2 = ULINT32_UNDEFINED;

	info = pars_info_create();

	pars_info_add_ull_literal(info, "table_id", table_id);
	pars_info_bind_int4_literal(info, "flags2", &discard.flags2);

	pars_info_bind_function(
		info, "my_func", row_import_set_discarded, &discard);

	dberr_t	err = que_eval_sql(info, sql, false, trx);

	ut_a(discard.n_recs == 1);
	ut_a(discard.flags2 != ULINT32_UNDEFINED);

	return(err);
}
struct fil_iterator_t {
	pfs_os_file_t	file;			/*!< File handle */
	const char*	filepath;		/*!< File path name */
	os_offset_t	start;			/*!< From where to start */
	os_offset_t	end;			/*!< Where to stop */
	os_offset_t	file_size;		/*!< File size in bytes */
	ulint		n_io_buffers;		/*!< Number of pages to use
						for IO */
	byte*		io_buffer;		/*!< Buffer to use for IO */
	fil_space_crypt_t* crypt_data;		/*!< Crypt data (if encrypted) */
	byte*		crypt_io_buffer;	/*!< IO buffer when encrypted */
};
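The iterator walks the range `[start, end)` in batches of `n_io_buffers` pages, clamping the final batch to the bytes remaining. A standalone sketch of that chunk arithmetic (the helper `chunk_sizes` is invented for illustration and simply returns the batch sizes the loop would use):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the batching in fil_iterate(): each pass reads
// n_io_buffers pages, except the last, which is clamped so that the
// read never extends past `end`. Every batch stays a whole number
// of pages, as the iterator's assertions require.
static std::vector<uint64_t> chunk_sizes(uint64_t start, uint64_t end,
					 uint64_t page_size,
					 uint64_t n_io_buffers)
{
	std::vector<uint64_t>	sizes;
	uint64_t		n_bytes = n_io_buffers * page_size;

	for (uint64_t offset = start; offset < end; offset += n_bytes) {
		n_bytes = std::min(n_bytes, end - offset);	// clamp tail
		sizes.push_back(n_bytes);
	}

	return sizes;
}
```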
/********************************************************************//**
TODO: This can be made parallel trivially by chunking up the file
and creating a callback per thread. Main benefit will be to use multiple
CPUs for checksums and compressed tables. We have to do compressed tables
block by block right now. Secondly we need to decompress/compress and copy
too much of data. These are CPU intensive.

Iterate over all the pages in the tablespace.
@param iter - Tablespace iterator
@param block - block to use for IO
@param callback - Callback to inspect and update page contents
@retval DB_SUCCESS or error code */
static
dberr_t
fil_iterate(
/*========*/
	const fil_iterator_t&	iter,
	buf_block_t*		block,
	AbstractCallback&	callback)
{
	os_offset_t		offset;
	const ulint		size = callback.get_page_size().physical();
	ulint			n_bytes = iter.n_io_buffers * size;

	ut_ad(!srv_read_only_mode);

	/* TODO: For ROW_FORMAT=COMPRESSED tables we do a lot of useless
	copying for non-index pages. Unfortunately, it is
	required by buf_zip_decompress() */

	for (offset = iter.start; offset < iter.end; offset += n_bytes) {
		if (callback.is_interrupted()) {
			return DB_INTERRUPTED;
		}

		byte*	io_buffer = iter.io_buffer;
		block->frame = io_buffer;

		if (block->page.zip.data) {
			/* Zip IO is done in the compressed page buffer. */
			io_buffer = block->page.zip.data;
		}

		/* We have to read the exact number of bytes. Otherwise the
		InnoDB IO functions croak on failed reads. */

		n_bytes = ulint(ut_min(os_offset_t(n_bytes),
				       iter.end - offset));

		ut_ad(n_bytes > 0);
		ut_ad(!(n_bytes % size));

		const bool encrypted = iter.crypt_data != NULL
			&& iter.crypt_data->should_encrypt();
		/* Use additional crypt io buffer if tablespace is encrypted */
		byte* const readptr = encrypted
			? iter.crypt_io_buffer : io_buffer;
		byte* const writeptr = readptr;

		IORequest	read_request(IORequest::READ);
		read_request.disable_partial_io_warnings();

		dberr_t	err = os_file_read_no_error_handling(
			read_request, iter.file, readptr, offset, n_bytes, 0);

		if (err != DB_SUCCESS) {
			ib::error() << iter.filepath
				    << ": os_file_read() failed";
		}

		bool		updated = false;
		os_offset_t	page_off = offset;
		ulint		n_pages_read = n_bytes / size;
		block->page.id.set_page_no(ulint(page_off / size));

		for (ulint i = 0; i < n_pages_read;
		     block->page.id.set_page_no(block->page.id.page_no() + 1),
		     ++i, page_off += size, block->frame += size) {
			bool	decrypted = false;
			err = DB_SUCCESS;
			byte*	src = readptr + i * size;
			byte*	dst = io_buffer + i * size;
			bool	frame_changed = false;
			ulint	page_type = mach_read_from_2(src+FIL_PAGE_TYPE);
			const bool page_compressed
				= page_type
				== FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
				|| page_type == FIL_PAGE_PAGE_COMPRESSED;
			const ulint page_no = page_get_page_no(src);

			if (!page_no && page_off) {
				const ulint* b = reinterpret_cast<const ulint*>
					(src);
				const ulint* const e = b + size / sizeof *b;
				do {
					if (*b++) {
						goto page_corrupted;
					}
				} while (b != e);

				/* Proceed to the next page,
				because this one is all zero. */
				continue;
			}

			if (page_no != page_off / size) {
				goto page_corrupted;
			}

			if (encrypted) {
				decrypted = fil_space_decrypt(
					iter.crypt_data, dst,
					callback.get_page_size(), src, &err);

				if (err != DB_SUCCESS) {
					return err;
				}

				if (decrypted) {
					updated = true;
				} else {
					if (!page_compressed
					    && !block->page.zip.data) {
						block->frame = src;
						frame_changed = true;
					} else {
						ut_ad(dst != src);
						memcpy(dst, src, size);
					}
				}
			}

			/* If the original page is page_compressed, we need
			to decompress it before adjusting further. */
			if (page_compressed) {
				fil_decompress_page(NULL, dst, ulong(size),
						    NULL);
				updated = true;
			} else if (buf_page_is_corrupted(
					   false,
					   encrypted && !frame_changed
					   ? dst : src,
					   callback.get_page_size(), NULL)) {
page_corrupted:
				ib::warn() << callback.filename()
					   << ": Page " << (offset / size)
					   << " at offset " << offset
					   << " looks corrupted.";
				return DB_CORRUPTION;
			}

			if ((err = callback(page_off, block)) != DB_SUCCESS) {
				return err;
			} else if (!updated) {
				updated = buf_block_get_state(block)
					== BUF_BLOCK_FILE_PAGE;
			}

			/* If tablespace is encrypted we use additional
			temporary scratch area where pages are read
			for decrypting readptr == crypt_io_buffer != io_buffer.

			Destination for decryption is a buffer pool block
			block->frame == dst == io_buffer that is updated.
			Pages that did not require decryption even when
			tablespace is marked as encrypted are not copied
			instead block->frame is set to src == readptr.

			For encryption we again use temporary scratch area
			writeptr != io_buffer == dst
			that is then written to the tablespace

			(1) For normal tables io_buffer == dst == writeptr
			(2) For only page compressed tables
			io_buffer == dst == writeptr
			(3) For encrypted (and page compressed)
			readptr != io_buffer == dst != writeptr
			*/

			ut_ad(!encrypted && !page_compressed ?
			      src == dst && dst == writeptr + (i * size):1);
			ut_ad(page_compressed && !encrypted ?
  2720. src == dst && dst == writeptr + (i * size):1);
  2721. ut_ad(encrypted ?
  2722. src != dst && dst != writeptr + (i * size):1);
  2723. /* When tablespace is encrypted or compressed its
  2724. first page (i.e. page 0) is not encrypted or
  2725. compressed and there is no need to copy frame. */
  2726. if (encrypted && block->page.id.page_no() != 0) {
  2727. byte *local_frame = callback.get_frame(block);
  2728. ut_ad((writeptr + (i * size)) != local_frame);
  2729. memcpy((writeptr + (i * size)), local_frame, size);
  2730. }
  2731. if (frame_changed) {
  2732. block->frame = dst;
  2733. }
  2734. src = io_buffer + (i * size);
  2735. if (page_compressed) {
  2736. ulint len = 0;
  2737. fil_compress_page(
  2738. NULL,
  2739. src,
  2740. NULL,
  2741. size,
  2742. 0,/* FIXME: compression level */
  2743. 512,/* FIXME: use proper block size */
  2744. encrypted,
  2745. &len);
  2746. ut_ad(len <= size);
  2747. memset(src + len, 0, size - len);
  2748. updated = true;
  2749. }
  2750. /* Encrypt the page if encryption was used. */
  2751. if (encrypted && decrypted) {
  2752. byte *dest = writeptr + i * size;
  2753. byte* tmp = fil_encrypt_buf(
  2754. iter.crypt_data,
  2755. block->page.id.space(),
  2756. block->page.id.page_no(),
  2757. mach_read_from_8(src + FIL_PAGE_LSN),
  2758. src, callback.get_page_size(), dest);
  2759. if (tmp == src) {
  2760. /* TODO: remove unnecessary memcpy's */
  2761. ut_ad(dest != src);
  2762. memcpy(dest, src, size);
  2763. }
  2764. updated = true;
  2765. }
  2766. }
  2767. /* A page was updated in the set, write back to disk. */
  2768. if (updated) {
  2769. IORequest write_request(IORequest::WRITE);
  2770. err = os_file_write(write_request,
  2771. iter.filepath, iter.file,
  2772. writeptr, offset, n_bytes);
  2773. if (err != DB_SUCCESS) {
  2774. return err;
  2775. }
  2776. }
  2777. }
  2778. return DB_SUCCESS;
  2779. }
/********************************************************************//**
Iterate over all the pages in the tablespace.
@param table - the table definition in the server
@param n_io_buffers - number of blocks to read and write together
@param callback - functor that will do the page updates
@return DB_SUCCESS or error code */
static
dberr_t
fil_tablespace_iterate(
/*===================*/
	dict_table_t*		table,
	ulint			n_io_buffers,
	AbstractCallback&	callback)
{
	dberr_t		err;
	pfs_os_file_t	file;
	char*		filepath;

	ut_a(n_io_buffers > 0);
	ut_ad(!srv_read_only_mode);

	DBUG_EXECUTE_IF("ib_import_trigger_corruption_1",
			return(DB_CORRUPTION););

	/* Make sure the data_dir_path is set. */
	dict_get_and_save_data_dir_path(table, false);

	if (DICT_TF_HAS_DATA_DIR(table->flags)) {
		ut_a(table->data_dir_path);

		filepath = fil_make_filepath(
			table->data_dir_path, table->name.m_name, IBD, true);
	} else {
		filepath = fil_make_filepath(
			NULL, table->name.m_name, IBD, false);
	}

	if (!filepath) {
		return(DB_OUT_OF_MEMORY);
	} else {
		bool	success;

		file = os_file_create_simple_no_error_handling(
			innodb_data_file_key, filepath,
			OS_FILE_OPEN, OS_FILE_READ_WRITE, false, &success);

		if (!success) {
			/* The following call prints an error message */
			os_file_get_last_error(true);

			ib::error() << "Trying to import a tablespace,"
				" but could not open the tablespace file "
				    << filepath;
			ut_free(filepath);
			return DB_TABLESPACE_NOT_FOUND;
		} else {
			err = DB_SUCCESS;
		}
	}

	callback.set_file(filepath, file);

	os_offset_t	file_size = os_file_get_size(file);
	ut_a(file_size != (os_offset_t) -1);

	/* Allocate a page to read in the tablespace header, so that we
	can determine the page size and zip_size (if it is compressed).
	We allocate an extra page in case it is a compressed table. One
	page is to ensure alignment. */

	void*	page_ptr = ut_malloc_nokey(3 * srv_page_size);
	byte*	page = static_cast<byte*>(ut_align(page_ptr, srv_page_size));

	buf_block_t* block = reinterpret_cast<buf_block_t*>
		(ut_zalloc_nokey(sizeof *block));
	block->frame = page;
	block->page.id.copy_from(page_id_t(0, 0));
	block->page.io_fix = BUF_IO_NONE;
	block->page.buf_fix_count = 1;
	block->page.state = BUF_BLOCK_FILE_PAGE;

	/* Read the first page and determine the page and zip size. */

	IORequest	request(IORequest::READ);
	request.disable_partial_io_warnings();

	err = os_file_read_no_error_handling(request, file, page, 0,
					     srv_page_size, 0);

	if (err == DB_SUCCESS) {
		err = callback.init(file_size, block);
	}

	if (err == DB_SUCCESS) {
		block->page.id.copy_from(
			page_id_t(callback.get_space_id(), 0));
		block->page.size.copy_from(callback.get_page_size());
		if (block->page.size.is_compressed()) {
			page_zip_set_size(&block->page.zip,
					  callback.get_page_size().physical());
			/* ROW_FORMAT=COMPRESSED is not optimised for block IO
			for now. We do the IMPORT page by page. */
			n_io_buffers = 1;
		}

		fil_iterator_t	iter;

		/* read (optional) crypt data */
		iter.crypt_data = fil_space_read_crypt_data(
			callback.get_page_size(), page);

		/* If tablespace is encrypted, it needs extra buffers */
		if (iter.crypt_data && n_io_buffers > 1) {
			/* decrease io buffers so that memory
			consumption will not double */
			n_io_buffers /= 2;
		}

		iter.file = file;
		iter.start = 0;
		iter.end = file_size;
		iter.filepath = filepath;
		iter.file_size = file_size;
		iter.n_io_buffers = n_io_buffers;

		/* Add an extra page for compressed page scratch area. */
		void*	io_buffer = ut_malloc_nokey(
			(2 + iter.n_io_buffers) * srv_page_size);

		iter.io_buffer = static_cast<byte*>(
			ut_align(io_buffer, srv_page_size));

		void*	crypt_io_buffer = NULL;
		if (iter.crypt_data) {
			crypt_io_buffer = ut_malloc_nokey(
				(2 + iter.n_io_buffers) * srv_page_size);
			iter.crypt_io_buffer = static_cast<byte*>(
				ut_align(crypt_io_buffer, srv_page_size));
		}

		if (block->page.zip.ssize) {
			ut_ad(iter.n_io_buffers == 1);
			block->frame = iter.io_buffer;
			block->page.zip.data = block->frame + srv_page_size;
		}

		err = fil_iterate(iter, block, callback);

		if (iter.crypt_data) {
			fil_space_destroy_crypt_data(&iter.crypt_data);
		}

		ut_free(crypt_io_buffer);
		ut_free(io_buffer);
	}

	if (err == DB_SUCCESS) {
		ib::info() << "Sync to disk";

		if (!os_file_flush(file)) {
			ib::info() << "os_file_flush() failed!";
			err = DB_IO_ERROR;
		} else {
			ib::info() << "Sync to disk - done!";
		}
	}

	os_file_close(file);

	ut_free(page_ptr);
	ut_free(filepath);
	ut_free(block);

	return(err);
}
/*****************************************************************//**
Imports a tablespace. The space id in the .ibd file must match the space id
of the table in the data dictionary.
@return error code or DB_SUCCESS */
dberr_t
row_import_for_mysql(
/*=================*/
	dict_table_t*	table,		/*!< in/out: table */
	row_prebuilt_t*	prebuilt)	/*!< in: prebuilt struct in MySQL */
{
	dberr_t		err;
	trx_t*		trx;
	ib_uint64_t	autoinc = 0;
	char*		filepath = NULL;
	ulint		space_flags MY_ATTRIBUTE((unused));

	/* The caller assured that this is not read_only_mode and that no
	temporary tablespace is being imported. */
	ut_ad(!srv_read_only_mode);
	ut_ad(!dict_table_is_temporary(table));
	ut_ad(table->space_id);
	ut_ad(table->space_id < SRV_LOG_SPACE_FIRST_ID);
	ut_ad(prebuilt->trx);
	ut_ad(!table->is_readable());

	ibuf_delete_for_discarded_space(table->space_id);

	trx_start_if_not_started(prebuilt->trx, true);

	trx = trx_create();

	/* So that the table is not DROPped during recovery. */
	trx_set_dict_operation(trx, TRX_DICT_OP_INDEX);

	trx_start_if_not_started(trx, true);

	/* So that we can send error messages to the user. */
	trx->mysql_thd = prebuilt->trx->mysql_thd;

	/* Ensure that the table will be dropped by trx_rollback_active()
	in case of a crash. */
	trx->table_id = table->id;

	/* Assign an undo segment for the transaction, so that the
	transaction will be recovered after a crash. */

	/* TODO: Do not write any undo log for the IMPORT cleanup. */
	{
		mtr_t	mtr;
		mtr.start();
		trx_undo_assign(trx, &err, &mtr);
		mtr.commit();
	}

	DBUG_EXECUTE_IF("ib_import_undo_assign_failure",
			err = DB_TOO_MANY_CONCURRENT_TRXS;);

	if (err != DB_SUCCESS) {

		return(row_import_cleanup(prebuilt, trx, err));

	} else if (trx->rsegs.m_redo.undo == 0) {

		err = DB_TOO_MANY_CONCURRENT_TRXS;
		return(row_import_cleanup(prebuilt, trx, err));
	}

	prebuilt->trx->op_info = "read meta-data file";

	/* Prevent DDL operations while we are checking. */
	rw_lock_s_lock_func(dict_operation_lock, 0, __FILE__, __LINE__);

	row_import	cfg;

	memset(&cfg, 0x0, sizeof(cfg));

	err = row_import_read_cfg(table, trx->mysql_thd, cfg);

	/* Check if the table column definitions match the contents
	of the config file. */

	if (err == DB_SUCCESS) {

		/* We have a schema file, try and match it with our
		data dictionary. */
		err = cfg.match_schema(trx->mysql_thd);

		/* Update index->page and SYS_INDEXES.PAGE_NO to match the
		B-tree root page numbers in the tablespace. Use the index
		name from the .cfg file to find a match. */
		if (err == DB_SUCCESS) {
			cfg.set_root_by_name();
			autoinc = cfg.m_autoinc;
		}

		rw_lock_s_unlock_gen(dict_operation_lock, 0);

		DBUG_EXECUTE_IF("ib_import_set_index_root_failure",
				err = DB_TOO_MANY_CONCURRENT_TRXS;);

	} else if (cfg.m_missing) {

		rw_lock_s_unlock_gen(dict_operation_lock, 0);

		/* We don't have a schema file, we will have to discover
		the index root pages from the .ibd file and skip the schema
		matching step. */
		ut_a(err == DB_FAIL);

		cfg.m_page_size.copy_from(univ_page_size);

		FetchIndexRootPages	fetchIndexRootPages(table, trx);

		err = fil_tablespace_iterate(
			table, IO_BUFFER_SIZE(cfg.m_page_size.physical()),
			fetchIndexRootPages);

		if (err == DB_SUCCESS) {

			err = fetchIndexRootPages.build_row_import(&cfg);

			/* Update index->page and SYS_INDEXES.PAGE_NO
			to match the B-tree root page numbers in the
			tablespace. */
			if (err == DB_SUCCESS) {
				err = cfg.set_root_by_heuristic();
			}
		}

		space_flags = fetchIndexRootPages.get_space_flags();

	} else {
		rw_lock_s_unlock_gen(dict_operation_lock, 0);
	}

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	prebuilt->trx->op_info = "importing tablespace";

	ib::info() << "Phase I - Update all pages";

	/* Iterate over all the pages and do the sanity checking and
	the conversion required to import the tablespace. */

	PageConverter	converter(&cfg, table->space_id, trx);

	/* Set the IO buffer size in pages. */

	err = fil_tablespace_iterate(
		table, IO_BUFFER_SIZE(cfg.m_page_size.physical()), converter);

	DBUG_EXECUTE_IF("ib_import_reset_space_and_lsn_failure",
			err = DB_TOO_MANY_CONCURRENT_TRXS;);

	if (err != DB_SUCCESS) {
		char	table_name[MAX_FULL_NAME_LEN + 1];

		innobase_format_name(
			table_name, sizeof(table_name),
			table->name.m_name);

		if (err != DB_DECRYPTION_FAILED) {

			ib_errf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
				ER_INTERNAL_ERROR,
				"Cannot reset LSNs in table %s : %s",
				table_name, ut_strerr(err));
		}

		return(row_import_cleanup(prebuilt, trx, err));
	}

	row_mysql_lock_data_dictionary(trx);

	/* If the table is stored in a remote tablespace, we need to
	determine that filepath from the link file and system tables.
	Find the space ID in SYS_TABLES since this is an ALTER TABLE. */
	dict_get_and_save_data_dir_path(table, true);

	if (DICT_TF_HAS_DATA_DIR(table->flags)) {
		ut_a(table->data_dir_path);

		filepath = fil_make_filepath(
			table->data_dir_path, table->name.m_name, IBD, true);
	} else {
		filepath = fil_make_filepath(
			NULL, table->name.m_name, IBD, false);
	}

	DBUG_EXECUTE_IF(
		"ib_import_OOM_15",
		ut_free(filepath);
		filepath = NULL;
	);

	if (filepath == NULL) {
		row_mysql_unlock_data_dictionary(trx);
		return(row_import_cleanup(prebuilt, trx, DB_OUT_OF_MEMORY));
	}

	/* Open the tablespace so that we can access via the buffer pool.
	We set the 2nd param (fix_dict = true) here because we already
	have an x-lock on dict_operation_lock and dict_sys->mutex.
	The tablespace is initially opened as a temporary one, because
	we will not be writing any redo log for it before we have invoked
	fil_space_t::set_imported() to declare it a persistent tablespace. */

	ulint	fsp_flags = dict_tf_to_fsp_flags(table->flags);

	table->space = fil_ibd_open(
		true, true, FIL_TYPE_IMPORT, table->space_id,
		fsp_flags, table->name, filepath, &err);

	ut_ad((table->space == NULL) == (err != DB_SUCCESS));
	DBUG_EXECUTE_IF("ib_import_open_tablespace_failure",
			err = DB_TABLESPACE_NOT_FOUND; table->space = NULL;);

	if (!table->space) {
		row_mysql_unlock_data_dictionary(trx);

		ib_senderrf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
			    ER_GET_ERRMSG,
			    err, ut_strerr(err), filepath);

		ut_free(filepath);

		return(row_import_cleanup(prebuilt, trx, err));
	}

	row_mysql_unlock_data_dictionary(trx);

	ut_free(filepath);

	err = ibuf_check_bitmap_on_import(trx, table->space);

	DBUG_EXECUTE_IF("ib_import_check_bitmap_failure", err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_cleanup(prebuilt, trx, err));
	}

	/* The first index must always be the clustered index. */

	dict_index_t*	index = dict_table_get_first_index(table);

	if (!dict_index_is_clust(index)) {
		return(row_import_error(prebuilt, trx, DB_CORRUPTION));
	}

	/* Update the Btree segment headers for index node and
	leaf nodes in the root page. Set the new space id. */

	err = btr_root_adjust_on_import(index);

	DBUG_EXECUTE_IF("ib_import_cluster_root_adjust_failure",
			err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	} else if (cfg.requires_purge(index->name)) {

		/* Purge any delete-marked records that couldn't be
		purged during the page conversion phase from the
		clustered index. */

		IndexPurge	purge(trx, index);

		trx->op_info = "cluster: purging delete marked records";

		err = purge.garbage_collect();

		trx->op_info = "";
	}

	DBUG_EXECUTE_IF("ib_import_cluster_failure", err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	/* For secondary indexes, purge any records that couldn't be purged
	during the page conversion phase. */

	err = row_import_adjust_root_pages_of_secondary_indexes(
		prebuilt, trx, table, cfg);

	DBUG_EXECUTE_IF("ib_import_sec_root_adjust_failure",
			err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	/* Ensure that the next available DB_ROW_ID is not smaller than
	any DB_ROW_ID stored in the table. */

	if (prebuilt->clust_index_was_generated) {

		err = row_import_set_sys_max_row_id(prebuilt, table);

		if (err != DB_SUCCESS) {
			return(row_import_error(prebuilt, trx, err));
		}
	}

	ib::info() << "Phase III - Flush changes to disk";

	/* Ensure that all pages dirtied during the IMPORT make it to disk.
	The only dirty pages generated should be from the pessimistic purge
	of delete marked records that couldn't be purged in Phase I. */

	{
		FlushObserver observer(prebuilt->table->space, trx, NULL);
		buf_LRU_flush_or_remove_pages(prebuilt->table->space_id,
					      &observer);

		if (observer.is_interrupted()) {
			ib::info() << "Phase III - Flush interrupted";
			return(row_import_error(prebuilt, trx,
						DB_INTERRUPTED));
		}
	}

	ib::info() << "Phase IV - Flush complete";
	prebuilt->table->space->set_imported();

	/* The dictionary latches will be released in row_import_cleanup()
	after the transaction commit, for both success and error. */

	row_mysql_lock_data_dictionary(trx);

	/* Update the root pages of the table's indexes. */
	err = row_import_update_index_root(trx, table, false, true);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	err = row_import_update_discarded_flag(trx, table->id, false);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	table->file_unreadable = false;
	table->flags2 &= ~DICT_TF2_DISCARDED;

	/* Set autoinc value read from .cfg file, if one was specified.
	Otherwise, keep the PAGE_ROOT_AUTO_INC as is. */
	if (autoinc) {
		ib::info() << table->name << " autoinc value set to "
			<< autoinc;

		table->autoinc = autoinc--;
		btr_write_autoinc(dict_table_get_first_index(table), autoinc);
	}

	return(row_import_cleanup(prebuilt, trx, err));
}