MDEV-22456 Dropping the adaptive hash index may cause DDL to lock up InnoDB

If the InnoDB buffer pool contains many pages for a table or index that is being dropped or rebuilt, and if many such pages are pointed to by the adaptive hash index, dropping the adaptive hash index may consume a lot of time. This time-consuming operation is executed while the InnoDB data dictionary cache dict_sys is exclusively locked.

It is not actually necessary to drop all adaptive hash index entries at the time a table or index is being dropped or rebuilt. We can let the LRU replacement policy of the buffer pool take care of this gradually. For this to work, we must detach the dict_table_t and dict_index_t objects from the main dict_sys cache; once the last adaptive hash index entry for the detached table is removed (when the garbage page is evicted from the buffer pool), we can free the dict_table_t and dict_index_t objects.

Related to this, in MDEV-16283 we made ALTER TABLE...DISCARD TABLESPACE skip both the buffer pool eviction and the drop of the adaptive hash index, shifting the burden to ALTER TABLE...IMPORT TABLESPACE and DROP TABLE. We can now remove the eviction from DROP TABLE. We must retain the eviction in the ALTER TABLE...IMPORT TABLESPACE code path, so that if the discarded table is re-imported with the same tablespace identifier, the fresh data from the imported tablespace will replace any stale pages in the buffer pool.

rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can no longer be interrupted inside InnoDB.
fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(), fseg_free_page_low(), fseg_free_extent(): Remove the parameter that specified whether the adaptive hash index should be dropped.
btr_search_lazy_free(): Lazily free an index when the last reference to it is dropped from the adaptive hash index.
buf_pool_clear_hash_index(): Declare static, and move to the same compilation unit as the bulk of the adaptive hash index code.
dict_index_t::clone(), dict_index_t::clone_if_needed(): Clone an index that is being rebuilt while adaptive hash index entries exist. The original index will be inserted into dict_table_t::freed_indexes, and dict_index_t::set_freed() will be called.
dict_index_t::set_freed(), dict_index_t::freed(): Note that, or check whether, the index has been freed. We use the impossible page number 1 to denote this condition.
dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().
dict_index_t::detach_columns(): Move the assignment n_fields=0 to ha_innobase_inplace_ctx::clear_added_indexes(). We must have access to the columns when freeing the adaptive hash index. Note: dict_table_t::v_cols[] will remain valid. If virtual columns are dropped or added, the table definition will be reloaded in ha_innobase::commit_inplace_alter_table().
buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.

We also reduce the number of btr_get_search_latch() calls and enclose more code inside #ifdef BTR_CUR_HASH_ADAPT, to benefit cmake -DWITH_INNODB_AHI=OFF.
6 years ago
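A minimal sketch of the lazy-freeing idea above, assuming a simplified index_t in place of dict_index_t (names and locking are illustrative; the real code serializes these transitions under the adaptive hash index latch):

```cpp
#include <atomic>
#include <cstdint>

struct index_t {
  std::atomic<uint32_t> n_ahi_pages{0}; // buffer pages with AHI entries
  uint32_t page = 0;                    // root page number; 1 = detached

  void set_freed() { page = 1; }        // impossible page number marks it
  bool freed() const { return page == 1; }
};

// Called when the AHI entries of one evicted buffer pool page are removed,
// e.g. by the LRU replacement policy.
void ahi_detach_page(index_t* index) {
  if (index->n_ahi_pages.fetch_sub(1) == 1 && index->freed())
    delete index;                       // last reference: lazy free
}

// Called by DDL instead of eagerly scanning the whole buffer pool.
void btr_search_lazy_free_sketch(index_t* index) {
  index->set_freed();
  if (index->n_ahi_pages.load() == 0)
    delete index;                       // no AHI entries: free at once
}
```

The point of the design is that DDL only flips a flag and the expensive cleanup is amortized over ordinary page evictions.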
MDEV-17794 Do not assign persistent ID for temporary tables

InnoDB in MySQL 5.7 introduced two new parameters to the function dict_hdr_get_new_id(), to allow redo logging to be disabled when assigning identifiers to temporary tables or during the backup-unfriendly TRUNCATE TABLE that was replaced in MariaDB by MDEV-13564. Now that MariaDB 10.4.0 removed the crash recovery code for the backup-unfriendly TRUNCATE, we can revert dict_hdr_get_new_id() to be used only for persistent data structures.

dict_table_assign_new_id(): Remove. This was a simple 2-line function that was called from a few places.
dict_table_open_on_id_low(): Declare in the only file where it is called.
dict_sys_t::temp_id_hash: A separate lookup table for temporary tables. Table names will be in the common dict_sys_t::table_hash.
dict_sys_t::get_temporary_table_id(): Assign a temporary table ID.
dict_sys_t::get_table(): Look up a persistent table.
dict_sys_t::get_temporary_table(): Look up a temporary table.
dict_sys_t::temp_table_id: The sequence of temporary table identifiers. Starts from DICT_HDR_FIRST_ID, so that we can continue to simply compare dict_table_t::id to a few constants for the persistent hard-coded data dictionary tables.
undo_node_t::state: Distinguish temporary and persistent tables.
lock_check_dict_lock(), lock_get_table_id(): Assert that there cannot be locks on temporary tables.
row_rec_to_index_entry_impl(): Assert that there cannot be metadata records on temporary tables.
row_undo_ins_parse_undo_rec(): Distinguish temporary and persistent tables. Move some assertions from the only caller. Return whether the table was found.
row_undo_ins(): Add some assertions.
row_undo_mod_clust(), row_undo_mod(): Do not assign node->state; let row_undo() do that.
row_undo_mod_parse_undo_rec(): Distinguish temporary and persistent tables. Move some assertions from the only caller. Return whether the table was found.
row_undo_try_truncate(): Renamed and simplified from trx_roll_try_truncate().
row_undo_rec_get(): Replaces trx_roll_pop_top_rec_of_trx() and trx_roll_pop_top_rec(). Fetch an undo log record, and assign undo->state accordingly.
trx_undo_truncate_end(): Acquire the rseg->mutex only for the minimum required duration, and release it between mini-transactions.
7 years ago
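A minimal sketch of the in-memory temporary-table ID sequence described above (dict_sys_sketch is an illustrative stand-in, not the real dict_sys_t interface):

```cpp
#include <atomic>
#include <cstdint>
#include <iostream>

typedef uint64_t table_id_t;
static const table_id_t DICT_HDR_FIRST_ID = 10;

struct dict_sys_sketch {
  std::atomic<table_id_t> temp_table_id{DICT_HDR_FIRST_ID};

  // No redo logging needed: the ID only has to be unique until restart,
  // because temporary tables do not survive a restart anyway.
  table_id_t get_temporary_table_id() {
    return temp_table_id.fetch_add(1, std::memory_order_relaxed);
  }
};

int main() {
  dict_sys_sketch dict_sys;
  std::cout << dict_sys.get_temporary_table_id() << '\n'; // 10
  std::cout << dict_sys.get_temporary_table_id() << '\n'; // 11
}
```

Starting the sequence at DICT_HDR_FIRST_ID preserves the invariant that IDs below it belong to the hard-coded dictionary tables, so existing comparisons keep working.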
MDEV-14407 Assertion failure during rollback

Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS, which is what was chosen in MDEV-12288, corresponding to reset_trx_id.

No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432, which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED.

reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h.
btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id.
btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page.
ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage.
row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id.
row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields().
row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.)
trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0.
trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it.
dict_index_t::is_primary(): Check if the index is the primary key.
PageConverter::adjust_cluster_record(): Fix MDEV-15249 (Crash in MVCC read after IMPORT TABLESPACE) by resetting the system fields to reset_trx_id, instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
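The canonical "reset" system fields can be illustrated directly, since the commit message states the values (DB_TRX_ID=0, and DB_ROLL_PTR with only insert-flag bit 55 set). The byte layout below follows InnoDB's big-endian 6-byte DB_TRX_ID followed by 7-byte DB_ROLL_PTR:

```cpp
#include <cstdint>
#include <cstdio>

typedef uint64_t roll_ptr_t;
static const unsigned ROLL_PTR_INSERT_FLAG_POS = 55;

// "No transaction, nothing to roll back": a value that is recognizably
// canonical, unlike the invalid DB_ROLL_PTR=0 that triggered the assert.
static const unsigned char reset_trx_id[6 + 7] = {
  0, 0, 0, 0, 0, 0,        /* DB_TRX_ID = 0 */
  0x80, 0, 0, 0, 0, 0, 0   /* DB_ROLL_PTR = 1 << 55 (insert flag set) */
};

int main() {
  roll_ptr_t safe = roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS;
  std::printf("%llx\n", (unsigned long long) safe); // 80000000000000
  (void) reset_trx_id;
}
```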
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN

ALTER TABLE...IMPORT TABLESPACE did not properly handle instant ALTER metadata. This patch makes IMPORT read, parse and apply the instant ALTER metadata at the very beginning of the operation, so cases where the source table has such metadata and the destination table does not now work fine. DISCARD already removes the instant metadata, so importing a normal table into an instant table worked fine even before this patch.

decrypt_decompress(): Decrypt and decompress a page if needed.
handle_instant_metadata(): This should be the first thing to read from the source table. Basically, it applies the instant metadata to the destination dict_table_t object. It is also the first code to read the FSP flags, so all checks of them were moved into this function.
PageConverter::update_index_page(): No longer reads instant metadata; this logic was moved into handle_instant_metadata().
row_import::match_flags(): The first part of row_import::match_schema(), split into a separate function so that handle_instant_metadata() can use it.
fil_space_t::is_full_crc32_compressed(): Added as a convenience function.
ha_innobase::discard_or_import_tablespace(): Do not reload the table definition to read instant metadata, because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0.
ANONYMOUS_VAR: A handy thing to use along with make_scope_exit().

full_crc32_import.test shows different results, because dict_table_close() and dict_table_open_on_id() no longer happen; thus, SHOW CREATE TABLE shows a slightly older table definition.
4 years ago
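The ANONYMOUS_VAR + make_scope_exit() combination mentioned above is a generic RAII cleanup helper. A minimal self-contained sketch, assuming C++17 (the real definitions live in the MariaDB source tree):

```cpp
#include <cstdio>
#include <utility>

template <typename F> struct scope_exit {
  F f;
  ~scope_exit() { f(); }  // runs the cleanup when the scope is left
};
template <typename F> scope_exit<F> make_scope_exit(F f) {
  // C++17 guaranteed copy elision: constructed directly in the caller.
  return scope_exit<F>{std::move(f)};
}

#define SCOPE_CAT_(a, b) a##b
#define SCOPE_CAT(a, b) SCOPE_CAT_(a, b)
// Generates a unique variable name, so the guard needs no explicit name.
#define ANONYMOUS_VAR SCOPE_CAT(anonymous_var_, __LINE__)

int main() {
  std::FILE* f = std::fopen("/dev/null", "r");
  auto ANONYMOUS_VAR = make_scope_exit([&] { if (f) std::fclose(f); });
  // ... early returns or error paths can no longer leak the handle ...
}
```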
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors. Remove duplicated code. Use the "safe" instead of the "fast" variants of the decompression routines.
fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer was always srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.
buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() use atomic memory access.
buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.
buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().
buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.
buf_page_io_complete(), buf_dblwr_process(): Check more failures.
fil_space_encrypt(): Simplify the debug checks.
fil_space_t::printed_compression_failure: Remove.
fil_get_compression_alg_name(): Remove.
fil_iterate(): Allocate a buffer for compression and decompression only once during IMPORT TABLESPACE, instead of allocating and freeing it for every page that uses compression. Also, validate the page checksum before decryption, and reduce the scope of some variables.
fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).
AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
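A sketch of the error-reporting shape described for fil_page_decompress(): return 0 on corruption instead of printing, and let the caller decide. This uses only zlib for brevity; the real function dispatches on the compression algorithm and takes further parameters:

```cpp
#include <cstring>
#include <zlib.h>

// Returns the decompressed payload size, or 0 if the page is corrupted.
unsigned long page_decompress_sketch(unsigned char* tmp_buf,
                                     unsigned char* page,
                                     unsigned long physical_size,
                                     unsigned long payload_size) {
  uLongf dest_len = physical_size;
  // uncompress() is a "safe" zlib entry point: it fails cleanly on
  // malformed input instead of overrunning the output buffer.
  if (uncompress(tmp_buf, &dest_len, page, payload_size) != Z_OK)
    return 0;                 // the caller treats the page as corrupted
  std::memcpy(page, tmp_buf, dest_len);
  return dest_len;
}
```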
MDEV-33277 In-place upgrade causes invalid AUTO_INCREMENT values
MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only

As noted in commit d0ef1aaf618452efea5a2e74f6d86ed0a586b63e, MySQL as well as older versions of MariaDB server would, during ALTER TABLE ... IMPORT TABLESPACE, write bogus values to the PAGE_MAX_TRX_ID field in pages of the clustered index, instead of letting that field remain 0. In commit 8777458a6eb73ac1d7d864ebac390ea7039e21c1 this field was repurposed as PAGE_ROOT_AUTO_INC in the clustered index root page.

To avoid trouble when upgrading from MySQL or older versions of MariaDB, we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC when opening a table for the first time from the SQL layer.

btr_read_autoinc_with_fallback(): Add the parameters mysql_version and max, to indicate the TABLE_SHARE::mysql_version of the .frm file and the maximum value allowed for the type of the AUTO_INCREMENT column. In case the table was originally created in MySQL or an older version of MariaDB, also read the maximum value of the AUTO_INCREMENT column from the table, and reset the PAGE_ROOT_AUTO_INC if it is above the limit.
dict_table_t::get_index(const dict_col_t &) const: Find an index that starts with the specified column.
ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB needs upgrading but is in read-only mode. In this way, the call to update_frm_version() will be skipped.
row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by Debarun Banerjee.

The changes outside InnoDB were developed by Michael 'Monty' Widenius:
Added a print_check_msg() service for easy reporting of check/repair messages in ENGINE=Aria and ENGINE=InnoDB.
Fixed CHECK TABLE so that it does not update the .frm file under --read-only.
Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to store state from handler::check_for_upgrade().

Reviewed by: Debarun Banerjee
2 years ago
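A hedged sketch of the fallback logic described for btr_read_autoinc_with_fallback(); the function shape and the version threshold are illustrative assumptions, not the actual signature:

```cpp
#include <cstdint>

uint64_t autoinc_with_fallback(uint64_t page_root_autoinc, // from root page
                               uint64_t col_max_value,     // type's maximum
                               uint64_t max_value_in_table,// scanned fallback
                               uint32_t mysql_version)     // from the .frm
{
  // Hypothetical cutoff: .frm written before the MariaDB release that
  // introduced PAGE_ROOT_AUTO_INC may carry a bogus PAGE_MAX_TRX_ID here.
  const bool maybe_bogus = mysql_version < 100204;
  if (maybe_bogus && page_root_autoinc > col_max_value)
    return max_value_in_table + 1; // reset from the actual column contents
  return page_root_autoinc;
}
```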
MDEV-15053 Reduce buf_pool_t::mutex contention

User-visible changes: the INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE.

We remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, relying only on the buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() will rely solely on buf_pool.mutex, with no buf_pool.page_hash latch at all.

The buf_pool.mutex is rather contended in I/O-heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858, which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058).

UNIV_BUF_DEBUG: Remove. This option for enabling some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug.

buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would appear X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 for how this glitch was finally caught in a debug environment.

We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability.

buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown.
buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists.
buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash.
buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm().
buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES.
buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter.
buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out().
buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed.
buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables.
buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments.
buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.
buf_LRU_free_page(): Clarify the function comment.
buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash.
buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch().
buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t.
ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops.
fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.
buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here.
buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex.
IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else.
buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call, and that function already takes IORequest as a parameter, which is a better home for the rarely changing field.
buf_block_t::init(): Replaces buf_page_init().
buf_page_t::init(): Replaces buf_page_init_low().
buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list.
buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We also check the io_fix() before acquiring the hash_lock, in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block.
buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables.
buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion.
buf_page_should_punch_hole(): Remove.
buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size).
buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size).
fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired.
fil_node_t::complete_io(): Renamed from fil_node_complete_io().
fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close(), because fil_system.n_open has already been decremented in fil_space_t::detach().
BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.
BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification().
BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.
buf_pool_t::watch[]: Remove pointer indirection.
buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero.
buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t&, so that we can report the correct file name.
buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.
buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().
buf_dblwr_t::in_use: Remove.
buf_dblwr_t::buf_block_array: Add IORequest::flush_t.
buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().
buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.
buf_flush_freed_page(): Simplify the code.
recv_sys_t::flush_lru: Renamed from flush_type, and changed to bool.
fil_read(), fil_write(): Replaced with direct use of fil_io().
fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.
fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.
fil_report_invalid_page_access(): Clean up the parameters.
fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io(), and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().
fil_aio_callback(): Rewrite to replace buf_page_io_complete().
fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.
fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.
fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.
row_import_discard_changes(): Merged with its only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().
logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().
innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().
fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.
thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
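A sketch of the lookup pattern that this change enables: a sharded page hash whose readers take only the S-latch of one shard, never a global buffer pool mutex. The structure below is illustrative, not InnoDB's hash table:

```cpp
#include <cstdint>
#include <shared_mutex>
#include <unordered_map>

struct page_id { uint32_t space, page_no; };
inline uint64_t fold(page_id id) {
  return (uint64_t(id.space) << 32) | id.page_no;
}

struct page_hash_sketch {
  static const unsigned N_SHARDS = 16;
  struct shard {
    mutable std::shared_mutex latch;          // the page_hash S/X latch
    std::unordered_map<uint64_t, void*> map;  // fold -> buf_page_t*
  } shards[N_SHARDS];

  void* lookup(page_id id) const {
    const shard& s = shards[fold(id) % N_SHARDS];
    std::shared_lock<std::shared_mutex> g(s.latch); // S-latch only
    auto it = s.map.find(fold(id));
    return it == s.map.end() ? nullptr : it->second;
  }
};
```

Writers (page eviction and insertion) would take the shard's X-latch plus the global mutex, matching the covering rules stated in the commit message.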
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE and a missing or corrupted file.

There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs.

There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() remain, and gradually they should be removed.
mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.
fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.
fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.
dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.
dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.
fil_ibd_open(): Return the tablespace.
fil_space_t::set_imported(): Replaces fil_space_set_imported().
truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.
row_truncate_prepare(): Merge into its only caller.
row_drop_table_from_cache(): Assert that the table is persistent.
dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.
row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
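A minimal sketch of the new shape of the table object (simplified stand-in types; the member names follow the commit message):

```cpp
#include <cstdint>

struct fil_space_t_sketch { uint32_t id; /* ... file list, flags ... */ };

struct dict_table_t_sketch {
  fil_space_t_sketch* space;  // nullptr after DISCARD TABLESPACE, or if the
                              // .ibd file is missing or corrupted
  uint32_t space_id;          // always valid, even when space == nullptr

  // No fil_system lookup is needed on the hot path any more:
  bool is_readable() const { return space != nullptr; }
};
```

The design relies on the invariant stated above: fil_space_t objects live from startup to shutdown unless DDL removes them, so a raw pointer cannot dangle outside DDL.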
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K

The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix brings the format into line with the other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details.

This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to an earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility for how to reset the FSP_SPACE_FLAGS in the files.

NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for a detailed analysis.

Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20.

Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL.

FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR.

Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance.

When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros.

dict_tf_verify_flags(): Remove. This basically only duplicated the logic of dict_tf_to_fsp_flags(), used in a debug assertion.
fil_space_t::mark: Remove. This flag was not used for anything.
fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags.
fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts.
fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer.
fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed().
FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in FSP_SPACE_FLAGS but only in memory.
fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem() and innobase_start_or_create_for_mysql(), except when --innodb-read-only is active.
fsp_flags_is_valid(ulint): Reimplement from scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files.
fsp_flags_convert_from_101(ulint): Convert flags from the buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format.
fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101().
dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction.

IMPORT TABLESPACE fixes:
AbstractCallback::init(): Convert the flags.
FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy.
PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
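A sketch of the memory-only flag split described above: FSP_FLAGS_MEM_MASK bits are stripped before the flags are persisted in page 0. Only the PAGE_COMPRESSION position (bit 16) is taken from the commit message; the memory-only bit positions below are illustrative assumptions:

```cpp
#include <cstdint>
#include <cstdio>

typedef uint32_t flags_t;
static const flags_t FSP_FLAGS_PAGE_COMPRESSION = 1U << 16;
// Hypothetical positions for the memory-only bits:
static const flags_t MEM_DATA_DIR        = 1U << 25;
static const flags_t MEM_COMPRESSION_LVL = 0xfU << 26;
static const flags_t FSP_FLAGS_MEM_MASK  = MEM_DATA_DIR | MEM_COMPRESSION_LVL;

// What would be written to FSP_SPACE_FLAGS: the persistent subset only.
inline flags_t fsp_flags_persistent(flags_t mem_flags) {
  return mem_flags & ~FSP_FLAGS_MEM_MASK;
}

int main() {
  flags_t f = FSP_FLAGS_PAGE_COMPRESSION | MEM_DATA_DIR;
  std::printf("%#x\n", fsp_flags_persistent(f)); // 0x10000: compression kept
}
```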
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in one way: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) indicating that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm and the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents as written to the file.

We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
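A sketch of the full_crc32 idea: one CRC-32C over the entire page except the last 4 bytes, where the checksum itself is stored. The bitwise CRC-32C below is for illustration only (real servers use hardware-accelerated implementations), and the on-page layout is simplified:

```cpp
#include <cstddef>
#include <cstdint>

// Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78).
static uint32_t crc32c(const unsigned char* p, size_t len) {
  uint32_t crc = 0xFFFFFFFFU;
  while (len--) {
    crc ^= *p++;
    for (int k = 0; k < 8; k++)
      crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1U)));
  }
  return ~crc;
}

// Compute the checksum over everything except the trailing 4 bytes,
// then store it big-endian in those trailing bytes.
void full_crc32_write_sketch(unsigned char* page, size_t page_size) {
  const uint32_t crc = crc32c(page, page_size - 4);
  page[page_size - 4] = (unsigned char)(crc >> 24);
  page[page_size - 3] = (unsigned char)(crc >> 16);
  page[page_size - 2] = (unsigned char)(crc >> 8);
  page[page_size - 1] = (unsigned char)(crc);
}

bool full_crc32_validate_sketch(const unsigned char* page, size_t page_size) {
  const uint32_t stored = ((uint32_t)page[page_size - 4] << 24)
                        | ((uint32_t)page[page_size - 3] << 16)
                        | ((uint32_t)page[page_size - 2] << 8)
                        |  (uint32_t)page[page_size - 1];
  return stored == crc32c(page, page_size - 4);
}
```

Covering every byte except the stored checksum is the point of the change: the older formats skipped ranges of the page, which weakened corruption detection.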
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix is bringing the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details. This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility how to reset the FSP_SPACE_FLAGS in the files. NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for detailed analysis. Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20. Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL. FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these will include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR. Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance. When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros. dict_tf_verify_flags(): Remove. This is basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion. fil_space_t::mark: Remove. This flag was not used for anything. fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags. fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts. fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer. fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed(). FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory. fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), innobase_start_or_create_for_mysql() except if --innodb-read-only is active. fsp_flags_is_valid(ulint): Reimplement from the scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files. 
fsp_flags_convert_from_101(ulint): Convert flags from buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format. fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101(). dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction. IMPORT TABLESPACE fixes: AbstractCallback::init(): Convert the flags. FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy. PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-11623 MariaDB 10.1 fails to start datadir created with MariaDB 10.0/MySQL 5.6 using innodb-page-size!=16K The storage format of FSP_SPACE_FLAGS was accidentally broken already in MariaDB 10.1.0. This fix is bringing the format in line with other MySQL and MariaDB release series. Please refer to the comments that were added to fsp0fsp.h for details. This is an INCOMPATIBLE CHANGE that affects users of page_compression and non-default innodb_page_size. Upgrading to this release will correct the flags in the data files. If you want to downgrade to earlier MariaDB 10.1.x, please refer to the test innodb.101_compatibility how to reset the FSP_SPACE_FLAGS in the files. NOTE: MariaDB 10.1.0 to 10.1.20 can misinterpret uncompressed data files with innodb_page_size=4k or 64k as compressed innodb_page_size=16k files, and then probably fail when trying to access the pages. See the comments in the function fsp_flags_convert_from_101() for detailed analysis. Move PAGE_COMPRESSION to FSP_SPACE_FLAGS bit position 16. In this way, compressed innodb_page_size=16k tablespaces will not be mistaken for uncompressed ones by MariaDB 10.1.0 to 10.1.20. Derive PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR from the dict_table_t::flags when the table is available, in fil_space_for_table_exists_in_mem() or fil_open_single_table_tablespace(). During crash recovery, fil_load_single_table_tablespace() will use innodb_compression_level for the PAGE_COMPRESSION_LEVEL. FSP_FLAGS_MEM_MASK: A bitmap of the memory-only fil_space_t::flags that are not to be written to FSP_SPACE_FLAGS. Currently, these will include PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES and DATA_DIR. Introduce the macro FSP_FLAGS_PAGE_SSIZE(). We only support one innodb_page_size for the whole instance. When creating a dummy tablespace for the redo log, use fil_space_t::flags=0. The flags are never written to the redo log files. Remove many FSP_FLAGS_SET_ macros. dict_tf_verify_flags(): Remove. This is basically only duplicating the logic of dict_tf_to_fsp_flags(), used in a debug assertion. fil_space_t::mark: Remove. This flag was not used for anything. fil_space_for_table_exists_in_mem(): Remove the unnecessary parameter mark_space, and add a parameter for table flags. Check that fil_space_t::flags match the table flags, and adjust the (memory-only) flags based on the table flags. fil_node_open_file(): Remove some redundant or unreachable conditions, do not use stderr for output, and avoid unnecessary server aborts. fil_user_tablespace_restore_page(): Convert the flags, so that the correct page_size will be used when restoring a page from the doublewrite buffer. fil_space_get_page_compressed(), fsp_flags_is_page_compressed(): Remove. It suffices to have fil_space_is_page_compressed(). FSP_FLAGS_WIDTH_DATA_DIR, FSP_FLAGS_WIDTH_PAGE_COMPRESSION_LEVEL, FSP_FLAGS_WIDTH_ATOMIC_WRITES: Remove, because these flags do not exist in the FSP_SPACE_FLAGS but only in memory. fsp_flags_try_adjust(): New function, to adjust the FSP_SPACE_FLAGS in page 0. Called by fil_open_single_table_tablespace(), fil_space_for_table_exists_in_mem(), innobase_start_or_create_for_mysql() except if --innodb-read-only is active. fsp_flags_is_valid(ulint): Reimplement from the scratch, with accurate comments. Do not display any details of detected inconsistencies, because the output could be confusing when dealing with MariaDB 10.1.x data files. 
fsp_flags_convert_from_101(ulint): Convert flags from buggy MariaDB 10.1.x format, or return ULINT_UNDEFINED if the flags cannot be in MariaDB 10.1.x format. fsp_flags_match(): Check the flags when probing files. Implemented based on fsp_flags_is_valid() and fsp_flags_convert_from_101(). dict_check_tablespaces_and_store_max_id(): Do not access the page after committing the mini-transaction. IMPORT TABLESPACE fixes: AbstractCallback::init(): Convert the flags. FetchIndexRootPages::operator(): Check that the tablespace flags match the table flags. Do not attempt to convert tablespace flags to table flags, because the conversion would necessarily be lossy. PageConverter::update_header(): Write back the correct flags. This takes care of the flags in IMPORT TABLESPACE.
9 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t* InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup to shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t. dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file. There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs. There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash. FilSpace: Remove. Only few calls to fil_space_acquire() will remain, and gradually they should be removed. mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain. fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id. fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true. dict_mem_table_create(): Take fil_space_t* instead of space_id as parameter. dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'. dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name. fil_ibd_open(): Return the tablespace. fil_space_t::set_imported(): Replaces fil_space_set_imported(). truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters. row_truncate_prepare(): Merge to its only caller. row_drop_table_from_cache(): Assert that the table is persistent. dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded. row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of the decompression routines.

fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer was always srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.

buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access.

buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.

buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().

buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.

buf_page_io_complete(), buf_dblwr_process(): Check more failures.

fil_space_encrypt(): Simplify the debug checks.

fil_space_t::printed_compression_failure: Remove.

fil_get_compression_alg_name(): Remove.

fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables.

fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).

AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
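The acquire()/release() accessors for buf_tmp_buffer_t::reserved can be sketched with std::atomic. This is only an illustration of the lock-free reservation idea; the member types and memory orders are assumptions, not the actual implementation:

  #include <atomic>

  class buf_tmp_buffer_t {
    std::atomic<bool> reserved{false};  // private, as described above
  public:
    // Try to reserve the slot without acquiring any mutex;
    // returns true on success (lock-free test-and-set).
    bool acquire() {
      bool expected = false;
      return reserved.compare_exchange_strong(expected, true,
                                              std::memory_order_acquire);
    }
    // Make the slot available to other threads again.
    void release() { reserved.store(false, std::memory_order_release); }
  };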
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in the following way: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) indicating that all pages of the file use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what value is assigned to the parameter innodb_checksum_algorithm. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm and the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page.

In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that are written to the file.

We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
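The essence of the full_crc32 format is a single CRC-32C over everything except the trailing checksum bytes. The following sketch assumes a crc32c() routine and a big-endian 4-byte checksum at the end of the page; both are illustrative assumptions, not the exact on-disk rules:

  #include <cstddef>
  #include <cstdint>

  uint32_t crc32c(const uint8_t* data, size_t len);  // assumed to exist

  // Validate a page: one checksum over the whole page contents,
  // excluding the last 4 bytes where the checksum itself is stored.
  bool page_checksum_ok(const uint8_t* page, size_t page_size) {
    const size_t body = page_size - 4;
    const uint32_t stored = uint32_t(page[body]) << 24
                          | uint32_t(page[body + 1]) << 16
                          | uint32_t(page[body + 2]) << 8
                          | uint32_t(page[body + 3]);
    return crc32c(page, body) == stored;
  }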
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN

ALTER TABLE IMPORT does not properly handle instant ALTER metadata. This patch makes IMPORT read, parse and apply instant ALTER metadata at the very beginning of the operation. So, cases where the source table has some metadata and the destination table does not now work fine. DISCARD already removes instant metadata, so importing a normal table into an instant table worked fine before this patch.

decrypt_decompress(): Decrypts and decompresses a page if needed.

handle_instant_metadata(): This should be the first thing to read from the source table. Basically, it applies instant metadata to the destination dict_table_t object. It is also the first place where the FSP flags are read, so all checks of them were moved into this function.

PageConverter::update_index_page(): No longer reads instant metadata; this logic was moved into handle_instant_metadata().

row_import::match_flags(): This is the first part of row_import::match_schema(). As a separate function, it is used by handle_instant_metadata().

fil_space_t::is_full_crc32_compressed(): Added a convenience function.

ha_innobase::discard_or_import_tablespace(): Do not reload the table definition to read instant metadata, because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0.

ANONYMOUS_VAR: A handy thing to use along with make_scope_exit().

full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happen. Thus, SHOW CREATE TABLE shows a slightly older table definition.
4 years ago
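The ANONYMOUS_VAR plus make_scope_exit() combination mentioned above is a scope-guard idiom. Here is a generic sketch of that idiom, not the MariaDB utility itself; ANON_VAR, acquire_something() and release_something() are hypothetical names:

  #include <utility>

  // Generic scope guard: runs f when the object goes out of scope.
  template <typename F>
  struct scope_exit {
    F f;
    ~scope_exit() { f(); }
  };

  template <typename F>
  scope_exit<F> make_scope_exit(F f) { return scope_exit<F>{std::move(f)}; }

  // An ANONYMOUS_VAR-style macro: __COUNTER__ (a widely supported
  // compiler extension) yields a unique variable name per use.
  #define ANON_CAT_(a, b) a##b
  #define ANON_CAT(a, b) ANON_CAT_(a, b)
  #define ANON_VAR(prefix) ANON_CAT(prefix, __COUNTER__)

  void* acquire_something();            // hypothetical resource functions
  void release_something(void*);

  void example() {
    void* res = acquire_something();
    auto ANON_VAR(guard) = make_scope_exit([res] { release_something(res); });
    // ...every early return from here on releases res automatically...
  }

The benefit is that cleanup is tied to scope, so error paths cannot forget it.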
MDEV-11369 Instant ADD COLUMN for InnoDB

For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC record formats so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation.

ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.

Some usability limitations will be addressed in subsequent work:
MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used, the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX.

(2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields, introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.)

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC.

This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also, the semantics of higher-level redo log records that modify the PAGE_INSTANT field are changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC), because it would contain more fields than the rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record.

len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.

dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.

dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value().

dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column.

dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name().

dict_index_t::n_core_fields: The original number of fields. For secondary indexes, and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.

dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).

dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not yet initialized from the clustered index root page.

dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value().

dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN.

dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back.

dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant().

dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN.

dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN.

prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. Dictionary objects will only be added to the cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to the cache.

dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and, if needed, set can_be_evicted.

dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point).

dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns().

pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns().

row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns().

create_index_dict(): Replaces row_merge_create_index_graph().

innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs.

btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary.

dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables.

dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE.

dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant().

row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE.

commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.)

PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This repurposes the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.

page_get_instant(), page_set_instant(): Accessors for PAGE_INSTANT.

page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION.

page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.

page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION.

rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields.

rec_offs_make_valid(): Add the parameter 'leaf'.

rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present.

dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple().

rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields.

cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records.

trx_t::in_rollback: Make the field available in non-debug builds.

trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written.

btr_index_rec_validate(): Account for instant ADD COLUMN.

row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index.

row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case.

dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted.

btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed.

row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low().

rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum/supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them.

rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields.

btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and the adaptive hash index for the 'default row'.

row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t.

rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'.

REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record.

rec_comp_status_t: An enum of the status bit values.

rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
8 years ago
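The PAGE_INSTANT packing can be sketched as plain bit manipulation on the 16-bit word: the least significant 3 bits keep the direction, and the upper 13 bits hold the core field count. The helpers below operate on a bare uint16_t for illustration; the real accessors read and write the page frame:

  #include <cstdint>

  // The 16-bit word formerly dedicated to PAGE_DIRECTION: the low
  // 3 bits still hold the direction, the upper 13 bits PAGE_INSTANT.
  inline uint16_t page_get_direction(uint16_t w) { return w & 7; }
  inline uint16_t page_get_instant(uint16_t w)   { return w >> 3; }

  // Store n_core_fields (must fit in 13 bits) while preserving
  // the direction bits.
  inline uint16_t page_set_instant(uint16_t w, uint16_t n_core_fields) {
    return uint16_t((n_core_fields << 3) | (w & 7));
  }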
MDEV-15662 Instant DROP COLUMN or changing the order of columns

Allow ADD COLUMN anywhere in a table, not only adding as the last column. Allow instant DROP COLUMN and instant changing of the order of columns.

The added columns will always be added last in clustered index records. In new records, instantly dropped columns will be stored as NULL or empty when possible.

Information about dropped and reordered columns will be written in a metadata BLOB (mblob), which is stored before the first 'user' field in the hidden metadata record at the start of the clustered index. The presence of the mblob is indicated by setting the delete-mark flag in the metadata record.

The metadata BLOB stores the number of clustered index fields, followed by an array of column information for each field. For dropped columns, we store the NOT NULL flag, the fixed length, and for variable-length columns, whether the maximum length exceeded 255 bytes. For non-dropped columns, we store the column position. (A sketch of this layout follows at the end of this entry.)

Unlike with MDEV-11369, when a table becomes empty, it cannot be converted back to the canonical format. The reason for this is that other threads may hold cached objects such as row_prebuilt_t::ins_node that could refer to dropped or reordered index fields.

For instant DROP COLUMN and ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC, we must store the n_core_null_bytes in the root page, so that the chain of node pointer records can be followed in order to reach the leftmost leaf page where the metadata record is located. If the mblob is present, we will zero-initialize the strings "infimum" and "supremum" in the root page, and use the last byte of "supremum" for storing the number of null bytes (which are allocated but useless on node pointer pages). This is necessary for btr_cur_instant_init_metadata() to be able to navigate to the mblob.

If the PRIMARY KEY contains any variable-length column and some nullable columns were instantly dropped, the dict_index_t::n_nullable in the data dictionary could be smaller than it actually is in the non-leaf pages. Because of this, the non-leaf pages could use more bytes for the null flags than the data dictionary expects, and we could be reading the lengths of the variable-length columns from the wrong offset, and thus reading the child page number from the wrong place. This is the result of two design mistakes that involve unnecessary storage of data: First, it is nonsense to store any data fields for the leftmost node pointer records, because the comparisons would be resolved by the MIN_REC_FLAG alone. Second, there cannot be any null fields in the clustered index node pointer fields, but we nevertheless reserve space for all the null flags.

Limitations (future work):
MDEV-17459 Allow instant ALTER TABLE even if FULLTEXT INDEX exists
MDEV-17468 Avoid table rebuild on operations on generated columns
MDEV-17494 Refuse ALGORITHM=INSTANT when the row size is too large

btr_page_reorganize_low(): Preserve any metadata in the root page. Call lock_move_reorganize_page() only after restoring the "infimum" and "supremum" records, to avoid a memcmp() assertion failure.

dict_col_t::DROPPED: Magic value for dict_col_t::ind.

dict_col_t::clear_instant(): Renamed from dict_col_t::remove_instant(). Do not assert that the column was instantly added, because we sometimes call this unconditionally for all columns. Convert an instantly added column to a "core column". The old name remove_instant() could be mistaken to refer to "instant DROP COLUMN".

dict_col_t::is_added(): Renamed from dict_col_t::is_instant().

dtype_t::metadata_blob_init(): Initialize the mblob data type.

dtuple_t::is_metadata(), dtuple_t::is_alter_metadata(), upd_t::is_metadata(), upd_t::is_alter_metadata(): Check if info_bits refer to a metadata record.

dict_table_t::instant: Metadata about dropped or reordered columns.

dict_table_t::prepare_instant(): Prepare ha_innobase_inplace_ctx::instant_table for instant ALTER TABLE. innobase_instant_try() will pass this to dict_table_t::instant_column(). On rollback, dict_table_t::rollback_instant() will be called.

dict_table_t::instant_column(): Renamed from instant_add_column(). Add the parameter col_map so that columns can be reordered. Copy and adjust v_cols[] as well.

dict_table_t::find(): Find an old column based on a new column number.

dict_table_t::serialise_columns(), dict_table_t::deserialise_columns(): Convert the mblob.

dict_index_t::instant_metadata(): Create the metadata record for instant ALTER TABLE. Invoke dict_table_t::serialise_columns().

dict_index_t::reconstruct_fields(): Invoked by dict_table_t::deserialise_columns().

dict_index_t::clear_instant_alter(): Move the fields for the dropped columns to the end, and sort the surviving index fields in ascending order of column position.

ha_innobase::check_if_supported_inplace_alter(): Do not allow adding a FTS_DOC_ID column if a hidden FTS_DOC_ID column exists due to FULLTEXT INDEX. (This always required ALGORITHM=COPY.)

instant_alter_column_possible(): Add a parameter for the InnoDB table, to check for additional conditions, such as the maximum number of index fields.

ha_innobase_inplace_ctx::first_alter_pos: The first column whose position is affected by instant ADD, DROP, or changing the order of columns.

innobase_build_col_map(): Skip added virtual columns.

prepare_inplace_add_virtual(): Correctly compute num_to_add_vcol. Remove some unnecessary code. Note that the call to innodb_base_col_setup() should be executed later.

commit_try_norebuild(): If ctx->is_instant(), let the virtual columns be added or dropped by innobase_instant_try().

innobase_instant_try(): Fill in a zero default value for the hidden column FTS_DOC_ID (to reduce the work needed in MDEV-17459). If any columns were dropped or reordered (or added not last), delete any SYS_COLUMNS records for the following columns, and insert SYS_COLUMNS records for all subsequent stored columns as well as for all virtual columns. If any virtual column is dropped, rewrite all virtual column metadata. Use a shortcut only for adding virtual columns. This is because innobase_drop_virtual_try() assumes that the dropped virtual columns still exist in ctx->old_table.

innodb_update_cols(): Renamed from innodb_update_n_cols().

innobase_add_one_virtual(), innobase_insert_sys_virtual(): Change the return type to bool, and invoke my_error() when detecting an error.

innodb_insert_sys_columns(): Insert a record into SYS_COLUMNS. Refactored from innobase_add_one_virtual() and innobase_instant_add_col().

innobase_instant_add_col(): Replace the parameter dfield with type.

innobase_instant_drop_cols(): Drop matching columns from SYS_COLUMNS and all columns from SYS_VIRTUAL.

innobase_add_virtual_try(), innobase_drop_virtual_try(): Let the caller invoke innodb_update_cols().

innobase_rename_column_try(): Skip dropped columns.

commit_cache_norebuild(): Update table->fts->doc_col.

dict_mem_table_col_rename_low(): Skip dropped columns.

trx_undo_rec_get_partial_row(): Skip dropped columns.

trx_undo_update_rec_get_update(): Handle the metadata BLOB correctly.

trx_undo_page_report_modify(): Avoid out-of-bounds access to record fields. Log metadata records consistently. Apparently, the first fields of a clustered index may be updated in an update_undo vector when the index is ID_IND of SYS_FOREIGN, as part of renaming the table during ALTER TABLE. Normally, updates of the PRIMARY KEY should be logged as a delete-mark and an insert.

row_undo_mod_parse_undo_rec(), row_purge_parse_undo_rec(): Use trx_undo_metadata.

row_undo_mod_clust_low(): On metadata rollback, roll back the root page too.

row_undo_mod_clust(): Relax an assertion. The delete-mark flag was repurposed for ALTER TABLE metadata records.

row_rec_to_index_entry_impl(): Add the template parameter mblob and the optional parameter info_bits for specifying the desired new info bits. For the metadata tuple, allow conversion between the original format (ADD COLUMN only) and the generic format (with hidden BLOB). Add the optional parameter "pad" to determine whether the tuple should be padded to the index fields (on ALTER TABLE it should), or whether it should remain at its original size (on rollback).

row_build_index_entry_low(): Clean up the code, removing redundant variables and conditions. For instantly dropped columns, generate a dummy value that is NULL, the empty string, or a fixed length of NUL bytes, depending on the type of the dropped column.

row_upd_clust_rec_by_insert_inherit_func(): On the update of the PRIMARY KEY of a record that contained a dropped column whose value was stored externally, we will be inserting a dummy NULL or empty string value to the field of the dropped column. The externally stored column would eventually be dropped when purge removes the delete-marked record for the old PRIMARY KEY value.

btr_index_rec_validate(): Recognize the metadata record.

btr_discard_only_page_on_level(): Preserve the generic instant ALTER TABLE metadata.

btr_set_instant(): Replaces page_set_instant(). This sets a clustered index root page to the appropriate format, or upgrades from the MDEV-11369 instant ADD COLUMN format to the generic ALTER TABLE format.

btr_cur_instant_init_low(): Read and validate the metadata BLOB page before reconstructing the dictionary information based on it.

btr_cur_instant_init_metadata(): Do not read any lengths from the metadata record header before reading the BLOB. At this point, we would not actually know how many nullable fields the metadata record contains.

btr_cur_instant_root_init(): Initialize n_core_null_bytes in one of two possible ways.

btr_cur_trim(): Handle the mblob record.

row_metadata_to_tuple(): Convert a metadata record to a data tuple, based on the new info_bits of the metadata record.

btr_cur_pessimistic_update(): Invoke row_metadata_to_tuple() if needed. Invoke dtuple_convert_big_rec() for metadata records if the record is too large, or if the mblob is not yet marked as externally stored.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): When the last user record is deleted, do not delete the generic instant ALTER TABLE metadata record. Only delete MDEV-11369 instant ADD COLUMN metadata records.

btr_cur_optimistic_insert(): Avoid unnecessary computation of rec_size.

btr_pcur_store_position(): Allow a logically empty page to contain a metadata record for generic ALTER TABLE.

REC_INFO_DEFAULT_ROW_ADD: Renamed from REC_INFO_DEFAULT_ROW. This is for the old instant ADD COLUMN (MDEV-11369) only.

REC_INFO_DEFAULT_ROW_ALTER: The more generic metadata record, with additional information for dropped or reordered columns.

rec_info_bits_valid(): Remove. The only case when this would fail is when the record is the generic ALTER TABLE metadata record.

rec_is_alter_metadata(): Check if a record is the metadata record for instant ALTER TABLE (other than ADD COLUMN). NOTE: This function must not be invoked on node pointer records, because the delete-mark flag in those records may be set (it is garbage), and then a debug assertion could fail because index->is_instant() does not necessarily hold.

rec_is_add_metadata(): Check if a record is an MDEV-11369 ADD COLUMN metadata record (not the more generic instant ALTER TABLE).

rec_get_converted_size_comp_prefix_low(): Assume that the metadata field will be stored externally. In dtuple_convert_big_rec() during the rec_get_converted_size() call, it would not be there yet.

rec_get_converted_size_comp(): Replace status,fields,n_fields with tuple.

rec_init_offsets_comp_ordinary(), rec_get_converted_size_comp_prefix_low(), rec_convert_dtuple_to_rec_comp(): Add template<bool mblob = false>. With mblob=true, process a record with a metadata BLOB.

rec_copy_prefix_to_buf(): Assert that no fields beyond the key and system columns are being copied. Exclude the metadata BLOB field.

rec_convert_dtuple_to_metadata_comp(): Convert an alter metadata tuple into a record.

row_upd_index_replace_metadata(): Apply an update vector to an alter_metadata tuple.

row_log_allocate(): Replace dict_index_t::is_instant() with a more appropriate condition that ignores dict_table_t::instant. Only a table on which the MDEV-11369 ADD COLUMN was performed can "lose its instantness" when it becomes empty. After instant DROP COLUMN or reordering columns, we cannot simply convert the table to the canonical format, because the data dictionary cache and all possibly existing references to it from other client connection threads would have to be adjusted.

row_quiesce_write_index_fields(): Do not crash when the table contains an instantly dropped column.

Thanks to Thirunarayanan Balathandayuthapani for discussing the design and implementing an initial prototype of this. Thanks to Matthias Leich for testing.
7 years ago
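The information content of the metadata BLOB can be sketched as a simple serializer. The byte-level encoding below (flag bits, big-endian integers) is an invented illustration of what is stored per field, not the actual InnoDB on-disk format:

  #include <cstdint>
  #include <vector>

  // Per-field information carried in the mblob, per the description above.
  struct field_info {
    bool     dropped;
    bool     not_null;        // dropped columns: NOT NULL flag
    uint16_t fixed_len;       // dropped columns: fixed length, 0 if variable
    bool     max_len_gt_255;  // dropped variable-length columns only
    uint16_t col_pos;         // surviving columns: position in the table
  };

  // Invented encoding: field count first, then one entry per field.
  std::vector<uint8_t> serialise_columns(const std::vector<field_info>& fields)
  {
    std::vector<uint8_t> blob;
    blob.push_back(uint8_t(fields.size() >> 8));
    blob.push_back(uint8_t(fields.size()));
    for (const field_info& f : fields) {
      if (f.dropped) {
        uint8_t flags = 0x80;                 // bit 7: column was dropped
        if (f.not_null)       flags |= 0x40;
        if (f.max_len_gt_255) flags |= 0x20;
        blob.push_back(flags);
        blob.push_back(uint8_t(f.fixed_len >> 8));
        blob.push_back(uint8_t(f.fixed_len));
      } else {
        blob.push_back(0);                    // surviving column
        blob.push_back(uint8_t(f.col_pos >> 8));
        blob.push_back(uint8_t(f.col_pos));
      }
    }
    return blob;
  }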
MDEV-11369 Instant ADD COLUMN for InnoDB For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables. This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation. ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN. Some usability limitations will be addressed in subsequent work: MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE The format of the clustered index (PRIMARY KEY) is changed as follows: (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX. (2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked. (3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert. (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.) We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC. 
This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field is changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively. When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index. UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record. len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL. dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT. dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value(). dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column. dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name(). dict_index_t::n_core_fields: The original number of fields. For secondary indexes and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields. dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable). dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page. dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value(). dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN. dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back. dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant(). dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN. dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN. prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. 
Dictionary objects will only be added to cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to cache. dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and if needed, set can_be_evicted. dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point). dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns(). pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns(). row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns(). create_index_dict(): Replaces row_merge_create_index_graph(). innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs. btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary. dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables. dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE. dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant(). row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE. commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.) PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This is repurposing the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B. page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT. page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION. page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION. page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION. rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields. rec_offs_make_valid(): Add the parameter 'leaf'. rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present. dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple(). rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields. cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records. trx_t::in_rollback: Make the field available in non-debug builds. trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. 
This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written. btr_index_rec_validate(): Account for instant ADD COLUMN row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index. row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case. dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted. btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed. row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low(). rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them. rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields. btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'. btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'. row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t. rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'. REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record. rec_comp_status_t: An enum of the status bit values. rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
8 years ago
MDEV-11369 Instant ADD COLUMN for InnoDB For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables. This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation. ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN. Some usability limitations will be addressed in subsequent work: MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE The format of the clustered index (PRIMARY KEY) is changed as follows: (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX. (2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked. (3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert. (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.) We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC. 
This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field is changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively. When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index. UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record. len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL. dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT. dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value(). dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column. dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name(). dict_index_t::n_core_fields: The original number of fields. For secondary indexes and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields. dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable). dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page. dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value(). dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN. dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back. dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant(). dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN. dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN. prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. 
Dictionary objects will only be added to cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to cache. dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and if needed, set can_be_evicted. dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point). dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns(). pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns(). row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns(). create_index_dict(): Replaces row_merge_create_index_graph(). innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs. btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary. dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables. dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE. dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant(). row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE. commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.) PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This is repurposing the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B. page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT field. page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION. page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION. page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION. rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields offsets for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields. rec_offs_make_valid(): Add the parameter 'leaf'. rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present. dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple(). rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields. cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records. trx_t::in_rollback: Make the field available in non-debug builds. trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records.
This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is written. btr_index_rec_validate(): Account for instant ADD COLUMN. row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index. row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case. dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted. btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed. row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low(). rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum or supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them. rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields. btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'. btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'. row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this purpose in ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t. rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'. REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record. rec_comp_status_t: An enum of the status bit values. rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
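The dtuple_t::trim() rule above can be sketched as follows, with a hypothetical field type; the invariant is the one stated: drop the longest suffix of fields whose values match the 'default row', but never go below the number of 'core' fields:

    #include <cstddef>

    /* Hypothetical stand-in for one field of an index entry; eq_default
    is true when the value equals the 'default row' value. */
    struct dfield { bool eq_default; };

    /* Return the trimmed field count for a clustered index entry. */
    inline size_t trim_n_fields(const dfield* fields, size_t n_fields,
                                size_t n_core_fields)
    {
      while (n_fields > n_core_fields && fields[n_fields - 1].eq_default)
        --n_fields;
      return n_fields;
    }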
8 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() will rely solely on buf_pool.mutex, with no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would appear X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 for how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so the IORequest is a better home for this rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring the hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.
buf_pool_t::watch[]: Remove pointer indirection. buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero. buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name. buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function. buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete(). buf_dblwr_t::in_use: Remove. buf_dblwr_t::buf_block_array: Add IORequest::flush_t. buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes(). buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t. buf_flush_freed_page(): Simplify the code. recv_sys_t::flush_lru: Renamed from flush_type and changed to bool. fil_read(), fil_write(): Replaced with direct use of fil_io(). fil_buffering_disabled(): Remove. Check srv_file_flush_method directly. fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller. fil_report_invalid_page_access(): Clean up the parameters. fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io(). fil_aio_callback(): Rewrite to replace buf_page_io_complete(). fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups. fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter. fil_close_tablespace(): Remove unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE. row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace(). logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close(). innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files(). fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability. thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub. Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
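Several of the counters above (buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads) switch to Atomic_counter. A minimal sketch of such a wrapper, assuming that relaxed memory ordering suffices because no other data is published through the counter:

    #include <atomic>

    /* A counter whose updates and reads use relaxed ordering: the count
    itself is the only shared state, so no ordering is imposed on other
    memory accesses. */
    template <typename Type>
    class counter_sketch
    {
      std::atomic<Type> m{0};
    public:
      Type operator++(int) { return m.fetch_add(1, std::memory_order_relaxed); }
      Type operator--(int) { return m.fetch_sub(1, std::memory_order_relaxed); }
      operator Type() const { return m.load(std::memory_order_relaxed); }
    };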
5 years ago
MDEV-14407 Assertion failure during rollback Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS which is what was chosen in MDEV-12288, corresponding to reset_trx_id. No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432 which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED. reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h. btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id. btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page. ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage. row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id. row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields(). row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.) trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0. trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it. dict_index_t::is_primary(): Check if the index is the primary key. PageConverter::adjust_cluster_record(): Fix MDEV-15249 Crash in MVCC read after IMPORT TABLESPACE by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
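Spelled out in bytes, the canonical value above is straightforward: DB_TRX_ID is a 6-byte big-endian integer and DB_ROLL_PTR a 7-byte (56-bit) one, so roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS sets only the most significant bit of the roll pointer. A sketch assuming this 6+7-byte layout (the real reset_trx_id constant is defined in data0type.cc):

    #include <cstdint>

    const unsigned ROLL_PTR_INSERT_FLAG_POS = 55;
    const uint64_t safe_roll_ptr = uint64_t(1) << ROLL_PTR_INSERT_FLAG_POS;

    /* DB_TRX_ID = 0 followed by DB_ROLL_PTR = 1 << 55, big-endian. */
    const unsigned char reset_sys_fields[6 + 7] = {
      0, 0, 0, 0, 0, 0,        /* DB_TRX_ID */
      0x80, 0, 0, 0, 0, 0, 0   /* DB_ROLL_PTR */
    };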
8 years ago
MDEV-14407 Assertion failure during rollback Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS which is what was chosen in MDEV-12288, corresponding to reset_trx_id. No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432 which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED. reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h. btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id. btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page. ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage. row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id. row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields(). row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.) trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0. trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it. dict_index_t::is_primary(): Check if the index is the primary key. PageConverter::adjust_cluster_record(): Fix MDEV-15249 Crash in MVCC read after IMPORT TABLESPACE by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
MDEV-14407 Assertion failure during rollback Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS which is what was chosen in MDEV-12288, corresponding to reset_trx_id. No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432 which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED. reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h. btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id. btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page. ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage. row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id. row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields(). row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.) trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0. trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it. dict_index_t::is_primary(): Check if the index is the primary key. PageConverter::adjust_cluster_record(): Fix MDEV-15249 Crash in MVCC read after IMPORT TABLESPACE by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
MDEV-14407 Assertion failure during rollback Rollback attempted to dereference DB_ROLL_PTR=0, which cannot possibly be a valid undo log pointer. A safer canonical value would be roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS which is what was chosen in MDEV-12288, corresponding to reset_trx_id. No deterministic test case for the bug was found. The simplest test cases may be related to MDEV-11415, which suppresses undo logging for ALGORITHM=COPY operations. In those operations, in the spirit of MDEV-12288, we should actually have written reset_trx_id instead of using the transaction identifier of the current transaction (and a bogus value of DB_ROLL_PTR=0). However, thanks to MySQL Bug#28432 which I had fixed in MySQL 5.6.8 as part of WL#6255, access to the rebuilt table by earlier-started transactions should actually have been refused with ER_TABLE_DEF_CHANGED. reset_trx_id: Move the definition to data0type.cc and the declaration to data0type.h. btr_cur_ins_lock_and_undo(): When undo logging is disabled, use the safe value that corresponds to reset_trx_id. btr_cur_optimistic_insert(): Validate the DB_TRX_ID,DB_ROLL_PTR before inserting into a clustered index leaf page. ins_node_t::sys_buf[]: Replaces row_id_buf and trx_id_buf and some heap usage. row_ins_alloc_sys_fields(): Init ins_node_t::sys_buf[] to reset_trx_id. row_ins_buf(): Only if undo logging is enabled, copy trx->id to node->sys_buf. Otherwise, rely on the initialization in row_ins_alloc_sys_fields(). row_purge_reset_trx_id(): Invoke mlog_write_string() with reset_trx_id directly. (No functional change.) trx_undo_page_report_modify(): Assert that the DB_ROLL_PTR is not 0. trx_undo_get_undo_rec_low(): Assert that the roll_ptr is valid before trying to dereference it. dict_index_t::is_primary(): Check if the index is the primary key. PageConverter::adjust_cluster_record(): Fix MDEV-15249 Crash in MVCC read after IMPORT TABLESPACE by resetting the system fields to reset_trx_id instead of writing the current transaction ID (which will be committed at the end of the IMPORT TABLESPACE) and DB_ROLL_PTR=0. This can partially be viewed as a follow-up fix of MDEV-12288, because IMPORT should already then have written DB_TRX_ID=0 and DB_ROLL_PTR=1<<55 to prevent unnecessary DB_TRX_ID lookups in subsequent accesses to the table.
8 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge to its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so we had better introduce for the rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() requires that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us ot eliminate buf_page_t::real_size. Replace a condition with debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch. 
buf_pool_t::watch[]: Remove pointer indirection. buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero. buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name. buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function. buf_page_io_complete(): Split to buf_page_read_complete() and buf_page_write_complete(). buf_dblwr_t::in_use: Remove. buf_dblwr_t::buf_block_array: Add IORequest::flush_t. buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes(). buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t. buf_flush_freed_page(): Simplify the code. recv_sys_t::flush_lru: Renamed from flush_type and changed to bool. fil_read(), fil_write(): Replaced with direct use of fil_io(). fil_buffering_disabled(): Remove. Check srv_file_flush_method directly. fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller. fil_report_invalid_page_access(): Clean up the parameters. fil_io(): Return fil_io_t, which comprises fil_node_t and error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io(). fil_aio_callback(): Rewrite to replace buf_page_io_complete(). fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups. fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter. fil_close_tablespace(): Remove unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE. row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace(). logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close(). innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files(). fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability. thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub. Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge to its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so we had better introduce for the rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() requires that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us ot eliminate buf_page_t::real_size. Replace a condition with debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch. 
buf_pool_t::watch[]: Remove pointer indirection. buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero. buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name. buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function. buf_page_io_complete(): Split to buf_page_read_complete() and buf_page_write_complete(). buf_dblwr_t::in_use: Remove. buf_dblwr_t::buf_block_array: Add IORequest::flush_t. buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes(). buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t. buf_flush_freed_page(): Simplify the code. recv_sys_t::flush_lru: Renamed from flush_type and changed to bool. fil_read(), fil_write(): Replaced with direct use of fil_io(). fil_buffering_disabled(): Remove. Check srv_file_flush_method directly. fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller. fil_report_invalid_page_access(): Clean up the parameters. fil_io(): Return fil_io_t, which comprises fil_node_t and error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io(). fil_aio_callback(): Rewrite to replace buf_page_io_complete(). fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups. fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter. fil_close_tablespace(): Remove unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE. row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace(). logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close(). innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files(). fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability. thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub. Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention

User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE.

We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently.

Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, relying only on the buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() will rely solely on buf_pool.mutex, with no buf_pool.page_hash latch at all.

The buf_pool.mutex is rather contended in I/O-heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858, which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058).

UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug.

buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would appear X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 for how this glitch was finally caught in a debug environment.

We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability.

buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown.
buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists.
buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash.
buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm().
buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES.
buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter.
buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out().
buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed.
buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary() for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables.
buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments.
buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.
buf_LRU_free_page(): Clarify the function comment.
buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash.
buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch().
buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t.
ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops.
fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.
buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here.
buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex.
IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else.
buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call, and that function already takes IORequest as a parameter, so we had better store the rarely changing field there.
buf_block_t::init(): Replaces buf_page_init().
buf_page_t::init(): Replaces buf_page_init_low().
buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list.
buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring the hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block.
buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables.
buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion.
buf_page_should_punch_hole(): Remove.
buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size).
buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size).
fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired.
fil_node_t::complete_io(): Renamed from fil_node_complete_io().
fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close(), because fil_system.n_open has already been decremented in fil_space_t::detach().
BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.
BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification().
BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.
buf_pool_t::watch[]: Remove pointer indirection.
buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero.
buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name.
buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.
buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().
buf_dblwr_t::in_use: Remove.
buf_dblwr_t::buf_block_array: Add IORequest::flush_t.
buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().
buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.
buf_flush_freed_page(): Simplify the code.
recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.
fil_read(), fil_write(): Replaced with direct use of fil_io().
fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.
fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.
fil_report_invalid_page_access(): Clean up the parameters.
fil_io(): Return fil_io_t, which comprises fil_node_t and the error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().
fil_aio_callback(): Rewrite to replace buf_page_io_complete().
fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.
fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.
fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.
row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().
logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().
innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().
fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.
thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
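To make the latching scheme described in the MDEV-15053 entry above more concrete, here is a minimal, self-contained C++ sketch of the general technique: packing a page's buffer-fix count and I/O-fix state into a single std::atomic word, so that pinning a page needs no per-block mutex. All names here (page_state, set_io_read() and so on) are invented for the illustration and do not match the actual InnoDB declarations.

    #include <atomic>
    #include <cstdint>
    #include <cassert>

    struct page_state
    {
      // Bits 0..29: buffer-fix count; bits 30..31: I/O-fix state.
      static constexpr uint32_t FIX_MASK = (1u << 30) - 1;
      static constexpr uint32_t IO_READ  = 1u << 30;
      static constexpr uint32_t IO_WRITE = 2u << 30;

      std::atomic<uint32_t> word{0};

      void fix() { word.fetch_add(1, std::memory_order_acquire); }
      void unfix()
      {
        uint32_t w = word.fetch_sub(1, std::memory_order_release);
        assert(w & FIX_MASK); // must have been fixed
      }
      // Transition to IO_READ only if no I/O fix is currently set.
      bool set_io_read()
      {
        uint32_t w = word.load(std::memory_order_relaxed);
        do
        {
          if (w & ~FIX_MASK)
            return false; // some I/O is already in progress
        }
        while (!word.compare_exchange_weak(w, w | IO_READ,
                                           std::memory_order_acquire));
        return true;
      }
      void io_unfix()
      { word.fetch_and(FIX_MASK, std::memory_order_release); }
    };

    int main()
    {
      page_state p;
      p.fix();                   // pin the page: no mutex needed
      bool ok = p.set_io_read(); // start a read: succeeds, page was idle
      p.io_unfix();              // I/O completed
      p.unfix();                 // unpin
      return ok ? 0 : 1;
    }

The release ordering on unfix() and io_unfix() is what lets another thread that observes the cleared state also observe all prior modifications to the page.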
MDEV-11369 Instant ADD COLUMN for InnoDB

For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation. ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.

Some usability limitations will be addressed in subsequent work:
MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used, or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX.

(2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.)

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC.

This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field is changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record.
len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.
dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.
dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value().
dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column.
dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name().
dict_index_t::n_core_fields: The original number of fields. For secondary indexes and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.
dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).
dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page.
dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value().
dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN.
dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back.
dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant().
dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN.
dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN.
prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. Dictionary objects will only be added to cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to cache.
dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and, if needed, set can_be_evicted.
dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point).
dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns().
pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns().
row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns().
create_index_dict(): Replaces row_merge_create_index_graph().
innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs.
btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary.
dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables.
dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE.
dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant().
row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE.
commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.)
PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This is repurposing the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.
page_get_instant(), page_set_instant(): Accessors for PAGE_INSTANT.
page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION.
page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.
page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION.
rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields.
rec_offs_make_valid(): Add the parameter 'leaf'.
rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present.
dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple().
rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields.
cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records.
trx_t::in_rollback: Make the field available in non-debug builds.
trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written.
btr_index_rec_validate(): Account for instant ADD COLUMN.
row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index.
row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case.
dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted.
btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed.
row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low().
rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them.
rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields.
btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'.
btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'.
row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t.
rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'.
REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record.
rec_comp_status_t: An enum of the status bit values.
rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
8 years ago
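As a reading aid for point (3) of the MDEV-11369 entry above: a record may physically store fewer fields than the table currently defines, and the reader substitutes values from the 'default row' for the missing suffix. The following self-contained C++ sketch shows only that substitution rule, with invented types; it is not the InnoDB implementation, which operates on physical record offsets rather than std::vector.

    #include <vector>
    #include <string>
    #include <cstddef>

    struct index_def
    {
      size_t n_core_fields;                 // fields present in every record
      std::vector<std::string> default_row; // values captured at ALTER time
      size_t n_fields() const { return default_row.size(); }
    };

    // Return the value of field 'i' of a possibly "short" record.
    const std::string& field_value(const index_def& index,
                                   const std::vector<std::string>& rec,
                                   size_t i)
    {
      // rec.size() may shrink down to n_core_fields, never below;
      // any missing trailing field is read from the 'default row'.
      return i < rec.size() ? rec[i] : index.default_row[i];
    }

    int main()
    {
      // Table originally had 2 columns; one column was added instantly
      // with default "x". Old records store only the 2 core fields.
      index_def index{2, {"", "", "x"}};
      std::vector<std::string> old_rec{"pk1", "a"};      // pre-ALTER record
      std::vector<std::string> new_rec{"pk2", "b", "y"}; // post-ALTER record
      return field_value(index, old_rec, 2) == "x" &&
             field_value(index, new_rec, 2) == "y" ? 0 : 1;
    }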
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines.
fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.
buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access.
buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.
buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().
buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.
buf_page_io_complete(), buf_dblwr_process(): Check more failures.
fil_space_encrypt(): Simplify the debug checks.
fil_space_t::printed_compression_failure: Remove.
fil_get_compression_alg_name(): Remove.
fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables.
fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).
AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
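The first item of the MDEV-13103 entry above is about a calling convention: let the caller detect decompression failure, and prefer bound-checked ("safe") decompression routines. Here is a minimal sketch of that convention, using zlib's bound-checked uncompress() as a stand-in codec; the helper name and signature are invented for the example, not taken from InnoDB.

    #include <zlib.h>
    #include <cstddef>

    // Returns the decompressed length, or 0 on any failure; never prints
    // or aborts, so the caller can treat the page as corrupted instead.
    size_t page_decompress(unsigned char* dst, size_t dst_size,
                           const unsigned char* src, size_t src_size)
    {
      uLongf out_len = dst_size;
      // uncompress() is the "safe" variant: it refuses to write past
      // dst_size, unlike raw stream decoders trusted to self-terminate.
      if (uncompress(dst, &out_len, src, src_size) != Z_OK)
        return 0;
      return out_len;
    }

(Link with -lz.) The important property is the single in-band error value: every caller is forced to decide what a corrupted page means in its own context.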
MDEV-13328 ALTER TABLE…DISCARD TABLESPACE takes a lot of time

With a big buffer pool that contains many data pages, DISCARD TABLESPACE took a long time, because it would scan the entire buffer pool to remove any pages that belong to the tablespace. With a large buffer pool, this would take a lot of time, especially when the table-to-discard is empty.

The minimum amount of work that DISCARD TABLESPACE must do is to remove the pages of the to-be-discarded table from the buf_pool->flush_list, because any writes to the data file must be prevented before the file is deleted.

If DISCARD TABLESPACE does not evict the pages from the buffer pool, then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD, not-yet-evicted pages from being mistaken for pages of the imported tablespace. It would not be a useful fix to simply move the buffer pool scan to the IMPORT TABLESPACE step. What we can do is to actively evict those pages that could be mistaken for imported pages. In this way, when importing a small table into a big buffer pool, the import should still run relatively fast.

Import bypasses the buffer pool when reading pages for the adjustment phase. In the adjustment phase, if a page exists in the buffer pool, we could replace it with the page from the imported file. Unfortunately I did not get this to work properly, so instead we will simply evict any matching page from the buffer pool.

buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode where the requested page will be evicted if it is found. There must be no unwritten changes for the page.
buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write to file is desired, and use a separate parameter bool drop_ahi.
buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Replace buf_remove_t.
buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove.
PageConverter::m_mtr: A dummy mini-transaction buffer.
PageConverter::PageConverter(): Complete the member initialization list.
PageConverter::operator()(): Evict any 'shadow' pages from the buffer pool so that pre-existing (garbage) pages cannot be mistaken for pages that exist in the being-imported file.
row_discard_tablespace(): Remove a bogus comment that seems to refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
8 years ago
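A rough illustration of the BUF_EVICT_IF_IN_POOL idea from the MDEV-13328 entry above: rather than scanning the whole pool at DISCARD time, IMPORT evicts exactly the page identifiers it is about to overwrite, if they happen to be cached and clean. This is a toy model with invented names (a hash map standing in for buf_pool.page_hash); the real code must also hold the appropriate latches and account for pending I/O.

    #include <unordered_map>
    #include <cstdint>

    struct cached_page { uint64_t oldest_modification; /* 0 = clean */ };

    using pool_map = std::unordered_map<uint64_t, cached_page>;

    // Evict page_id if present; returns false if the page is dirty and
    // therefore must not simply be dropped, since an unwritten change
    // could otherwise be mistaken for (or clobber) imported contents.
    bool evict_if_in_pool(pool_map& pool, uint64_t page_id)
    {
      auto it = pool.find(page_id);
      if (it == pool.end())
        return true;               // nothing cached: nothing to do
      if (it->second.oldest_modification != 0)
        return false;              // dirty: caller must handle
      pool.erase(it);              // clean: safe to drop immediately
      return true;
    }

    int main()
    {
      pool_map pool{{42, {0}}, {43, {7}}};
      bool a = evict_if_in_pool(pool, 42); // clean page: evicted
      bool b = evict_if_in_pool(pool, 43); // dirty page: left alone
      return (a && !b && pool.size() == 1) ? 0 : 1;
    }

The point of the per-page lookup is that the cost scales with the size of the imported table, not with the size of the buffer pool.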
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge to its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so we had better introduce for the rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() requires that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us ot eliminate buf_page_t::real_size. Replace a condition with debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch. 
buf_pool_t::watch[]: Remove pointer indirection.

buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero.

buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name.

buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.

buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().

buf_dblwr_t::in_use: Remove.

buf_dblwr_t::buf_block_array: Add IORequest::flush_t.

buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().

buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.

buf_flush_freed_page(): Simplify the code.

recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.

fil_read(), fil_write(): Replaced with direct use of fil_io().

fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.

fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.

fil_report_invalid_page_access(): Clean up the parameters.

fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().

fil_aio_callback(): Rewrite to replace buf_page_io_complete().

fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.

fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.

fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.

row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().

logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().

innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().

fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.

thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
MDEV-13328 ALTER TABLE…DISCARD TABLESPACE takes a lot of time

With a big buffer pool that contains many data pages, DISCARD TABLESPACE took a long time, because it would scan the entire buffer pool to remove any pages that belong to the tablespace; this was slow even when the table to be discarded was empty.

The minimum amount of work that DISCARD TABLESPACE must do is to remove the pages of the to-be-discarded table from buf_pool->flush_list, because any writes to the data file must be prevented before the file is deleted.

If DISCARD TABLESPACE does not evict the pages from the buffer pool, then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD, not-yet-evicted pages from being mistaken for pages of the imported tablespace.

It would not be a useful fix to simply move the buffer pool scan to the IMPORT TABLESPACE step. What we can do is to actively evict those pages that could be mistaken for imported pages. In this way, when importing a small table into a big buffer pool, the import should still run relatively fast.

Import bypasses the buffer pool when reading pages for the adjustment phase. In the adjustment phase, if a page exists in the buffer pool, we could replace it with the page from the imported file. Unfortunately I did not get this to work properly, so instead we will simply evict any matching page from the buffer pool.

buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode where the requested page will be evicted if it is found. There must be no unwritten changes for the page.

buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write to file is desired, and use a separate parameter bool drop_ahi.

buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Replace buf_remove_t.

buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove.

PageConverter::m_mtr: A dummy mini-transaction buffer.

PageConverter::PageConverter(): Complete the member initialization list.

PageConverter::operator()(): Evict any 'shadow' pages from the buffer pool so that pre-existing (garbage) pages cannot be mistaken for pages that exist in the being-imported file (see the sketch after this message).

row_discard_tablespace(): Remove a bogus comment that seems to refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
8 years ago
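As a rough model of the eviction that BUF_EVICT_IF_IN_POOL and PageConverter::operator()() perform, consider the following hedged C++ sketch. The pool map, key(), and the function names are hypothetical; the real code operates on buf_pool under proper latching, and the "no unwritten changes" precondition quoted above is modelled by refusing to drop a dirty page.

    #include <cstdint>
    #include <unordered_map>

    struct Page { bool dirty; /* contents omitted */ };

    // Models the buffer pool as a map from (space_id, page_no) to a page.
    using PoolKey = uint64_t;
    static PoolKey key(uint32_t space, uint32_t page_no) {
      return (PoolKey{space} << 32) | page_no;
    }

    // Sketch of BUF_EVICT_IF_IN_POOL: drop the page only if it is found,
    // and only if it carries no unwritten changes.
    bool evict_if_in_pool(std::unordered_map<PoolKey, Page>& pool,
                          uint32_t space, uint32_t page_no) {
      auto it = pool.find(key(space, page_no));
      if (it == pool.end()) return false;  // not cached: nothing to do
      if (it->second.dirty) return false;  // must not discard unwritten changes
      pool.erase(it);                      // evict the stale 'shadow' page
      return true;
    }

    // During import, each page of the imported file evicts its shadow,
    // so only pages that actually exist in the file are visited.
    void evict_shadows(std::unordered_map<PoolKey, Page>& pool,
                       uint32_t space, uint32_t n_pages) {
      for (uint32_t i = 0; i < n_pages; i++) evict_if_in_pool(pool, space, i);
    }

This captures why importing a small table into a big buffer pool stays fast: the work is proportional to the size of the imported file, not to the size of the buffer pool.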
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in the following way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page; see the sketch after this message). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to.

For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page.

In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents as written to the file.

We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
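The full_crc32 scheme described above (one CRC-32C over everything except the trailing checksum bytes) can be demonstrated with a bitwise CRC-32C. This is a sketch, not the InnoDB implementation (which uses hardware-accelerated CRC-32C where available); the big-endian storage of the checksum and the function names are assumptions of this example.

    #include <cstdint>
    #include <vector>

    // Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
    static uint32_t crc32c(const uint8_t* buf, size_t len) {
      uint32_t crc = 0xFFFFFFFFu;
      for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
          crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
      }
      return ~crc;
    }

    // Compute the checksum over the entire page except its last 4 bytes,
    // and store it in those last 4 bytes, as described above.
    static void page_set_full_crc32(std::vector<uint8_t>& page) {
      const size_t n = page.size();
      const uint32_t c = crc32c(page.data(), n - 4);
      page[n - 4] = uint8_t(c >> 24);   // big-endian storage: an assumption
      page[n - 3] = uint8_t(c >> 16);
      page[n - 2] = uint8_t(c >> 8);
      page[n - 1] = uint8_t(c);
    }

    static bool page_verify_full_crc32(const std::vector<uint8_t>& page) {
      const size_t n = page.size();
      const uint32_t stored = (uint32_t(page[n - 4]) << 24)
                            | (uint32_t(page[n - 3]) << 16)
                            | (uint32_t(page[n - 2]) << 8)
                            |  uint32_t(page[n - 1]);
      return crc32c(page.data(), n - 4) == stored;
    }

Because every byte before the checksum is covered, there is a single unambiguous validity check per page, in contrast to the older algorithms that skipped bytes and accepted multiple variants.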
MDEV-11369 Instant ADD COLUMN for InnoDB

For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation.

ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.

Some usability limitations will be addressed in subsequent work:

MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used, or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX.

(2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.)

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC.
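Point (3) above can be modelled in a few lines: a record physically stores between n_core_fields and n_fields fields, and any missing trailing field is served from the 'default row'. The Index and Record types and field_value() below are hypothetical simplifications (real records hold raw bytes, not strings):

    #include <cassert>
    #include <string>
    #include <vector>

    struct Index {
      size_t n_core_fields;                  // fields before any instant ADD
      std::vector<std::string> default_row;  // one value per index field;
                                             // entries for 'core' fields unused
      size_t n_fields() const { return default_row.size(); }
    };

    struct Record {
      std::vector<std::string> fields;  // n_core_fields <= size <= n_fields
    };

    // Missing trailing fields are replaced with the 'default row' value,
    // exactly as point (3) describes.
    const std::string& field_value(const Index& ix, const Record& r, size_t i) {
      assert(r.fields.size() >= ix.n_core_fields &&
             r.fields.size() <= ix.n_fields());
      return i < r.fields.size() ? r.fields[i] : ix.default_row[i];
    }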
This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also, the semantics of higher-level redo log records that modify the PAGE_INSTANT field are changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record.

len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.

dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.

dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value().

dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column.

dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name().

dict_index_t::n_core_fields: The original number of fields. For secondary indexes, and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.

dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).

dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page.

dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value().

dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN.

dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back.

dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant().

dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN.

dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN.

prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation.
Dictionary objects will only be added to the cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to the cache.

dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and, if needed, set can_be_evicted.

dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point).

dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns().

pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns().

row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns().

create_index_dict(): Replaces row_merge_create_index_graph().

innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs.

btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary.

dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables.

dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE.

dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant().

row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE.

commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.)

PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This repurposes the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.

page_get_instant(), page_set_instant(): Accessors for PAGE_INSTANT.

page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION.

page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.

page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION.

rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields.

rec_offs_make_valid(): Add the parameter 'leaf'.

rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present.

dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple().

rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields.

cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records.

trx_t::in_rollback: Make the field available in non-debug builds.

trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records.
This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written.

btr_index_rec_validate(): Account for instant ADD COLUMN.

row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index.

row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case.

dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted. (See the sketch after this message.)

btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed.

row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low().

rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them.

rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields.

btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'.

row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t.

rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'.

REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record.

rec_comp_status_t: An enum of the status bit values.

rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
8 years ago
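The trimming performed by dtuple_t::trim() and btr_cur_trim() for user records amounts to dropping the longest trailing run of fields whose values equal the 'default row', never going below n_core_fields. A minimal sketch with hypothetical types (the special rollback handling of the 'default row' itself is not modelled):

    #include <string>
    #include <vector>

    // Hypothetical model: an index with n_core_fields and per-field defaults.
    struct Index {
      size_t n_core_fields;
      std::vector<std::string> default_row;  // one value per index field
    };

    // Drop the longest trailing run of fields equal to the 'default row',
    // but never trim below n_core_fields, mirroring dtuple_t::trim().
    void trim(std::vector<std::string>& tuple, const Index& ix) {
      size_t n = tuple.size();
      while (n > ix.n_core_fields && tuple[n - 1] == ix.default_row[n - 1])
        --n;
      tuple.resize(n);
    }

Trimmed fields cost nothing on disk, and a later read simply resolves them from the 'default row' again, as in point (3) of the format description.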
MDEV-15662 Instant DROP COLUMN or changing the order of columns

Allow ADD COLUMN anywhere in a table, not only adding as the last column. Allow instant DROP COLUMN and instant changing of the order of columns.

The added columns will always be added last in clustered index records. In new records, instantly dropped columns will be stored as NULL or empty when possible.

Information about dropped and reordered columns will be written in a metadata BLOB (mblob), which is stored before the first 'user' field in the hidden metadata record at the start of the clustered index. The presence of the mblob is indicated by setting the delete-mark flag in the metadata record.

The metadata BLOB stores the number of clustered index fields, followed by an array of column information for each field. For dropped columns, we store the NOT NULL flag, the fixed length, and, for variable-length columns, whether the maximum length exceeded 255 bytes. For non-dropped columns, we store the column position. (See the sketch below.)

Unlike with MDEV-11369, when a table becomes empty, it cannot be converted back to the canonical format. The reason for this is that other threads may hold cached objects such as row_prebuilt_t::ins_node that could refer to dropped or reordered index fields.

For instant DROP COLUMN and ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC, we must store the n_core_null_bytes in the root page, so that the chain of node pointer records can be followed in order to reach the leftmost leaf page where the metadata record is located. If the mblob is present, we will zero-initialize the strings "infimum" and "supremum" in the root page, and use the last byte of "supremum" for storing the number of null bytes (which are allocated but useless on node pointer pages). This is necessary for btr_cur_instant_init_metadata() to be able to navigate to the mblob.

If the PRIMARY KEY contains any variable-length column and some nullable columns were instantly dropped, the dict_index_t::n_nullable in the data dictionary could be smaller than it actually is in the non-leaf pages. Because of this, the non-leaf pages could use more bytes for the null flags than the data dictionary expects, and we could be reading the lengths of the variable-length columns from the wrong offset, and thus reading the child page number from the wrong place. This is the result of two design mistakes that involve unnecessary storage of data: First, it is nonsense to store any data fields for the leftmost node pointer records, because the comparisons would be resolved by the MIN_REC_FLAG alone. Second, there cannot be any null fields in the clustered index node pointer fields, but we nevertheless reserve space for all the null flags.

Limitations (future work):

MDEV-17459 Allow instant ALTER TABLE even if FULLTEXT INDEX exists
MDEV-17468 Avoid table rebuild on operations on generated columns
MDEV-17494 Refuse ALGORITHM=INSTANT when the row size is too large

btr_page_reorganize_low(): Preserve any metadata in the root page. Call lock_move_reorganize_page() only after restoring the "infimum" and "supremum" records, to avoid a memcmp() assertion failure.

dict_col_t::DROPPED: Magic value for dict_col_t::ind.

dict_col_t::clear_instant(): Renamed from dict_col_t::remove_instant(). Do not assert that the column was instantly added, because we sometimes call this unconditionally for all columns. Convert an instantly added column to a "core column". The old name remove_instant() could be mistaken to refer to "instant DROP COLUMN".

dict_col_t::is_added(): Renamed from dict_col_t::is_instant().
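As a rough illustration of the information content listed above for the metadata BLOB, here is a hypothetical serializer. The actual mblob byte layout in InnoDB differs; only the stored facts are taken from the description above (field count, then per field either a dropped-column descriptor with the NOT NULL flag, fixed length, and >255-byte flag, or the surviving column's position). FieldInfo and serialise_columns() are names invented for this sketch.

    #include <cstdint>
    #include <vector>

    // Hypothetical per-field descriptor matching the information listed above.
    struct FieldInfo {
      bool dropped;
      // For dropped columns:
      bool not_null;
      uint16_t fixed_len;   // 0 for variable-length columns
      bool len_gt_255;      // variable-length with maximum length > 255 bytes
      // For surviving columns:
      uint16_t col_pos;
    };

    // Encode: number of fields, then one descriptor per field. The real
    // mblob format is more compact; this only shows what is stored.
    std::vector<uint8_t> serialise_columns(const std::vector<FieldInfo>& f) {
      std::vector<uint8_t> b;
      b.push_back(uint8_t(f.size() >> 8));
      b.push_back(uint8_t(f.size()));
      for (const FieldInfo& i : f) {
        if (i.dropped) {
          uint8_t flags = 0x80;                  // marker: dropped column
          if (i.not_null)   flags |= 0x01;
          if (i.len_gt_255) flags |= 0x02;
          b.push_back(flags);
          b.push_back(uint8_t(i.fixed_len >> 8));
          b.push_back(uint8_t(i.fixed_len));
        } else {
          b.push_back(0);                        // marker: surviving column
          b.push_back(uint8_t(i.col_pos >> 8));
          b.push_back(uint8_t(i.col_pos));
        }
      }
      return b;
    }

Deserialization (dict_table_t::deserialise_columns() in the change list below) walks the same array in reverse to reconstruct dict_index_t::fields, which is why only this self-describing summary, and not the full column definitions, needs to live in the BLOB.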
dtype_t::metadata_blob_init(): Initialize the mblob data type.

dtuple_t::is_metadata(), dtuple_t::is_alter_metadata(), upd_t::is_metadata(), upd_t::is_alter_metadata(): Check if the info_bits refer to a metadata record.

dict_table_t::instant: Metadata about dropped or reordered columns.

dict_table_t::prepare_instant(): Prepare ha_innobase_inplace_ctx::instant_table for instant ALTER TABLE. innobase_instant_try() will pass this to dict_table_t::instant_column(). On rollback, dict_table_t::rollback_instant() will be called.

dict_table_t::instant_column(): Renamed from instant_add_column(). Add the parameter col_map so that columns can be reordered. Copy and adjust v_cols[] as well.

dict_table_t::find(): Find an old column based on a new column number.

dict_table_t::serialise_columns(), dict_table_t::deserialise_columns(): Convert the mblob.

dict_index_t::instant_metadata(): Create the metadata record for instant ALTER TABLE. Invoke dict_table_t::serialise_columns().

dict_index_t::reconstruct_fields(): Invoked by dict_table_t::deserialise_columns().

dict_index_t::clear_instant_alter(): Move the fields for the dropped columns to the end, and sort the surviving index fields in ascending order of column position.

ha_innobase::check_if_supported_inplace_alter(): Do not allow adding a FTS_DOC_ID column if a hidden FTS_DOC_ID column exists due to FULLTEXT INDEX. (This always required ALGORITHM=COPY.)

instant_alter_column_possible(): Add a parameter for the InnoDB table, to check for additional conditions, such as the maximum number of index fields.

ha_innobase_inplace_ctx::first_alter_pos: The first column whose position is affected by instant ADD, DROP, or changing the order of columns.

innobase_build_col_map(): Skip added virtual columns.

prepare_inplace_add_virtual(): Correctly compute num_to_add_vcol. Remove some unnecessary code. Note that the call to innodb_base_col_setup() should be executed later.

commit_try_norebuild(): If ctx->is_instant(), let the virtual columns be added or dropped by innobase_instant_try().

innobase_instant_try(): Fill in a zero default value for the hidden column FTS_DOC_ID (to reduce the work needed in MDEV-17459). If any columns were dropped or reordered (or added not last), delete any SYS_COLUMNS records for the following columns, and insert SYS_COLUMNS records for all subsequent stored columns as well as for all virtual columns. If any virtual column is dropped, rewrite all virtual column metadata. Use a shortcut only for adding virtual columns. This is because innobase_drop_virtual_try() assumes that the dropped virtual columns still exist in ctx->old_table.

innodb_update_cols(): Renamed from innodb_update_n_cols().

innobase_add_one_virtual(), innobase_insert_sys_virtual(): Change the return type to bool, and invoke my_error() when detecting an error.

innodb_insert_sys_columns(): Insert a record into SYS_COLUMNS. Refactored from innobase_add_one_virtual() and innobase_instant_add_col().

innobase_instant_add_col(): Replace the parameter dfield with type.

innobase_instant_drop_cols(): Drop matching columns from SYS_COLUMNS and all columns from SYS_VIRTUAL.

innobase_add_virtual_try(), innobase_drop_virtual_try(): Let the caller invoke innodb_update_cols().

innobase_rename_column_try(): Skip dropped columns.

commit_cache_norebuild(): Update table->fts->doc_col.

dict_mem_table_col_rename_low(): Skip dropped columns.

trx_undo_rec_get_partial_row(): Skip dropped columns.

trx_undo_update_rec_get_update(): Handle the metadata BLOB correctly.
trx_undo_page_report_modify(): Avoid out-of-bounds access to record fields. Log metadata records consistently. Apparently, the first fields of a clustered index may be updated in an update_undo vector when the index is ID_IND of SYS_FOREIGN, as part of renaming the table during ALTER TABLE. Normally, updates of the PRIMARY KEY should be logged as a delete-mark and an insert.

row_undo_mod_parse_undo_rec(), row_purge_parse_undo_rec(): Use trx_undo_metadata.

row_undo_mod_clust_low(): On metadata rollback, roll back the root page too.

row_undo_mod_clust(): Relax an assertion. The delete-mark flag was repurposed for ALTER TABLE metadata records.

row_rec_to_index_entry_impl(): Add the template parameter mblob and the optional parameter info_bits for specifying the desired new info bits. For the metadata tuple, allow conversion between the original format (ADD COLUMN only) and the generic format (with hidden BLOB). Add the optional parameter "pad" to determine whether the tuple should be padded to the index fields (on ALTER TABLE it should), or whether it should remain at its original size (on rollback).

row_build_index_entry_low(): Clean up the code, removing redundant variables and conditions. For instantly dropped columns, generate a dummy value that is NULL, the empty string, or a fixed length of NUL bytes, depending on the type of the dropped column. (See the sketch after this message.)

row_upd_clust_rec_by_insert_inherit_func(): On the update of the PRIMARY KEY of a record that contained a dropped column whose value was stored externally, we will be inserting a dummy NULL or empty-string value into the field of the dropped column. The externally stored column would eventually be dropped when purge removes the delete-marked record for the old PRIMARY KEY value.

btr_index_rec_validate(): Recognize the metadata record.

btr_discard_only_page_on_level(): Preserve the generic instant ALTER TABLE metadata.

btr_set_instant(): Replaces page_set_instant(). This sets a clustered index root page to the appropriate format, or upgrades from the MDEV-11369 instant ADD COLUMN format to the generic ALTER TABLE format.

btr_cur_instant_init_low(): Read and validate the metadata BLOB page before reconstructing the dictionary information based on it.

btr_cur_instant_init_metadata(): Do not read any lengths from the metadata record header before reading the BLOB. At this point, we would not actually know how many nullable fields the metadata record contains.

btr_cur_instant_root_init(): Initialize n_core_null_bytes in one of two possible ways.

btr_cur_trim(): Handle the mblob record.

row_metadata_to_tuple(): Convert a metadata record to a data tuple, based on the new info_bits of the metadata record.

btr_cur_pessimistic_update(): Invoke row_metadata_to_tuple() if needed. Invoke dtuple_convert_big_rec() for metadata records if the record is too large, or if the mblob is not yet marked as externally stored.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): When the last user record is deleted, do not delete the generic instant ALTER TABLE metadata record. Only delete MDEV-11369 instant ADD COLUMN metadata records.

btr_cur_optimistic_insert(): Avoid unnecessary computation of rec_size.

btr_pcur_store_position(): Allow a logically empty page to contain a metadata record for generic ALTER TABLE.

REC_INFO_DEFAULT_ROW_ADD: Renamed from REC_INFO_DEFAULT_ROW. This is for the old instant ADD COLUMN (MDEV-11369) only.

REC_INFO_DEFAULT_ROW_ALTER: The more generic metadata record, with additional information for dropped or reordered columns.
rec_info_bits_valid(): Remove. The only case when this would fail is when the record is the generic ALTER TABLE metadata record.

rec_is_alter_metadata(): Check if a record is the metadata record for instant ALTER TABLE (other than ADD COLUMN). NOTE: This function must not be invoked on node pointer records, because the delete-mark flag in those records may be set (it is garbage), and then a debug assertion could fail because index->is_instant() does not necessarily hold.

rec_is_add_metadata(): Check if a record is the MDEV-11369 ADD COLUMN metadata record (not the more generic instant ALTER TABLE).

rec_get_converted_size_comp_prefix_low(): Assume that the metadata field will be stored externally. In dtuple_convert_big_rec() during the rec_get_converted_size() call, it would not be there yet.

rec_get_converted_size_comp(): Replace status,fields,n_fields with tuple.

rec_init_offsets_comp_ordinary(), rec_get_converted_size_comp_prefix_low(), rec_convert_dtuple_to_rec_comp(): Add template<bool mblob = false>. With mblob=true, process a record with a metadata BLOB.

rec_copy_prefix_to_buf(): Assert that no fields beyond the key and system columns are being copied. Exclude the metadata BLOB field.

rec_convert_dtuple_to_metadata_comp(): Convert an alter metadata tuple into a record.

row_upd_index_replace_metadata(): Apply an update vector to an alter_metadata tuple.

row_log_allocate(): Replace dict_index_t::is_instant() with a more appropriate condition that ignores dict_table_t::instant. Only a table on which the MDEV-11369 ADD COLUMN was performed can "lose its instantness" when it becomes empty. After instant DROP COLUMN or reordering columns, we cannot simply convert the table to the canonical format, because the data dictionary cache and all possibly existing references to it from other client connection threads would have to be adjusted.

row_quiesce_write_index_fields(): Do not crash when the table contains an instantly dropped column.

Thanks to Thirunarayanan Balathandayuthapani for discussing the design and implementing an initial prototype of this. Thanks to Matthias Leich for testing.
7 years ago
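The dummy values that row_build_index_entry_low() generates for instantly dropped columns (mentioned above) follow a simple rule, sketched here with a hypothetical ColType descriptor and dummy_value_for_dropped() helper; the real code operates on dfield_t and dtype_t, not std::string:

    #include <cstddef>
    #include <string>
    #include <utility>

    // Hypothetical column-type descriptor for this sketch only.
    struct ColType {
      bool nullable;
      size_t fixed_len;   // 0 means variable-length
    };

    // Mirrors the rule stated for row_build_index_entry_low(): an instantly
    // dropped column contributes NULL if it was nullable, the empty string
    // if it was variable-length NOT NULL, or fixed_len NUL bytes otherwise.
    // The bool models "has a value" (false = SQL NULL).
    std::pair<bool, std::string> dummy_value_for_dropped(const ColType& t) {
      if (t.nullable)       return {false, {}};               // SQL NULL
      if (t.fixed_len == 0) return {true, std::string()};     // empty string
      return {true, std::string(t.fixed_len, '\0')};          // NUL bytes
    }

The point of the rule is that new records never need to carry real data for a dropped column, while fixed-length NOT NULL columns still occupy their declared width so that existing offset arithmetic keeps working.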
MDEV-11369 Instant ADD COLUMN for InnoDB For InnoDB tables, adding, dropping and reordering columns has required a rebuild of the table and all its indexes. Since MySQL 5.6 (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing concurrent modification of the tables. This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously, with only minor changes performed to the table structure. The counter innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS is incremented whenever a table rebuild operation is converted into an instant ADD COLUMN operation. ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN. Some usability limitations will be addressed in subsequent work: MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY and ALGORITHM=INSTANT MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE The format of the clustered index (PRIMARY KEY) is changed as follows: (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT, and a new field PAGE_INSTANT will contain the original number of fields in the clustered index ('core' fields). If instant ADD COLUMN has not been used or the table becomes empty, or the very first instant ADD COLUMN operation is rolled back, the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset to 0 and FIL_PAGE_INDEX. (2) A special 'default row' record is inserted into the leftmost leaf, between the page infimum and the first user record. This record is distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the same format as records that contain values for the instantly added columns. This 'default row' always has the same number of fields as the clustered index according to the table definition. The values of 'core' fields are to be ignored. For other fields, the 'default row' will contain the default values as they were during the ALTER TABLE statement. (If the column default values are changed later, those values will only be stored in the .frm file. The 'default row' will contain the original evaluated values, which must be the same for every row.) The 'default row' must be completely hidden from higher-level access routines. Assertions have been added to ensure that no 'default row' is ever present in the adaptive hash index or in locked records. The 'default row' is never delete-marked. (3) In clustered index leaf page records, the number of fields must reside between the number of 'core' fields (dict_index_t::n_core_fields introduced in this work) and dict_index_t::n_fields. If the number of fields is less than dict_index_t::n_fields, the missing fields are replaced with the column value of the 'default row'. Note: The number of fields in the record may shrink if some of the last instantly added columns are updated to the value that is in the 'default row'. The function btr_cur_trim() implements this 'compression' on update and rollback; dtuple::trim() implements it on insert. (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new status value REC_STATUS_COLUMNS_ADDED will indicate the presence of a new record header that will encode n_fields-n_core_fields-1 in 1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header always explicitly encodes the number of fields.) We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for covering the insert of the 'default row' record when instant ADD COLUMN is used for the first time. Subsequent instant ADD COLUMN can use TRX_UNDO_UPD_EXIST_REC. 
This is joint work with Vin Chen (陈福荣) from Tencent. The design that was discussed in April 2017 would not have allowed import or export of data files, because instead of the 'default row' it would have introduced a data dictionary table. The test rpl.rpl_alter_instant is exactly as contributed in pull request #408. The test innodb.instant_alter is based on a contributed test. The redo log record format changes for ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT are as contributed. (With this change present, crash recovery from MariaDB 10.3.1 will fail in spectacular ways!) Also the semantics of higher-level redo log records that modify the PAGE_INSTANT field are changed. The redo log format version identifier was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1. Everything else has been rewritten by me. Thanks to Elena Stepanova, the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the PAGE_FREE list after deleting or shortening the 'default row' record, by calling either btr_page_empty() or btr_page_reorganize(). We must know the size of each entry in the PAGE_FREE list. If rollback left a freed copy of the 'default row' in the PAGE_FREE list, we would be unable to determine its size (if it is in ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC) because it would contain more fields than the rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly added column that is not present in the clustered index record.
len_is_stored(): Check if a length is an actual length. There are two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.
dict_col_t::def_val: The 'default row' value of the column. If the column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.
dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(), instant_value().
dict_col_t::remove_instant(): Remove the 'instant ADD' status of a column.
dict_col_t::name(const dict_table_t& table): Replaces dict_table_get_col_name().
dict_index_t::n_core_fields: The original number of fields. For secondary indexes and if instant ADD COLUMN has not been used, this will be equal to dict_index_t::n_fields.
dict_index_t::n_core_null_bytes: Number of bytes needed to represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).
dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that n_core_null_bytes was not initialized yet from the clustered index root page.
dict_index_t: Add the accessors is_instant(), is_clust(), get_n_nullable(), instant_field_value().
dict_index_t::instant_add_field(): Adjust clustered index metadata for instant ADD COLUMN.
dict_index_t::remove_instant(): Remove the 'instant ADD' status of a clustered index when the table becomes empty, or the very first instant ADD COLUMN operation is rolled back.
dict_table_t: Add the accessors is_instant(), is_temporary(), supports_instant().
dict_table_t::instant_add_column(): Adjust metadata for instant ADD COLUMN.
dict_table_t::rollback_instant(): Adjust metadata on the rollback of instant ADD COLUMN.
prepare_inplace_alter_table_dict(): First create the ctx->new_table, and only then decide if the table really needs to be rebuilt. We must split the creation of table or index metadata from the creation of the dictionary table records and the creation of the data. In this way, we can transform a table-rebuilding operation into an instant ADD COLUMN operation. Dictionary objects will only be added to cache when table rebuilding or index creation is needed. The ctx->instant_table will never be added to cache.
dict_table_t::add_to_cache(): Modified and renamed from dict_table_add_to_cache(). Do not modify the table metadata. Let the callers invoke dict_table_add_system_columns() and, if needed, set can_be_evicted.
dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the system columns (which will now exist in the dict_table_t object already at this point).
dict_create_table_step(): Expect the callers to invoke dict_table_add_system_columns().
pars_create_table(): Before creating the table creation execution graph, invoke dict_table_add_system_columns().
row_create_table_for_mysql(): Expect all callers to invoke dict_table_add_system_columns().
create_index_dict(): Replaces row_merge_create_index_graph().
innodb_update_n_cols(): Renamed from innobase_update_n_virtual(). Call my_error() if an error occurs.
btr_cur_instant_init(), btr_cur_instant_init_low(), btr_cur_instant_root_init(): Load additional metadata from the clustered index and set dict_index_t::n_core_null_bytes. This is invoked when table metadata is first loaded into the data dictionary.
dict_boot(): Initialize n_core_null_bytes for the four hard-coded dictionary tables.
dict_create_index_step(): Initialize n_core_null_bytes. This is executed as part of CREATE TABLE.
dict_index_build_internal_clust(): Initialize n_core_null_bytes to NO_CORE_NULL_BYTES if table->supports_instant().
row_create_index_for_mysql(): Initialize n_core_null_bytes for CREATE TEMPORARY TABLE.
commit_cache_norebuild(): Call the code to rename or enlarge columns in the cache only if instant ADD COLUMN is not being used. (Instant ADD COLUMN would copy all column metadata from instant_table to old_table, including the names and lengths.)
PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields. This is repurposing the 16-bit field PAGE_DIRECTION, of which only the least significant 3 bits were used. The original byte containing PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.
page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.
page_ptr_get_direction(), page_get_direction(), page_ptr_set_direction(): Accessors for PAGE_DIRECTION.
page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.
page_direction_increment(): Increment PAGE_N_DIRECTION and set PAGE_DIRECTION.
rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes, and assume that heap_no is always set. Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records, even if the record contains fewer fields.
rec_offs_make_valid(): Add the parameter 'leaf'.
rec_copy_prefix_to_dtuple(): Assert that the tuple is only built on the core fields. Instant ADD COLUMN only applies to the clustered index, and we should never build a search key that has more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR. All these columns are always present.
dict_index_build_data_tuple(): Remove assertions that would be duplicated in rec_copy_prefix_to_dtuple().
rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose number of fields is between n_core_fields and n_fields.
cmp_rec_rec_with_match(): Implement the comparison between two MIN_REC_FLAG records.
trx_t::in_rollback: Make the field available in non-debug builds.
trx_start_for_ddl_low(): Remove dangerous error-tolerance. A dictionary transaction must be flagged as such before it has generated any undo log records. This is because trx_undo_assign_undo() will mark the transaction as a dictionary transaction in the undo log header right before the very first undo log record is being written.
btr_index_rec_validate(): Account for instant ADD COLUMN.
row_undo_ins_remove_clust_rec(): On the rollback of an insert into SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the last column from the table and the clustered index.
row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(), trx_undo_update_rec_get_update(): Handle the 'default row' as a special case.
dtuple_t::trim(index): Omit a redundant suffix of an index tuple right before insert or update. After instant ADD COLUMN, if the last fields of a clustered index tuple match the 'default row', there is no need to store them. While trimming the entry, we must hold a page latch, so that the table cannot be emptied and the 'default row' be deleted.
btr_cur_optimistic_update(), btr_cur_pessimistic_update(), row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low(): Invoke dtuple_t::trim() if needed.
row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling row_ins_clust_index_entry_low().
rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number of fields to be between n_core_fields and n_fields. Do not support infimum,supremum. They are never supposed to be stored in dtuple_t, because page creation nowadays uses a lower-level method for initializing them.
rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the number of fields.
btr_cur_trim(): In an update, trim the index entry as needed. For the 'default row', handle rollback specially. For user records, omit fields that match the 'default row'.
btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Skip locking and adaptive hash index for the 'default row'.
row_log_table_apply_convert_mrec(): Replace 'default row' values if needed. In the temporary file that is applied by row_log_table_apply(), we must identify whether the records contain the extra header for instantly added columns. For now, we will allocate an additional byte for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table has been subject to instant ADD COLUMN. The ROW_T_DELETE records are fine, as they will be converted and will only contain 'core' columns (PRIMARY KEY and some system columns) that are converted from dtuple_t.
rec_get_converted_size_temp(), rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): Add the parameter 'status'.
REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED: An info_bits constant for distinguishing the 'default row' record.
rec_comp_status_t: An enum of the status bit values.
rec_leaf_format: An enum that replaces the bool parameter of rec_init_offsets_comp_ordinary().
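The read-side rule from item (3) above can be summarized in a few lines. The following is a hedged sketch with invented names (field, materialize_row), not InnoDB's actual record-reading code: any trailing fields missing from a stored record are simply taken from the 'default row'.

  // Hedged sketch of item (3): a clustered-index leaf record may store
  // anywhere between n_core_fields and n_fields fields; whatever is
  // missing is filled in from the 'default row' captured at ALTER time.
  #include <cstdint>
  #include <vector>

  struct field { const uint8_t* data; uint32_t len; };

  std::vector<field> materialize_row(const std::vector<field>& stored,   // n_core_fields..n_fields entries
                                     const std::vector<field>& def_row,  // always n_fields entries
                                     size_t n_fields)
  {
    std::vector<field> out(stored);
    for (size_t i = stored.size(); i < n_fields; i++)
      out.push_back(def_row[i]);  // instantly added column, value not stored
    return out;
  }

This is also why a record may legitimately shrink on update: trailing fields equal to the 'default row' need not be stored at all.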
8 years ago
MDEV-21907: InnoDB: Enable -Wconversion on clang and GCC

The -Wconversion in GCC seems to be stricter than in clang. GCC at least since version 4.4.7 issues truncation warnings for assignments to bitfields, while clang 10 appears to only issue warnings when the sizes in bytes rounded to the nearest integer powers of 2 are different.

Before GCC 10.0.0, -Wconversion required more casts and would not allow some operations, such as x<<=1 or x+=1 on a data type that is narrower than int.

GCC 5 (but not GCC 4, GCC 6, or any later version) complains about x|=y even when x and y are compatible types that are narrower than int. Hence, we must rewrite some x|=y as x=static_cast<byte>(x|y) or similar, or we must disable -Wconversion.

In GCC 6 and later, the warning for assigning a wider value to bitfields that are narrower than 8, 16, or 32 bits can be suppressed by applying a bitwise & with the exact bitmask of the bitfield. For older GCC, we must disable -Wconversion for GCC 4 or 5 in such cases.

The bitwise negation operator appears to promote short integers to a wider type, and hence we must add explicit truncation casts around them. Microsoft Visual C does not allow a static_cast to truncate a constant, such as static_cast<byte>(~1) truncating int. Hence, we will use the constructor-style cast byte(~1) for such cases.

This has been tested at least with GCC 4.8.5, 5.4.0, 7.4.0, 9.2.1, 10.0.0, clang 9.0.1, 10.0.0, and MSVC 14.22.27905 (Microsoft Visual Studio 2019) on 64-bit and 32-bit targets (IA-32, AMD64, POWER 8, POWER 9, ARMv8).
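For illustration, here is a small self-contained sketch of the rewrite patterns the message describes. The identifiers (pos_t, boffset, examples) are invented; only the casting idioms are the point.

  // Illustrative sketch of the -Wconversion workarounds described above.
  typedef unsigned char byte;

  struct pos_t { unsigned boffset : 3; };  // a bitfield narrower than 8 bits

  byte examples(byte x, byte y, pos_t& p, unsigned v)
  {
    x = static_cast<byte>(x | y);  // instead of x |= y, which GCC 5 rejects
    x = static_cast<byte>(~x);     // ~ promotes to int; truncate explicitly
    p.boffset = v & 7;             // exact bitmask silences GCC >= 6
    byte mask = byte(~1);          // constructor-style cast for MSVC
    return static_cast<byte>(x & mask);
  }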
6 years ago
MDEV-12873 InnoDB SYS_TABLES.TYPE incompatibility for PAGE_COMPRESSED=YES in MariaDB 10.2.2 to 10.2.6

Remove the SHARED_SPACE flag that was erroneously introduced in MariaDB 10.2.2, and shift the SYS_TABLES.TYPE flags back to where they were before MariaDB 10.2.2. While doing this, ensure that tables created with affected MariaDB versions can be loaded, and also ensure that tables created with MySQL 5.7 using the TABLESPACE attribute cannot be loaded.

MariaDB 10.2.2 picked the SHARED_SPACE flag from MySQL 5.7, shifting the MariaDB 10.1 flags PAGE_COMPRESSION, PAGE_COMPRESSION_LEVEL, ATOMIC_WRITES by one bit. The SHARED_SPACE flag would always be written as 0 by MariaDB, because MariaDB does not support CREATE TABLESPACE or CREATE TABLE...TABLESPACE for InnoDB. So, instead of the bits AALLLLCxxxxxxx we would have AALLLLC0xxxxxxx if the table was created with MariaDB 10.2.2 to 10.2.6. (AA=ATOMIC_WRITES, LLLL=PAGE_COMPRESSION_LEVEL, C=PAGE_COMPRESSED, xxxxxxx=7 bits that were not moved.)

PAGE_COMPRESSED=NO implies LLLLC=00000. That is not a problem. If someone created a table in MariaDB 10.2.2 or 10.2.3 with the attribute ATOMIC_WRITES=OFF (value 2; AA=10) and without PAGE_COMPRESSED=YES or PAGE_COMPRESSION_LEVEL, the table should be rejected. We ignore this problem, because it should be unlikely for anyone to specify ATOMIC_WRITES=OFF, and because 10.2.2 and 10.2.3 were not mature releases. The value ATOMIC_WRITES=ON (1) would be interpreted as ATOMIC_WRITES=OFF, but starting with MariaDB 10.2.4 the ATOMIC_WRITES attribute is ignored.

PAGE_COMPRESSED=YES implies that PAGE_COMPRESSION_LEVEL be between 1 and 9 and that ROW_FORMAT be COMPACT or DYNAMIC. Thus, the affected wrong bit pattern in SYS_TABLES.TYPE is of the form AALLLL10DB00001, where D signals the presence of a DATA DIRECTORY attribute and B is 1 for ROW_FORMAT=DYNAMIC and 0 for ROW_FORMAT=COMPACT. We must interpret this bit pattern as AALLLL1DB00001, discarding the extraneous 0 bit; a sketch of this adjustment follows the change list below.

dict_sys_tables_rec_read(): Adjust the affected bit pattern when reading the SYS_TABLES.TYPE column. In case of invalid flags, report both SYS_TABLES.TYPE (after possible adjustment) and SYS_TABLES.MIX_LEN.
dict_load_table_one(): Replace an unreachable condition on !dict_tf2_is_valid() with a debug assertion. The flags will already have been validated by dict_sys_tables_rec_read(); if that validation fails, dict_load_table_low() will have failed.
fil_ibd_create(): Shorten an error message about a file pre-existing.
Datafile::validate_to_dd(): Clarify an error message about a tablespace flags mismatch.
ha_innobase::open(): Remove an unnecessary warning message.
dict_tf_is_valid(): Simplify and tighten the logic. Validate the values of PAGE_COMPRESSION. Remove error log output; let the callers handle that.
DICT_TF_BITS: Remove ATOMIC_WRITES, PAGE_ENCRYPTION, PAGE_ENCRYPTION_KEY. The ATOMIC_WRITES is ignored once the SYS_TABLES.TYPE has been validated; there is no need to store it in dict_table_t::flags. The PAGE_ENCRYPTION and PAGE_ENCRYPTION_KEY are unused since MariaDB 10.1.4 (the GA release was 10.1.8).
DICT_TF_BIT_MASK: Remove (unused).
FSP_FLAGS_MEM_ATOMIC_WRITES: Remove (the flags are never read).
row_import_read_v1(): Display an error if dict_tf_is_valid() fails.
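A minimal sketch of the bit adjustment described above, assuming the seven unmoved bits are the low bits 0..6 and that PAGE_COMPRESSED and everything above it was shifted up by one. The function name is invented; in the actual fix, the adjustment lives inside dict_sys_tables_rec_read() and is applied only after checking that the flags really match the affected pattern, since applying it unconditionally would corrupt valid flags.

  // Hedged sketch: drop the extraneous SHARED_SPACE 0 bit that MariaDB
  // 10.2.2..10.2.6 wrote into SYS_TABLES.TYPE above the 7 unmoved bits:
  //   AALLLL1 0 DB00001  ->  AALLLL1 DB00001
  #include <cstdint>

  constexpr uint32_t UNMOVED = 0x7f;  // xxxxxxx: bits 0..6 never moved

  inline uint32_t undo_shared_space_shift(uint32_t type)
  {
    uint32_t low  = type & UNMOVED;          // keep bits 0..6 in place
    uint32_t high = (type >> 1) & ~UNMOVED;  // shift bits 8.. down over bit 7
    return high | low;
  }

For example, a 10.2.2 table with ROW_FORMAT=DYNAMIC, PAGE_COMPRESSED=YES and PAGE_COMPRESSION_LEVEL=9 would carry type 4897 (0b1001100100001 with the inserted zero); the sketch maps it back to the 10.1 value 2465 (0b100110100001).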
8 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN

ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of the operation. So, cases where the source table has some metadata and the destination table doesn't have it now work fine. DISCARD already removes instant metadata, so importing a normal table into an instant table worked fine before this patch.

decrypt_decompress(): Decrypt and decompress a page if needed.
handle_instant_metadata(): This should be the first thing to read the source table. Basically, it applies instant metadata to the destination dict_table_t object. It is also the first code to read the FSP flags, so all possible checks of them were moved into this function.
PageConverter::update_index_page(): No longer reads instant metadata. This logic was moved into handle_instant_metadata().
row_import::match_flags(): This is the first part of row_import::match_schema(). As a separate function it is used by handle_instant_metadata().
fil_space_t::is_full_crc32_compressed(): Added a convenience function.
ha_innobase::discard_or_import_tablespace(): Do not reload the table definition to read instant metadata, because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0.
ANONYMOUS_VAR: A handy thing to use along with make_scope_exit(); see the sketch below.

full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happen. Thus, SHOW CREATE TABLE shows a slightly older table definition.
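The make_scope_exit()/ANONYMOUS_VAR pairing mentioned above is a scope-guard idiom: register a cleanup lambda that runs whenever control leaves the scope, on success and error paths alike. A generic re-sketch follows (requires C++17 for guaranteed copy elision); it is illustrative and not MariaDB's actual scope.h, and the usage comment names a hypothetical cleanup.

  // Hedged sketch of a scope-exit guard in the spirit of make_scope_exit();
  // ANONYMOUS_VAR would generate a unique variable name so the guard can be
  // declared without inventing one at each call site.
  #include <utility>

  template <typename Callback>
  class scope_exit
  {
  public:
    explicit scope_exit(Callback f) : f_(std::move(f)) {}
    ~scope_exit() { f_(); }  // cleanup runs on every exit path
    scope_exit(const scope_exit&) = delete;
    scope_exit& operator=(const scope_exit&) = delete;
  private:
    Callback f_;
  };

  template <typename Callback>
  scope_exit<Callback> make_scope_exit(Callback f)
  {
    return scope_exit<Callback>(std::move(f));
  }

  // Usage (hypothetical cleanup): free a buffer even on early return.
  // auto guard = make_scope_exit([&]() { release_page_buffer(); });

This keeps error handling in a long function like handle_instant_metadata() linear: each acquired resource is paired with one guard instead of duplicated cleanup at every return.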
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of operation. So, cases when source table has some metadata and destination table doesn't have it now works fine. DISCARD already removes instant metadata so importing normal table into instant table worked fine before this patch. decrypt_decompress(): decrypts and decompresses page if needed handle_instant_metadata(): this should be the first thing to read source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags so all possible checks of it were moved to this function. PageConverter::update_index_page(): it doesn't now read instant metadata. This logic were moved into handle_instant_metadata() row_import::match_flags(): this is a first part row_import::match_schema(). As a separate function it's used by handle_instant_metadata(). fil_space_t::is_full_crc32_compressed(): added convenient function ha_innobase::discard_or_import_tablespace(): do not reload table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0 ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit() full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happens. Thus, SHOW CREATE TABLE shows a little bit older table definition.
4 years ago
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE and a missing or corrupted file.

There are a few functional differences; most notably: (1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary. (2) Some error messages will report file names instead of numeric IDs.

There still are many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.

mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.

fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.

fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.

dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.

dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.

dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.

fil_ibd_open(): Return the tablespace.

fil_space_t::set_imported(): Replaces fil_space_set_imported().

truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.

row_truncate_prepare(): Merge into its only caller.

row_drop_table_from_cache(): Assert that the table is persistent.

dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.

row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
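The core of the change can be pictured with a small sketch; the two fields are named as in the commit message, but the surrounding types are simplified placeholders rather than the real InnoDB definitions:

    struct fil_space_t;  // tablespace metadata owned by the fil_system cache

    struct dict_table_t_sketch
    {
      // Direct pointer into fil_system; the object lives from startup
      // to shutdown unless DDL removes it, so no per-access lookup by
      // numeric ID is needed.
      fil_space_t* space = nullptr;
      // Numeric ID kept for the corner cases where space == nullptr,
      // e.g. ALTER TABLE...DISCARD TABLESPACE or a missing file.
      unsigned space_id = 0;
    };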
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors. Remove duplicated code. Use the "safe" instead of the "fast" variants of the decompression routines.

fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.

buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access.

buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.

buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().

buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.

buf_page_io_complete(), buf_dblwr_process(): Check more failures.

fil_space_encrypt(): Simplify the debug checks.

fil_space_t::printed_compression_failure: Remove.

fil_get_compression_alg_name(): Remove.

fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables.

fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).

AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator()() was basically redundant and dead code since the previous refactoring.
7 years ago
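The shift from fil_decompress_page() to fil_page_decompress() is essentially a shift from a fire-and-forget interface to one whose caller can detect corruption. A hedged sketch of that interface shape, using zlib's bounds-checked API as the stand-in "safe" variant (the real function dispatches on the algorithm recorded in the page header and supports several codecs):

    #include <cstddef>
    #include <zlib.h>

    // Return the decompressed length, or 0 on a corrupted stream, so
    // that the caller can mark the page as corrupted instead of
    // silently continuing with garbage.
    static size_t page_decompress_sketch(const unsigned char* src,
                                         size_t src_len,
                                         unsigned char* dst,
                                         size_t dst_size)
    {
      uLongf out_len = uLongf(dst_size);
      // uncompress() is the "safe" variant here: it never writes past
      // dst_size and returns Z_DATA_ERROR on malformed input.
      if (uncompress(dst, &out_len, src, uLong(src_len)) != Z_OK)
        return 0;
      return size_t(out_len);
    }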
MDEV-18644: Support full_crc32 for page_compressed

This is a follow-up task to MDEV-12026, which introduced innodb_checksum_algorithm=full_crc32 and a simpler page format. MDEV-12026 did not enable full_crc32 for page_compressed tables, which we will be doing now. This is joint work with Thirunarayanan Balathandayuthapani.

For innodb_checksum_algorithm=full_crc32 we change the page_compressed format as follows:

FIL_PAGE_TYPE: The most significant bit will be set to indicate the page_compressed format. The least significant bits will contain the compressed page size, rounded up to a multiple of 256 bytes.

The checksum will be stored in the last 4 bytes of the page (whether it is the full page or a page_compressed page whose size is determined by FIL_PAGE_TYPE), covering all preceding bytes of the page. If encryption is used, then the page will be encrypted between compression and computing the checksum.

For page_compressed, FIL_PAGE_LSN will not be repeated at the end of the page.

FSP_SPACE_FLAGS (already implemented as part of MDEV-12026): We will store the innodb_compression_algorithm that may be used to compress pages. Previously, the choice of algorithm was written to each compressed data page separately, and one would be unable to know in advance which compression algorithm(s) are used.

fil_space_t::full_crc32_page_compressed_len(): Determine whether the page_compressed algorithm of the tablespace needs to know the exact length of the compressed data. If yes, we will reserve and write an extra byte for this right before the checksum.

buf_page_is_compressed(): Determine whether a page uses page_compressed (with any innodb_checksum_algorithm).

fil_page_decompress(): Pass also fil_space_t::flags so that the format can be determined.

buf_page_is_zeroes(): Check whether a page is full of zero bytes.

buf_page_full_crc32_is_corrupted(): Renamed from buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32, we always simply validate the checksum against the page contents, while the physical page size is explicitly specified by an unencrypted part of the page header.

buf_page_full_crc32_size(): Determine the size of a full_crc32 page.

buf_dblwr_check_page_lsn(): Make this a debug-only function, because it involves potentially costly lookups of fil_space_t.

create_table_info_t::check_table_options(), ha_innobase::check_if_supported_inplace_alter(): Do allow the creation of SPATIAL INDEX with full_crc32 also when page_compressed is used.

commit_cache_norebuild(): Preserve the compression algorithm when updating the page_compression_level.

dict_tf_to_fsp_flags(): Set the flags for the page compression algorithm.

FIXME: Maybe there should be a table option page_compression_algorithm and a session variable to back it?
7 years ago
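The FIL_PAGE_TYPE encoding above packs two facts into one 16-bit field. Below is one plausible packing consistent with that description (the most significant bit flags page_compressed; the remaining bits carry the compressed size in 256-byte units, which fits because the size was rounded up to a multiple of 256); the helper names are invented for illustration:

    #include <cassert>
    #include <cstdint>

    // Round the compressed length up to a multiple of 256 bytes and
    // store it, in 256-byte units, below the page_compressed flag bit.
    static uint16_t encode_page_type(uint32_t compressed_len)
    {
      uint32_t units = (compressed_len + 255) / 256;
      return uint16_t((1u << 15) | units);
    }

    static uint32_t decode_compressed_size(uint16_t page_type)
    {
      assert(page_type & (1u << 15));   // must be page_compressed
      return uint32_t(page_type & 0x7fffu) * 256;
    }

    int main()
    {
      // A 5000-byte payload occupies 20 chunks, i.e. 5120 bytes.
      assert(decode_compressed_size(encode_page_type(5000)) == 5120);
    }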
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in one way: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm and to store the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page.

In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed over the page contents that are written to the file.

We do not make the new algorithm the default, for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
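The checksum rule stated above is easy to express directly: a CRC-32C over the whole page except its last 4 bytes, which hold the checksum itself, with no skipped byte ranges. A minimal sketch, with a simple bitwise CRC-32C standing in for the optimized implementation; the function names are illustrative, and big-endian storage is assumed to match InnoDB's on-disk integer format:

    #include <cstddef>
    #include <cstdint>

    // Reference bitwise CRC-32C (Castagnoli), reflected polynomial.
    static uint32_t crc32c(const uint8_t* p, size_t n)
    {
      uint32_t crc = 0xFFFFFFFFu;
      while (n--)
      {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
          crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1)));
      }
      return ~crc;
    }

    // The single checksum covers everything before the last 4 bytes.
    static void page_set_full_crc32(uint8_t* page, size_t page_size)
    {
      const uint32_t crc = crc32c(page, page_size - 4);
      page[page_size - 4] = uint8_t(crc >> 24);
      page[page_size - 3] = uint8_t(crc >> 16);
      page[page_size - 2] = uint8_t(crc >> 8);
      page[page_size - 1] = uint8_t(crc);
    }

    static bool page_verify_full_crc32(const uint8_t* page,
                                       size_t page_size)
    {
      const uint32_t stored = uint32_t(page[page_size - 4]) << 24
                            | uint32_t(page[page_size - 3]) << 16
                            | uint32_t(page[page_size - 2]) << 8
                            | uint32_t(page[page_size - 1]);
      return stored == crc32c(page, page_size - 4);
    }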
MDEV-23855: Remove fil_system.LRU and reduce fil_system.mutex contention

Also fixes MDEV-23929: innodb_flush_neighbors is not being ignored for system tablespace on SSD

When the maximum configured number of files is exceeded, InnoDB will close data files. We used to maintain a fil_system.LRU list and a counter fil_node_t::n_pending to achieve this, at the huge cost of multiple fil_system.mutex operations per I/O operation.

fil_node_open_file_low(): Implement a FIFO replacement policy: the last opened file will be moved to the end of fil_system.space_list, and files will be closed from the start of the list. However, we will not move tablespaces within fil_system.space_list while i_s_tablespaces_encryption_fill_table() is executing (producing output for INFORMATION_SCHEMA.INNODB_TABLESPACES_ENCRYPTION), because doing so could cause information on some tablespaces to go missing. We also avoid this in mariabackup --backup, because datafiles_iter_next() assumes that the ordering does not change.

IORequest: Fold more parameters into IORequest::type.

fil_space_t::io(): Replaces fil_io().

fil_space_t::flush(): Replaces fil_flush().

OS_AIO_IBUF: Remove. We will always issue synchronous reads of the change buffer pages in buf_read_page_low(). We will always ignore some errors for background reads. This should reduce fil_system.mutex contention a little.

fil_node_t::complete_write(): Replaces fil_node_t::complete_io(). On both read and write completion, fil_space_t::release_for_io() will have to be called.

fil_space_t::io(): Do not acquire fil_system.mutex in the normal code path.

xb_delta_open_matching_space(): Do not try to open the system tablespace, which was already opened. This fixes a file sharing violation in mariabackup --prepare --incremental.

Reviewed by: Vladislav Vaintroub
5 years ago
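A toy model of the FIFO replacement policy described for fil_node_open_file_low(); standard containers stand in for the real fil_system.space_list, and max_open stands in for the configured file limit:

    #include <cstdio>
    #include <list>
    #include <string>

    struct open_file { std::string name; FILE* handle; };

    static std::list<open_file> open_order;  // front = opened earliest
    static const size_t max_open = 4;        // configured file limit

    static FILE* open_with_fifo(const std::string& name)
    {
      if (open_order.size() >= max_open)
      {
        // Close from the start of the list: the oldest open file.
        fclose(open_order.front().handle);
        open_order.pop_front();
      }
      FILE* f = fopen(name.c_str(), "rb");
      if (f)
        open_order.push_back({name, f});     // newest goes to the end
      return f;
    }

Unlike a true LRU, the list is only relinked when a file is opened, not on every I/O; that is what removes the per-I/O fil_system.mutex operations.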
MDEV-23399: Performance regression with write workloads

The buffer pool refactoring in MDEV-15053 and MDEV-22871 shifted the performance bottleneck to the page flushing. The configuration parameters will be changed as follows:

innodb_lru_flush_size=32 (new: how many pages to flush on LRU eviction)
innodb_lru_scan_depth=1536 (old: 1024)
innodb_max_dirty_pages_pct=90 (old: 75)
innodb_max_dirty_pages_pct_lwm=75 (old: 0)

Note: The parameter innodb_lru_scan_depth will only affect LRU eviction of buffer pool pages when a new page is being allocated. The page cleaner thread will no longer evict any pages. It used to guarantee that some pages will remain free in the buffer pool. Now, we perform that eviction 'on demand' in buf_LRU_get_free_block(). The parameter innodb_lru_scan_depth (srv_LRU_scan_depth) is used as follows:

* When the buffer pool is being shrunk in buf_pool_t::withdraw_blocks()
* As a buf_pool.free limit in buf_LRU_list_batch() for terminating the flushing that is initiated e.g. by buf_LRU_get_free_block()

The parameter also used to serve as an initial limit for unzip_LRU eviction (evicting uncompressed page frames while retaining ROW_FORMAT=COMPRESSED pages), but now we will use a hard-coded limit of 100 or unlimited for invoking buf_LRU_scan_and_free_block().

The status variables will be changed as follows:

innodb_buffer_pool_pages_flushed: This also includes the count of innodb_buffer_pool_pages_LRU_flushed and should work reliably, updated one by one in buf_flush_page() to give more real-time statistics. The function buf_flush_stats(), which we are removing, was not called in every code path. For both counters, we will use regular variables that are incremented in a critical section of buf_pool.mutex. Note that show_innodb_vars() directly links to the variables, and reads of the counters will *not* be protected by buf_pool.mutex, so you cannot get a consistent snapshot of both variables.

The following INFORMATION_SCHEMA.INNODB_METRICS counters will be removed, because the page cleaner no longer deals with writing or evicting least recently used pages, and because the single-page writes have been removed:

* buffer_LRU_batch_flush_avg_time_slot
* buffer_LRU_batch_flush_avg_time_thread
* buffer_LRU_batch_flush_avg_time_est
* buffer_LRU_batch_flush_avg_pass
* buffer_LRU_single_flush_scanned
* buffer_LRU_single_flush_num_scan
* buffer_LRU_single_flush_scanned_per_call

When moving to a single buffer pool instance in MDEV-15058, we missed some opportunity to simplify the buf_flush_page_cleaner thread. It was unnecessarily using a mutex and some complex data structures, even though we always have a single page cleaner thread. Furthermore, the buf_flush_page_cleaner thread had separate 'recovery' and 'shutdown' modes where it was waiting to be triggered by some other thread, adding unnecessary latency and potential for hangs in relatively rarely executed startup or shutdown code.

The page cleaner was also running two kinds of batches in an interleaved fashion: "LRU flush" (writing out some least recently used pages and evicting them on write completion) and the normal batches that aim to increase the MIN(oldest_modification) in the buffer pool, to help the log checkpoint advance.

The buf_pool.flush_list flushing was being blocked by buf_block_t::lock for no good reason. Furthermore, if the FIL_PAGE_LSN of a page is ahead of log_sys.get_flushed_lsn(), that is, what has been persistently written to the redo log, we would trigger a log flush and then resume the page flushing. This would unnecessarily limit the performance of the page cleaner thread and trigger the infamous messages "InnoDB: page_cleaner: 1000ms intended loop took 4450ms. The settings might not be optimal" that were suppressed in commit d1ab89037a518fcffbc50c24e4bd94e4ec33aed0 unless log_warnings>2.

Our revised algorithm will make log_sys.get_flushed_lsn() advance at the start of buf_flush_lists(), and then execute a 'best effort' to write out all pages. The flush batches will skip pages that were modified since the log was written, or that are currently exclusively locked. The MDEV-13670 message "page_cleaner: 1000ms intended loop took" will be removed, because by design, buf_flush_page_cleaner() should not be blocked during a batch for extended periods of time.

We will remove the single-page flushing altogether. Related to this, the debug parameter innodb_doublewrite_batch_size will be removed, because all of the doublewrite buffer will be used for flushing batches. If a page needs to be evicted from the buffer pool and all 100 least recently used pages in the buffer pool have unflushed changes, buf_LRU_get_free_block() will execute buf_flush_lists() to write out and evict innodb_lru_flush_size pages. At most one thread will execute buf_flush_lists() in buf_LRU_get_free_block(); other threads will wait for that LRU flushing batch to finish (see the sketch after this message).

To improve concurrency, we will replace the InnoDB ib_mutex_t and os_event_t native mutexes and condition variables in this area of code. Most notably, this means that the buffer pool mutex (buf_pool.mutex) is no longer instrumented via any InnoDB interfaces. It will continue to be instrumented via PERFORMANCE_SCHEMA. For now, both buf_pool.flush_list_mutex and buf_pool.mutex will be declared with MY_MUTEX_INIT_FAST (PTHREAD_MUTEX_ADAPTIVE_NP). The critical sections of buf_pool.flush_list_mutex should be shorter than those for buf_pool.mutex, because in the worst case they cover a linear scan of buf_pool.flush_list, while the worst case of a critical section of buf_pool.mutex covers a linear scan of the potentially much longer buf_pool.LRU list.

mysql_mutex_is_owner(), safe_mutex_is_owner(): New predicate, usable with SAFE_MUTEX. Some InnoDB debug assertions need this predicate instead of mysql_mutex_assert_owner() or mysql_mutex_assert_not_owner().

buf_pool_t::n_flush_LRU, buf_pool_t::n_flush_list: Replace buf_pool_t::init_flush[] and buf_pool_t::n_flush[]. The number of active flush operations.

buf_pool_t::mutex, buf_pool_t::flush_list_mutex: Use mysql_mutex_t instead of ib_mutex_t, to have native mutexes with PERFORMANCE_SCHEMA and SAFE_MUTEX instrumentation.

buf_pool_t::done_flush_LRU: Condition variable for !n_flush_LRU.

buf_pool_t::done_flush_list: Condition variable for !n_flush_list.

buf_pool_t::do_flush_list: Condition variable to wake up the buf_flush_page_cleaner when a log checkpoint needs to be written or the server is being shut down. Replaces buf_flush_event. We will keep using timed waits (the page cleaner thread will wake _at least_ once per second), because the calculations for innodb_adaptive_flushing depend on fixed time intervals.

buf_dblwr: Allocate statically, and move all code to member functions. Use a native mutex and condition variable. Remove the code that dealt with single-page flushing.

buf_dblwr_check_block(): Make the check debug-only. We were spending a significant amount of execution time in page_simple_validate_new().

flush_counters_t::unzip_LRU_evicted: Remove.

IORequest: Make more members const. FIXME: m_fil_node should be removed.

buf_flush_sync_lsn: Protect by std::atomic, not by page_cleaner.mutex (which we are removing).

page_cleaner_slot_t, page_cleaner_t: Remove many redundant members.

pc_request_flush_slot(): Replaces pc_request() and pc_flush_slot().

recv_writer_thread: Remove. Recovery works just fine without it, if we simply invoke buf_flush_sync() at the end of each batch in recv_sys_t::apply().

recv_recovery_from_checkpoint_finish(): Remove. We can simply call recv_sys.debug_free() directly.

srv_started_redo: Replaces srv_start_state.

SRV_SHUTDOWN_FLUSH_PHASE: Remove. logs_empty_and_mark_files_at_shutdown() can communicate with the normal page cleaner loop via the new function flush_buffer_pool().

buf_flush_remove(): Assert that the calling thread is holding buf_pool.flush_list_mutex. This removes unnecessary mutex operations from buf_flush_remove_pages() and buf_flush_dirty_pages(), which replace buf_LRU_flush_or_remove_pages().

buf_flush_lists(): Renamed from buf_flush_batch(), with a simplified interface. Return the number of flushed pages. Clarified comments and renamed min_n to max_n. Identify an LRU batch by lsn=0. Merge all of the functions buf_flush_start(), buf_flush_batch(), buf_flush_end() directly into this function, which was their only caller, and remove 2 unnecessary buf_pool.mutex release/re-acquisitions that we used to perform around the buf_flush_batch() call. At the start, if not all log has been durably written, wait for a background task to do it, or start a new task to do it. This allows the log write to run concurrently with our page flushing batch. Any pages that were skipped due to a too recent FIL_PAGE_LSN or due to being latched by a writer should be flushed during the next batch, unless there are further modifications to those pages. It is possible that a page that we must flush due to a small oldest_modification also carries a recent FIL_PAGE_LSN or is being constantly modified. In the worst case, all writers would then end up waiting in log_free_check() to allow the flushing and the checkpoint to complete.

buf_do_flush_list_batch(): Clarify comments, and rename min_n to max_n. Cache the last looked-up tablespace. If neighbor flushing is not applicable, invoke buf_flush_page() directly, avoiding a page lookup in between.

buf_flush_space(): Auxiliary function to look up a tablespace for page flushing.

buf_flush_page(): Defer the computation of space->full_crc32(). Never call log_write_up_to(), but instead skip persistent pages whose latest modification (FIL_PAGE_LSN) is newer than the redo log. Also skip pages on which we cannot acquire a shared latch without waiting.

buf_flush_try_neighbors(): Do not bother checking buf_fix_count, because buf_flush_page() will no longer wait for the page latch. Take the tablespace as a parameter, and only execute this function when innodb_flush_neighbors>0. Avoid repeated calls of page_id_t::fold().

buf_flush_relocate_on_flush_list(): Declare as cold, and push down a condition from the callers.

buf_flush_check_neighbor(): Take id.fold() as a parameter.

buf_flush_sync(): Ensure that the buf_pool.flush_list is empty, because the flushing batch will skip pages whose modifications have not yet been written to the log or that were latched for modification.

buf_free_from_unzip_LRU_list_batch(): Remove redundant local variables.

buf_flush_LRU_list_batch(): Let the caller buf_do_LRU_batch() initialize the counters, and report n->evicted. Cache the last looked-up tablespace. If neighbor flushing is not applicable, invoke buf_flush_page() directly, avoiding a page lookup in between.

buf_do_LRU_batch(): Return the number of pages flushed.

buf_LRU_free_page(): Only release and re-acquire buf_pool.mutex if adaptive hash index entries are pointing to the block.

buf_LRU_get_free_block(): Do not wake up the page cleaner, because it will no longer perform any useful work for us, and we do not want it to compete for I/O while buf_flush_lists(innodb_lru_flush_size, 0) writes out and evicts at most innodb_lru_flush_size pages. (The function buf_do_LRU_batch() may complete after writing fewer pages if more than innodb_lru_scan_depth pages end up in the buf_pool.free list.) Eliminate some mutex release-acquire cycles, and wait for the LRU flush batch to complete before rescanning.

buf_LRU_check_size_of_non_data_objects(): Simplify the code.

buf_page_write_complete(): Remove the parameter evict, and always evict pages that were part of an LRU flush.

buf_page_create(): Take a pre-allocated page as a parameter.

buf_pool_t::free_block(): Free a pre-allocated block.

recv_sys_t::recover_low(), recv_sys_t::apply(): Preallocate the block while not holding recv_sys.mutex. During page allocation, we may initiate a page flush, which in turn may initiate a log flush, which would require acquiring log_sys.mutex, which should always be acquired before recv_sys.mutex in order to avoid deadlocks. Therefore, we must not be holding recv_sys.mutex while allocating a buffer pool block.

BtrBulk::logFreeCheck(): Skip a redundant condition.

row_undo_step(): Do not invoke srv_inc_activity_count() for every row that is being rolled back. It should suffice to invoke the function in trx_flush_log_if_needed() during trx_t::commit_in_memory() when the rollback completes.

sync_check_enable(): Remove. We will enable innodb_sync_debug from the very beginning.

Reviewed by: Vladislav Vaintroub
5 years ago
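The rule that at most one thread runs buf_flush_lists() from buf_LRU_get_free_block(), while other threads wait on a condition variable for that batch to finish, can be modelled with standard primitives. This is a simplification of the buf_pool.done_flush_LRU handshake, not the actual code:

    #include <condition_variable>
    #include <mutex>

    static std::mutex pool_mutex;           // stands in for buf_pool.mutex
    static std::condition_variable done_flush_LRU;
    static bool flush_in_progress = false;

    static void flush_and_evict_some_pages() { /* write out pages */ }

    void get_free_block_sketch()
    {
      std::unique_lock<std::mutex> lk(pool_mutex);
      if (flush_in_progress)
      {
        // Someone else is flushing; wait instead of competing for I/O.
        done_flush_LRU.wait(lk, [] { return !flush_in_progress; });
        return;                             // rescan the free list now
      }
      flush_in_progress = true;
      lk.unlock();
      flush_and_evict_some_pages();         // innodb_lru_flush_size pages
      lk.lock();
      flush_in_progress = false;
      done_flush_LRU.notify_all();
    }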
MDEV-15053 Reduce buf_pool_t::mutex contention

User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE.

We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently.

Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, relying only on the buf_pool.hash_lock_get() S-latch (see the sketch after this message). Similarly, buf_flush_check_neighbors() will rely solely on buf_pool.mutex, with no buf_pool.page_hash latch at all.

The buf_pool.mutex is rather contended in I/O-heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858, which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058).

UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug.

buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would appear X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 for how this glitch was finally caught in a debug environment.

We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability.

buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown.

buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists.

buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash.

buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm().

buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES.

buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter.

buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out().

buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed.

buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary() for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables.

buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments.

buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.

buf_LRU_free_page(): Clarify the function comment.

buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash.

buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch().

buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t.

ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops.

fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.

buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here.

buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex.

IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else.

buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call, and that function already takes IORequest as a parameter, so IORequest is a better home for this rarely changing field.

buf_block_t::init(): Replaces buf_page_init().

buf_page_t::init(): Replaces buf_page_init_low().

buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list.

buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring the hash_lock, in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block.

buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables.

buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion.

buf_page_should_punch_hole(): Remove.

buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size).

buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size).

fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired.

fil_node_t::complete_io(): Renamed from fil_node_complete_io().

fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close(), because fil_system.n_open has already been decremented in fil_space_t::detach().

BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.

BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification().

BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.

buf_pool_t::watch[]: Remove pointer indirection.

buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero.

buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t&, so that we can report the correct file name.

buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.

buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().

buf_dblwr_t::in_use: Remove.

buf_dblwr_t::buf_block_array: Add IORequest::flush_t.

buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().

buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.

buf_flush_freed_page(): Simplify the code.

recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.

fil_read(), fil_write(): Replaced with direct use of fil_io().

fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.

fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.

fil_report_invalid_page_access(): Clean up the parameters.

fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io(), and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().

fil_aio_callback(): Rewrite to replace buf_page_io_complete().

fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.

fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.

fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.

row_import_discard_changes(): Merged with its only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().

logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().

innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().

fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.

thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
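The page_hash locking rule above (lookups take only a shared latch; insert and eviction take the latch exclusively, together with buf_pool.mutex) can be modelled like this; std::shared_mutex and std::unordered_map stand in for the real latches and hash table:

    #include <cstdint>
    #include <shared_mutex>
    #include <unordered_map>

    struct buf_page_sketch { /* page contents, state, ... */ };

    static std::shared_mutex hash_latch;  // stands in for hash_lock_get()
    static std::unordered_map<uint64_t, buf_page_sketch*> page_hash;

    buf_page_sketch* page_hash_lookup(uint64_t page_id_fold)
    {
      // Lookup: S-latch only; buf_pool.mutex is never touched.
      std::shared_lock<std::shared_mutex> s(hash_latch);
      auto it = page_hash.find(page_id_fold);
      return it == page_hash.end() ? nullptr : it->second;
    }

    void page_hash_insert(uint64_t page_id_fold, buf_page_sketch* bpage)
    {
      // Insert/evict: X-latch (in InnoDB, buf_pool.mutex as well).
      std::unique_lock<std::shared_mutex> x(hash_latch);
      page_hash.emplace(page_id_fold, bpage);
    }

Because lookups vastly outnumber insertions and evictions, moving the read path to a shared latch removes most of the traffic from the contended buf_pool.mutex.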
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment.

buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash.

buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch().

buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t.

ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops.

fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.

buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here.

buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex.

IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else.

buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call, and that function already takes IORequest as a parameter, which is a better home for this rarely changing field.

buf_block_t::init(): Replaces buf_page_init().

buf_page_t::init(): Replaces buf_page_init_low().

buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list.

buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring the hash_lock, in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block; see the sketch below.

buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables.

buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion.

buf_page_should_punch_hole(): Remove.

buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size).

buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size).

fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired.

fil_node_t::complete_io(): Renamed from fil_node_complete_io().

fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close(), because fil_system.n_open has already been decremented in fil_space_t::detach().

BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.

BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification().

BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.
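The modify_clock idea behind buf_page_optimistic_get() can be shown with a self-contained sketch. Names here (Block, optimistic_get) are illustrative, not InnoDB's: a reader samples the clock while holding the block latch, releases everything, and can later re-latch and compare the clock to detect whether the block was modified (or reused for another page) in the meantime.

#include <atomic>
#include <cstdint>
#include <mutex>
#include <shared_mutex>

// Sketch only: a counter bumped on every modification lets a reader
// validate an optimistic re-access of the block.
struct Block
{
  std::shared_mutex lock;                     // stands in for buf_block_t::lock
  std::atomic<std::uint64_t> modify_clock{0}; // bumped under the X-latch

  void modify()                               // writer path
  {
    std::unique_lock<std::shared_mutex> g(lock);
    modify_clock.fetch_add(1, std::memory_order_relaxed);
    // ... change the page contents under the exclusive latch ...
  }

  // Optimistic re-access: succeeds only if nothing changed since the
  // caller sampled 'saved' while holding the latch.
  bool optimistic_get(std::uint64_t saved)
  {
    std::shared_lock<std::shared_mutex> g(lock);
    return modify_clock.load(std::memory_order_relaxed) == saved;
  }
};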
buf_pool_t::watch[]: Remove pointer indirection.

buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero; see the sketch below.

buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name.

buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.

buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().

buf_dblwr_t::in_use: Remove.

buf_dblwr_t::buf_block_array: Add IORequest::flush_t.

buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().

buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.

buf_flush_freed_page(): Simplify the code.

recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.

fil_read(), fil_write(): Replaced with direct use of fil_io().

fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.

fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.

fil_report_invalid_page_access(): Clean up the parameters.

fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().

fil_aio_callback(): Rewrite to replace buf_page_io_complete().

fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.

fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.

fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.

row_import_discard_changes(): Merged with its only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().

logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().

innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().

fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.

thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
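The in_flush_list removal above amounts to replacing a stored boolean with a derived predicate. A minimal sketch, with illustrative member and function names: "the page is in the flush list" is computed from the oldest-modification LSN, which is nonzero exactly for dirty pages.

#include <atomic>
#include <cstdint>

// Sketch only: dirtiness is derived from the oldest-modification LSN
// instead of being tracked in a separate flag.
struct Page
{
  std::atomic<std::uint64_t> oldest_modification_{0};

  std::uint64_t oldest_modification() const
  { return oldest_modification_.load(std::memory_order_acquire); }

  // replaces the removed in_flush_list flag
  bool in_flush_list() const { return oldest_modification() != 0; }

  void set_dirty(std::uint64_t lsn)  // first change of the page is logged
  { oldest_modification_.store(lsn, std::memory_order_release); }

  void write_complete()              // the page has been flushed to disk
  { oldest_modification_.store(0, std::memory_order_release); }
};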
5 years ago
MDEV-18644: Support full_crc32 for page_compressed

This is a follow-up task to MDEV-12026, which introduced innodb_checksum_algorithm=full_crc32 and a simpler page format. MDEV-12026 did not enable full_crc32 for page_compressed tables; we will be doing that now. This is joint work with Thirunarayanan Balathandayuthapani.

For innodb_checksum_algorithm=full_crc32 we change the page_compressed format as follows:

FIL_PAGE_TYPE: The most significant bit will be set to indicate the page_compressed format. The least significant bits will contain the compressed page size, rounded up to a multiple of 256 bytes. (A sketch of this encoding follows this entry.)

The checksum will be stored in the last 4 bytes of the page (whether it is the full page or a page_compressed page whose size is determined by FIL_PAGE_TYPE), covering all preceding bytes of the page. If encryption is used, then the page will be encrypted between compression and computing the checksum.

For page_compressed, FIL_PAGE_LSN will not be repeated at the end of the page.

FSP_SPACE_FLAGS (already implemented as part of MDEV-12026): We will store the innodb_compression_algorithm that may be used to compress pages. Previously, the choice of algorithm was written to each compressed data page separately, and one would be unable to know in advance which compression algorithm(s) are used.

fil_space_t::full_crc32_page_compressed_len(): Determine whether the page_compressed algorithm of the tablespace needs to know the exact length of the compressed data. If yes, we will reserve and write an extra byte for this right before the checksum.

buf_page_is_compressed(): Determine whether a page uses page_compressed (in any innodb_checksum_algorithm).

fil_page_decompress(): Pass also fil_space_t::flags so that the format can be determined.

buf_page_is_zeroes(): Check whether a page is full of zero bytes.

buf_page_full_crc32_is_corrupted(): Renamed from buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32, we always simply validate the checksum against the page contents, while the physical page size is explicitly specified by an unencrypted part of the page header.

buf_page_full_crc32_size(): Determine the size of a full_crc32 page.

buf_dblwr_check_page_lsn(): Make this a debug-only function, because it involves potentially costly lookups of fil_space_t.

create_table_info_t::check_table_options(), ha_innobase::check_if_supported_inplace_alter(): Allow the creation of SPATIAL INDEX with full_crc32 also when page_compressed is used.

commit_cache_norebuild(): Preserve the compression algorithm when updating the page_compression_level.

dict_tf_to_fsp_flags(): Set the flags for the page compression algorithm.

FIXME: Maybe there should be a table option page_compression_algorithm and a session variable to back it?
7 years ago
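The FIL_PAGE_TYPE encoding described in the entry above (most significant bit marks page_compressed; low bits carry the compressed size in 256-byte multiples) can be illustrated with a small sketch. The constant and function names below are illustrative, not InnoDB's.

#include <cstdint>

// Sketch only: pack and unpack the full_crc32 page_compressed type field.
constexpr std::uint16_t PAGE_COMPRESSED_MARKER = 1U << 15;

inline std::uint16_t encode_page_type(std::uint32_t compressed_len)
{
  // round up to a multiple of 256, then store in units of 256 bytes
  const std::uint32_t rounded = (compressed_len + 255) & ~255U;
  return std::uint16_t(PAGE_COMPRESSED_MARKER | (rounded >> 8));
}

inline bool is_page_compressed(std::uint16_t type)
{ return (type & PAGE_COMPRESSED_MARKER) != 0; }

inline std::uint32_t compressed_size(std::uint16_t type)
{ return std::uint32_t(type & ~PAGE_COMPRESSED_MARKER) << 8; }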
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors. Remove duplicated code. Use the "safe" instead of the "fast" variants of the decompression routines.

fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.

buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. (A sketch follows this entry.)

buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.

buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().

buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.

buf_page_io_complete(), buf_dblwr_process(): Check more failures.

fil_space_encrypt(): Simplify the debug checks.

fil_space_t::printed_compression_failure: Remove.

fil_get_compression_alg_name(): Remove.

fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables.

fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).

AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
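The buf_tmp_buffer_t::reserved accessors described in the entry above can be sketched with a single atomic flag: a temporary buffer slot is claimed and released without holding any mutex. Only the member names follow the commit message; everything else is illustrative.

#include <atomic>

// Sketch only: mutex-free reservation of a temporary buffer slot.
class buf_tmp_buffer_t
{
  std::atomic<bool> reserved{false};
public:
  // Try to claim the slot; returns true if this caller got it.
  bool acquire()
  { return !reserved.exchange(true, std::memory_order_acquire); }

  // Return the slot to the pool of free slots.
  void release()
  { reserved.store(false, std::memory_order_release); }
};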
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in one way: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32. (A validation sketch follows this entry.)

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page.

In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file.

We do not make the new algorithm the default, for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we had not yet implemented the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
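The full_crc32 rule described in the entry above (one CRC-32C over every page byte except the final 4, which hold the checksum) can be sketched as follows. The function names are illustrative; the bitwise CRC-32C below is a plain software implementation standing in for the hardware-accelerated one InnoDB actually uses, and reading the stored value as big-endian is an assumption of this sketch.

#include <cstddef>
#include <cstdint>

// Sketch only: bitwise CRC-32C (reflected polynomial 0x82F63B78).
std::uint32_t crc32c(const std::uint8_t *data, std::size_t len)
{
  std::uint32_t crc = 0xFFFFFFFFU;
  for (std::size_t i = 0; i < len; i++)
  {
    crc ^= data[i];
    for (int b = 0; b < 8; b++)
      crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1)));
  }
  return ~crc;
}

// Validate a page: checksum everything except the trailing 4 bytes,
// which hold the stored checksum (assumed big-endian here).
bool full_crc32_page_is_valid(const std::uint8_t *page, std::size_t size)
{
  const std::uint32_t stored = std::uint32_t(page[size - 4]) << 24 |
                               std::uint32_t(page[size - 3]) << 16 |
                               std::uint32_t(page[size - 2]) << 8 |
                               std::uint32_t(page[size - 1]);
  return crc32c(page, size - 4) == stored;
}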
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums. We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in a way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32 and the value full_crc32 will be equivalent to crc32. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them. The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file. We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644. This is joint work with Marko Mäkelä.
7 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention

User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE.

We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently.

Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, relying only on the buf_pool.hash_lock_get() S-latch (see the sketch after this entry). Similarly, buf_flush_check_neighbors() will rely solely on buf_pool.mutex, with no buf_pool.page_hash latch at all.

The buf_pool.mutex is rather contended in I/O-heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858, which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058).

UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug.

buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would appear X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 for how this glitch was finally caught in a debug environment.

We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability.

buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown.

buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists.

buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash.

buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm().

buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES.

buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter.

buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out().

buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed.

buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary() for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables.

buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments.

buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.

buf_LRU_free_page(): Clarify the function comment.

buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash.

buf_flush_page_and_try_neighbors(): Remove. Merge into its only callers, and remove redundant operations in buf_flush_LRU_list_batch().

buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t.

ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops.

fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.

buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here.

buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex.

IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else.

buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so IORequest is a better home for this rarely changing field.

buf_block_t::init(): Replaces buf_page_init().

buf_page_t::init(): Replaces buf_page_init_low().

buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() require that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list.

buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring the hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block.

buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables.

buf_page_encrypt(): Add an output parameter that allows us to eliminate buf_page_t::real_size. Replace a condition with a debug assertion.

buf_page_should_punch_hole(): Remove.

buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size).

buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size).

fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired.

fil_node_t::complete_io(): Renamed from fil_node_complete_io().

fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach().

BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.

BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification().

BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch.

buf_pool_t::watch[]: Remove pointer indirection.

buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero.

buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name.

buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.

buf_page_io_complete(): Split into buf_page_read_complete() and buf_page_write_complete().

buf_dblwr_t::in_use: Remove.

buf_dblwr_t::buf_block_array: Add IORequest::flush_t.

buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes().

buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t.

buf_flush_freed_page(): Simplify the code.

recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.

fil_read(), fil_write(): Replaced with direct use of fil_io().

fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.

fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller.

fil_report_invalid_page_access(): Clean up the parameters.

fil_io(): Return fil_io_t, which comprises fil_node_t and an error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io().

fil_aio_callback(): Rewrite to replace buf_page_io_complete().

fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups.

fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter.

fil_close_tablespace(): Remove the unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE.

row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace().

logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close().

innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().

fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability.

thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub.

Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
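To make the latching scheme above concrete, here is a minimal C++ sketch of the idea behind buf_pool_t::page_hash_lock<bool exclusive>() and buf_pool.hash_lock_get(): the page hash is partitioned, each partition has its own rw-latch, and a lookup takes only the S-latch of one partition, never buf_pool.mutex. The container, the partition count N_PARTS, and this page_id_t are simplified stand-ins for illustration, not the actual InnoDB definitions.

#include <cstdint>
#include <shared_mutex>
#include <unordered_map>

struct page_id_t {
  uint32_t space, page_no;
  uint64_t fold() const { return (uint64_t{space} << 32) | page_no; }
};

struct buf_page_t { /* block descriptor; contents irrelevant here */ };

struct buf_pool_sketch {
  static constexpr size_t N_PARTS = 64;    // hypothetical partition count
  std::shared_mutex hash_latch[N_PARTS];   // one latch per page_hash partition
  std::unordered_map<uint64_t, buf_page_t*> page_hash;

  // Counterpart of buf_pool.hash_lock_get(): map a page to its partition latch.
  std::shared_mutex& hash_lock_get(const page_id_t& id) {
    return hash_latch[id.fold() % N_PARTS];
  }

  // Counterpart of buf_pool_t::page_hash_lock<bool exclusive>():
  // acquire the partition latch in X or S mode.
  template <bool exclusive>
  std::shared_mutex& page_hash_lock(const page_id_t& id) {
    std::shared_mutex& latch = hash_lock_get(id);
    if constexpr (exclusive) latch.lock(); else latch.lock_shared();
    return latch;
  }

  // A lookup S-latches a single partition; buf_pool.mutex is never taken.
  buf_page_t* page_hash_get(const page_id_t& id) {
    std::shared_lock<std::shared_mutex> s_latch(hash_lock_get(id));
    auto it = page_hash.find(id.fold());
    return it == page_hash.end() ? nullptr : it->second;
  }
};

Insertions and evictions would take the X-latch of the affected partition (and, per the entry above, buf_pool.mutex), so readers on other partitions proceed without contention.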
MDEV-13103 Deal with page_compressed page corruption

fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller to detect errors (see the sketch after this entry). Remove duplicated code. Use the "safe" instead of the "fast" variants of the decompression routines.

fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter.

buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access.

buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers.

buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot().

buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing.

buf_page_io_complete(), buf_dblwr_process(): Check more failures.

fil_space_encrypt(): Simplify the debug checks.

fil_space_t::printed_compression_failure: Remove.

fil_get_compression_alg_name(): Remove.

fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables.

fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).

AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
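The key contract change above is that fil_page_decompress() lets the caller detect corruption instead of handling it internally. Below is a minimal sketch of such a contract, using zlib's bounds-checked uncompress() as a stand-in for whichever "safe" codec variant the page actually uses; the function name and parameters are illustrative, not the real InnoDB signature.

#include <cstddef>
#include <cstring>
#include <zlib.h>

// Returns the decompressed length on success, or 0 so that the caller can
// flag the page as corrupted (and, for example, recover it from the
// doublewrite buffer) instead of the decompressor failing internally.
size_t page_decompress_sketch(unsigned char* tmp_buf, unsigned char* page,
                              size_t srv_page_size, size_t compressed_len)
{
  uLongf dst_len = srv_page_size;
  // uncompress() is zlib's bounds-checked one-shot routine: on corrupt
  // input it fails cleanly rather than overrunning the output buffer,
  // which is exactly the "safe" behaviour wanted here.
  if (uncompress(tmp_buf, &dst_len, page, compressed_len) != Z_OK)
    return 0;                 // corruption is reported, not handled here
  if (dst_len != srv_page_size)
    return 0;                 // unexpected size is also treated as corruption
  memcpy(page, tmp_buf, srv_page_size);
  return dst_len;
}

The "fast" variants of some codecs skip these bounds checks for speed; on a corrupted page they can overrun buffers, which is why the entry above switches to the "safe" ones.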
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages.

Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in one way: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents, excluding the bytes where the checksum is stored, at the very end of the page (see the sketch after this entry). Such files will always use that checksum, no matter what value the parameter innodb_checksum_algorithm is set to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm and to store the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page.

In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed over the page contents as written to the file.

We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
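The full_crc32 checksum rule described above is simple: one CRC-32C over everything except the final 4 bytes of the page, which hold the checksum itself. Here is a self-contained sketch, with a plain bitwise CRC-32C (a server would normally use a table-driven or hardware-accelerated implementation) and assuming the big-endian encoding that InnoDB uses for on-page integers; the helper names are illustrative.

#include <cstddef>
#include <cstdint>

// Plain bitwise CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78).
static uint32_t crc32c(const uint8_t* buf, size_t len) {
  uint32_t crc = 0xFFFFFFFFu;
  for (size_t i = 0; i < len; i++) {
    crc ^= buf[i];
    for (int k = 0; k < 8; k++)
      crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1)));
  }
  return ~crc;
}

// Compute the checksum over everything except the last 4 bytes,
// then store it big-endian in those last 4 bytes.
void full_crc32_write(uint8_t* page, size_t page_size) {
  const uint32_t c = crc32c(page, page_size - 4);
  page[page_size - 4] = uint8_t(c >> 24);
  page[page_size - 3] = uint8_t(c >> 16);
  page[page_size - 2] = uint8_t(c >> 8);
  page[page_size - 1] = uint8_t(c);
}

// Validation recomputes the same CRC and compares it to the stored value;
// no bytes of the page body are skipped, unlike the older algorithms.
bool full_crc32_is_valid(const uint8_t* page, size_t page_size) {
  const uint32_t stored = uint32_t(page[page_size - 4]) << 24
                        | uint32_t(page[page_size - 3]) << 16
                        | uint32_t(page[page_size - 2]) << 8
                        | uint32_t(page[page_size - 1]);
  return stored == crc32c(page, page_size - 4);
}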
MDEV-18644: Support full_crc32 for page_compressed

This is a follow-up task to MDEV-12026, which introduced innodb_checksum_algorithm=full_crc32 and a simpler page format. MDEV-12026 did not enable full_crc32 for page_compressed tables, which we will be doing now. This is joint work with Thirunarayanan Balathandayuthapani.

For innodb_checksum_algorithm=full_crc32 we change the page_compressed format as follows:

FIL_PAGE_TYPE: The most significant bit will be set to indicate the page_compressed format. The least significant bits will contain the compressed page size, rounded up to a multiple of 256 bytes (see the sketch after this entry).

The checksum will be stored in the last 4 bytes of the page (whether it is the full page or a page_compressed page whose size is determined by FIL_PAGE_TYPE), covering all preceding bytes of the page. If encryption is used, then the page will be encrypted between compression and computing the checksum. For page_compressed, FIL_PAGE_LSN will not be repeated at the end of the page.

FSP_SPACE_FLAGS (already implemented as part of MDEV-12026): We will store the innodb_compression_algorithm that may be used to compress pages. Previously, the choice of algorithm was written to each compressed data page separately, and one would be unable to know in advance which compression algorithm(s) are used.

fil_space_t::full_crc32_page_compressed_len(): Determine if the page_compressed algorithm of the tablespace needs to know the exact length of the compressed data. If yes, we will reserve and write an extra byte for this right before the checksum.

buf_page_is_compressed(): Determine if a page uses page_compressed (in any innodb_checksum_algorithm).

fil_page_decompress(): Pass also fil_space_t::flags so that the format can be determined.

buf_page_is_zeroes(): Check if a page is full of zero bytes.

buf_page_full_crc32_is_corrupted(): Renamed from buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32, we always simply validate the checksum against the page contents, while the physical page size is explicitly specified by an unencrypted part of the page header.

buf_page_full_crc32_size(): Determine the size of a full_crc32 page.

buf_dblwr_check_page_lsn(): Make this a debug-only function, because it involves potentially costly lookups of fil_space_t.

create_table_info_t::check_table_options(), ha_innobase::check_if_supported_inplace_alter(): Do allow the creation of SPATIAL INDEX with full_crc32 also when page_compressed is used.

commit_cache_norebuild(): Preserve the compression algorithm when updating the page_compression_level.

dict_tf_to_fsp_flags(): Set the flags for the page compression algorithm. FIXME: Maybe there should be a table option page_compression_algorithm and a session variable to back it?
7 years ago
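Here is a sketch of the FIL_PAGE_TYPE encoding described above for full_crc32 page_compressed pages: bit 15 marks the page as compressed, and the remaining bits carry the compressed size rounded up to a multiple of 256 bytes. Storing that size in 256-byte units is an assumption of this sketch, as are the constant and helper names; the real code reads and writes the 16-bit field through InnoDB's big-endian accessors.

#include <cstdint>

constexpr uint16_t COMPRESSED_MARKER = uint16_t(1u << 15); // most significant bit

// Encode: round the compressed payload up to 256 bytes, store it in
// 256-byte units in the low bits, and set the marker bit.
inline uint16_t fcrc32_page_type(unsigned compressed_len) {
  const unsigned rounded = (compressed_len + 255u) & ~255u;
  return uint16_t(COMPRESSED_MARKER | (rounded >> 8));
}

// Decode: a set marker bit identifies a page_compressed page...
inline bool fcrc32_is_compressed(uint16_t page_type) {
  return (page_type & COMPRESSED_MARKER) != 0;
}

// ...and the remaining bits give back the physical size that the
// full_crc32 checksum (stored in the last 4 bytes) must cover.
inline unsigned fcrc32_compressed_size(uint16_t page_type) {
  return unsigned(page_type & 0x7FFFu) << 8;
}

Because the physical size is recoverable from this unencrypted header field, a checker can locate and validate the checksum without decompressing or decrypting the page, which is the point of the format.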
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums. We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in a way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32 and the value full_crc32 will be equivalent to crc32. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them. The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file. We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644. This is joint work with Marko Mäkelä.
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums. We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in a way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32 and the value full_crc32 will be equivalent to crc32. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them. The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file. We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644. This is joint work with Marko Mäkelä.
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-18644: Support full_crc32 for page_compressed This is a follow-up task to MDEV-12026, which introduced innodb_checksum_algorithm=full_crc32 and a simpler page format. MDEV-12026 did not enable full_crc32 for page_compressed tables, which we will be doing now. This is joint work with Thirunarayanan Balathandayuthapani. For innodb_checksum_algorithm=full_crc32 we change the page_compressed format as follows: FIL_PAGE_TYPE: The most significant bit will be set to indicate page_compressed format. The least significant bits will contain the compressed page size, rounded up to a multiple of 256 bytes. The checksum will be stored in the last 4 bytes of the page (whether it is the full page or a page_compressed page whose size is determined by FIL_PAGE_TYPE), covering all preceding bytes of the page. If encryption is used, then the page will be encrypted between compression and computing the checksum. For page_compressed, FIL_PAGE_LSN will not be repeated at the end of the page. FSP_SPACE_FLAGS (already implemented as part of MDEV-12026): We will store the innodb_compression_algorithm that may be used to compress pages. Previously, the choice of algorithm was written to each compressed data page separately, and one would be unable to know in advance which compression algorithm(s) are used. fil_space_t::full_crc32_page_compressed_len(): Determine if the page_compressed algorithm of the tablespace needs to know the exact length of the compressed data. If yes, we will reserve and write an extra byte for this right before the checksum. buf_page_is_compressed(): Determine if a page uses page_compressed (in any innodb_checksum_algorithm). fil_page_decompress(): Pass also fil_space_t::flags so that the format can be determined. buf_page_is_zeroes(): Check if a page is full of zero bytes. buf_page_full_crc32_is_corrupted(): Renamed from buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32, we always simply validate the checksum to the page contents, while the physical page size is explicitly specified by an unencrypted part of the page header. buf_page_full_crc32_size(): Determine the size of a full_crc32 page. buf_dblwr_check_page_lsn(): Make this a debug-only function, because it involves potentially costly lookups of fil_space_t. create_table_info_t::check_table_options(), ha_innobase::check_if_supported_inplace_alter(): Do allow the creation of SPATIAL INDEX with full_crc32 also when page_compressed is used. commit_cache_norebuild(): Preserve the compression algorithm when updating the page_compression_level. dict_tf_to_fsp_flags(): Set the flags for page compression algorithm. FIXME: Maybe there should be a table option page_compression_algorithm and a session variable to back it?
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums. We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in a way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32 and the value full_crc32 will be equivalent to crc32. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them. The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file. We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644. This is joint work with Marko Mäkelä.
7 years ago
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums. We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in a way: When either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what the parameter innodb_checksum_algorithm is assigned to. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32 and the value full_crc32 will be equivalent to crc32. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them. The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file. We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we did not yet implement the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644. This is joint work with Marko Mäkelä.
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge to its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so we had better introduce for the rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() requires that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us ot eliminate buf_page_t::real_size. Replace a condition with debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch. 
buf_pool_t::watch[]: Remove pointer indirection. buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero. buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name. buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function. buf_page_io_complete(): Split to buf_page_read_complete() and buf_page_write_complete(). buf_dblwr_t::in_use: Remove. buf_dblwr_t::buf_block_array: Add IORequest::flush_t. buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes(). buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t. buf_flush_freed_page(): Simplify the code. recv_sys_t::flush_lru: Renamed from flush_type and changed to bool. fil_read(), fil_write(): Replaced with direct use of fil_io(). fil_buffering_disabled(): Remove. Check srv_file_flush_method directly. fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller. fil_report_invalid_page_access(): Clean up the parameters. fil_io(): Return fil_io_t, which comprises fil_node_t and error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io(). fil_aio_callback(): Rewrite to replace buf_page_io_complete(). fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups. fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter. fil_close_tablespace(): Remove unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE. row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace(). logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close(). innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files(). fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability. thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub. Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
MDEV-15053 Reduce buf_pool_t::mutex contention User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0 and will no longer report the PAGE_STATE value READY_FOR_USE. We will remove some fields from buf_page_t and move much code to member functions of buf_pool_t and buf_page_t, so that the access rules of data members can be enforced consistently. Evicting or adding pages in buf_pool.LRU will remain covered by buf_pool.mutex. Evicting or adding pages in buf_pool.page_hash will remain covered by both buf_pool.mutex and the buf_pool.page_hash X-latch. After this fix, buf_pool.page_hash lookups can entirely avoid acquiring buf_pool.mutex, only relying on buf_pool.hash_lock_get() S-latch. Similarly, buf_flush_check_neighbors() can will rely solely on buf_pool.mutex, no buf_pool.page_hash latch at all. The buf_pool.mutex is rather contended in I/O heavy benchmarks, especially when the workload does not fit in the buffer pool. The first attempt to alleviate the contention was the buf_pool_t::mutex split in commit 4ed7082eefe56b3e97e0edefb3df76dd7ef5e858 which introduced buf_block_t::mutex, which we are now removing. Later, multiple instances of buf_pool_t were introduced in commit c18084f71b02ea707c6461353e6cfc15d7553bc6 and recently removed by us in commit 1a6f708ec594ac0ae2dd30db926ab07b100fa24b (MDEV-15058). UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool related debugging in otherwise non-debug builds has not been used for years. Instead, we have been using UNIV_DEBUG, which is enabled in CMAKE_BUILD_TYPE=Debug. buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on std::atomic and the buf_pool.page_hash latches, and in some cases depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before. We must always release buf_block_t::lock before invoking unfix() or io_unfix(), to prevent a glitch where a block that was added to the buf_pool.free list would apper X-latched. See commit c5883debd6ef440a037011c11873b396923e93c5 how this glitch was finally caught in a debug environment. We move some buf_pool_t::page_hash specific code from the ha and hash modules to buf_pool, for improved readability. buf_pool_t::close(): Assert that all blocks are clean, except on aborted startup or crash-like shutdown. buf_pool_t::validate(): No longer attempt to validate n_flush[] against the number of BUF_IO_WRITE fixed blocks, because buf_page_t::flush_type no longer exists. buf_pool_t::watch_set(): Replaces buf_pool_watch_set(). Reduce mutex contention by separating the buf_pool.watch[] allocation and the insert into buf_pool.page_hash. buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a buf_pool.page_hash latch. Replaces and extends buf_page_hash_lock_s_confirm() and buf_page_hash_lock_x_confirm(). buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES. buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads: Use Atomic_counter. buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out(). buf_pool_t::LRU_remove(): Remove a block from the LRU list and return its predecessor. Incorporates buf_LRU_adjust_hp(), which was removed. buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(), for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by BTR_DELETE_OP (purge), which is never invoked on temporary tables. buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments. buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition. 
buf_LRU_free_page(): Clarify the function comment. buf_flush_check_neighbor(), buf_flush_check_neighbors(): Rewrite the construction of the page hash range. We will hold the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64) consecutive lookups of buf_pool.page_hash. buf_flush_page_and_try_neighbors(): Remove. Merge to its only callers, and remove redundant operations in buf_flush_LRU_list_batch(). buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite. Do not acquire buf_pool.mutex, and iterate directly with page_id_t. ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined and avoids any loops. fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove. buf_flush_page(): Add a fil_space_t* parameter. Minimize the buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated atomically with the io_fix, and we will protect most buf_block_t fields with buf_block_t::lock. The function buf_flush_write_block_low() is removed and merged here. buf_page_init_for_read(): Use static linkage. Initialize the newly allocated block and acquire the exclusive buf_block_t::lock while not holding any mutex. IORequest::IORequest(): Remove the body. We only need to invoke set_punch_hole() in buf_flush_page() and nowhere else. buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type. This field is only used during a fil_io() call. That function already takes IORequest as a parameter, so we had better introduce for the rarely changing field. buf_block_t::init(): Replaces buf_page_init(). buf_page_t::init(): Replaces buf_page_init_low(). buf_block_t::initialise(): Initialise many fields, but keep the buf_page_t::state(). Both buf_pool_t::validate() and buf_page_optimistic_get() requires that buf_page_t::in_file() be protected atomically with buf_page_t::in_page_hash and buf_page_t::in_LRU_list. buf_page_optimistic_get(): Now that buf_block_t::mutex no longer exists, we must check buf_page_t::io_fix() after acquiring the buf_pool.page_hash lock, to detect whether buf_page_init_for_read() has been initiated. We will also check the io_fix() before acquiring hash_lock in order to avoid unnecessary computation. The field buf_block_t::modify_clock (protected by buf_block_t::lock) allows buf_page_optimistic_get() to validate the block. buf_page_t::real_size: Remove. It was only used while flushing pages of page_compressed tables. buf_page_encrypt(): Add an output parameter that allows us ot eliminate buf_page_t::real_size. Replace a condition with debug assertion. buf_page_should_punch_hole(): Remove. buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch(). Add the parameter size (to replace buf_page_t::real_size). buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page(). Add the parameter size (to replace buf_page_t::real_size). fil_system_t::detach(): Replaces fil_space_detach(). Ensure that fil_validate() will not be violated even if fil_system.mutex is released and reacquired. fil_node_t::complete_io(): Renamed from fil_node_complete_io(). fil_node_t::close_to_free(): Replaces fil_node_close_to_free(). Avoid invoking fil_node_t::close() because fil_system.n_open has already been decremented in fil_space_t::detach(). BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY. BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE, and distinguish dirty pages by buf_page_t::oldest_modification(). BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead. This state was only being used for buf_page_t that are in buf_pool.watch. 
buf_pool_t::watch[]: Remove pointer indirection. buf_page_t::in_flush_list: Remove. It was set if and only if buf_page_t::oldest_modification() is nonzero. buf_page_decrypt_after_read(), buf_corrupt_page_release(), buf_page_check_corrupt(): Change the const fil_space_t* parameter to const fil_node_t& so that we can report the correct file name. buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function. buf_page_io_complete(): Split to buf_page_read_complete() and buf_page_write_complete(). buf_dblwr_t::in_use: Remove. buf_dblwr_t::buf_block_array: Add IORequest::flush_t. buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of os_aio_wait_until_no_pending_writes(). buf_flush_write_complete(): Declare static, not global. Add the parameter IORequest::flush_t. buf_flush_freed_page(): Simplify the code. recv_sys_t::flush_lru: Renamed from flush_type and changed to bool. fil_read(), fil_write(): Replaced with direct use of fil_io(). fil_buffering_disabled(): Remove. Check srv_file_flush_method directly. fil_mutex_enter_and_prepare_for_io(): Return the resolved fil_space_t* to avoid a duplicated lookup in the caller. fil_report_invalid_page_access(): Clean up the parameters. fil_io(): Return fil_io_t, which comprises fil_node_t and error code. Always invoke fil_space_t::acquire_for_io() and let either the sync=true caller or fil_aio_callback() invoke fil_space_t::release_for_io(). fil_aio_callback(): Rewrite to replace buf_page_io_complete(). fil_check_pending_operations(): Remove a parameter, and remove some redundant lookups. fil_node_close_to_free(): Wait for n_pending==0. Because we no longer do an extra lookup of the tablespace between fil_io() and the completion of the operation, we must give fil_node_t::complete_io() a chance to decrement the counter. fil_close_tablespace(): Remove unused parameter trx, and document that this is only invoked during the error handling of IMPORT TABLESPACE. row_import_discard_changes(): Merged with the only caller, row_import_cleanup(). Do not lock up the data dictionary while invoking fil_close_tablespace(). logs_empty_and_mark_files_at_shutdown(): Do not invoke fil_close_all_files(), to avoid a !needs_flush assertion failure on fil_node_t::close(). innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files(). fil_close_all_files(): Invoke fil_flush_file_spaces() to ensure proper durability. thread_pool::unbind(): Fix a crash that would occur on Windows after srv_thread_pool->disable_aio() and os_file_close(). This fix was submitted by Vladislav Vaintroub. Thanks to Matthias Leich and Axel Schwenke for extensive testing, Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
5 years ago
MDEV-13103 Deal with page_compressed page corruption fil_page_decompress(): Replaces fil_decompress_page(). Allow the caller detect errors. Remove duplicated code. Use the "safe" instead of "fast" variants of decompression routines. fil_page_compress(): Replaces fil_compress_page(). The length of the input buffer always was srv_page_size (innodb_page_size). Remove printouts, and remove the fil_space_t* parameter. buf_tmp_buffer_t::reserved: Make private; the accessors acquire() and release() will use atomic memory access. buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter. Do not acquire any mutex. Remove the allocation of the buffers. buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf(): Refactored away from buf_pool_reserve_tmp_slot(). buf_page_decrypt_after_read(): Make static, and simplify the logic. Use the encryption buffer also for decompressing. buf_page_io_complete(), buf_dblwr_process(): Check more failures. fil_space_encrypt(): Simplify the debug checks. fil_space_t::printed_compression_failure: Remove. fil_get_compression_alg_name(): Remove. fil_iterate(): Allocate a buffer for compression and decompression only once, instead of allocating and freeing it for every page that uses compression, during IMPORT TABLESPACE. Also, validate the page checksum before decryption, and reduce the scope of some variables. fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused). AbstractCallback::operator()(): Remove the parameter 'offset'. The check for it in FetchIndexRootPages::operator() was basically redundant and dead code since the previous refactoring.
7 years ago
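The "let the caller detect errors" part of fil_page_decompress() reads naturally as a status-returning wrapper. A sketch under assumed names (the real signature may differ); decompress_safe() merely stands in for a bounds-checked routine such as LZ4_decompress_safe():

#include <cstddef>

// Assumed status values; the point is only that decompression failure is
// reported to the caller instead of being printed and swallowed inside
// the decompression routine.
enum decompress_status { DECOMPRESSED, NOT_COMPRESSED, CORRUPTED };

// Stand-in for a bounds-checked ("safe") decompressor; returns the
// decompressed length, or a negative value on error.
int decompress_safe(const char* src, char* dst, int src_len, int dst_cap);

decompress_status page_decompress_sketch(const char* src, int src_len,
                                         char* dst, int page_size)
{
  if (src_len <= 0 || src_len >= page_size)
    return NOT_COMPRESSED;      // no plausible compressed payload
  int n = decompress_safe(src, dst, src_len, page_size);
  return n == page_size ? DECOMPRESSED : CORRUPTED;
}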
MDEV-12026: Implement innodb_checksum_algorithm=full_crc32

MariaDB data-at-rest encryption (innodb_encrypt_tables) had repurposed the same unused data field that was repurposed in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN) field of SPATIAL INDEX. Because of this, MariaDB was unable to support encryption on SPATIAL INDEX pages. Furthermore, InnoDB page checksums skipped some bytes, and there are multiple variations and checksum algorithms. By default, InnoDB accepts all variations of all algorithms that ever existed. This unnecessarily weakens the page checksums.

We hereby introduce two more innodb_checksum_algorithm variants (full_crc32, strict_full_crc32) that are special in one respect: when either setting is active, newly created data files will carry a flag (fil_space_t::full_crc32()) indicating that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what value is assigned to the parameter innodb_checksum_algorithm. For old files, the old checksum algorithms will continue to be used. The value strict_full_crc32 will be equivalent to strict_crc32, and the value full_crc32 will be equivalent to crc32.

ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. They may be deprecated in the future. We do not want an unnecessary file format change for them.

The new full_crc32() format also cleans up the MariaDB tablespace flags. We will reserve flags to store the page_compressed compression algorithm and the compressed payload length, so that the checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there are no longer separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents as written to the file.

We do not make the new algorithm the default for two reasons. First, MariaDB 10.4.2 was a beta release, and the default values of parameters should not change after beta. Second, we have not yet implemented the full_crc32 format for page_compressed pages. This will be fixed in MDEV-18644.

This is joint work with Marko Mäkelä.
7 years ago
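To make the full_crc32 rule concrete (one CRC-32C over everything except the last 4 bytes, which hold the checksum), here is a self-contained sketch. The bitwise CRC is a slow stand-in for InnoDB's hardware-accelerated implementation, and the big-endian byte order mirrors mach_write_to_4():

#include <cstdint>
#include <cstddef>

// Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78): a slow
// but self-contained stand-in for InnoDB's optimized routine.
static uint32_t crc32c(const unsigned char* buf, size_t len)
{
  uint32_t crc = 0xFFFFFFFFU;
  for (size_t i = 0; i < len; i++)
  {
    crc ^= buf[i];
    for (int k = 0; k < 8; k++)
      crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1U)));
  }
  return ~crc;
}

// The checksum occupies the last 4 bytes; all preceding bytes are covered.
static void full_crc32_stamp(unsigned char* page, size_t page_size)
{
  uint32_t c = crc32c(page, page_size - 4);
  page[page_size - 4] = (unsigned char)(c >> 24);   // big-endian, like
  page[page_size - 3] = (unsigned char)(c >> 16);   // mach_write_to_4()
  page[page_size - 2] = (unsigned char)(c >> 8);
  page[page_size - 1] = (unsigned char)(c);
}

static bool full_crc32_is_valid(const unsigned char* page, size_t page_size)
{
  uint32_t stored = (uint32_t)page[page_size - 4] << 24
                  | (uint32_t)page[page_size - 3] << 16
                  | (uint32_t)page[page_size - 2] << 8
                  | (uint32_t)page[page_size - 1];
  return stored == crc32c(page, page_size - 4);
}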
MDEV-18644: Support full_crc32 for page_compressed

This is a follow-up task to MDEV-12026, which introduced innodb_checksum_algorithm=full_crc32 and a simpler page format. MDEV-12026 did not enable full_crc32 for page_compressed tables, which we will be doing now. This is joint work with Thirunarayanan Balathandayuthapani.

For innodb_checksum_algorithm=full_crc32 we change the page_compressed format as follows:

FIL_PAGE_TYPE: The most significant bit will be set to indicate the page_compressed format. The least significant bits will contain the compressed page size, rounded up to a multiple of 256 bytes. The checksum will be stored in the last 4 bytes of the page (whether it is the full page or a page_compressed page whose size is determined by FIL_PAGE_TYPE), covering all preceding bytes of the page. If encryption is used, then the page will be encrypted between compression and computing the checksum. For page_compressed, FIL_PAGE_LSN will not be repeated at the end of the page.

FSP_SPACE_FLAGS (already implemented as part of MDEV-12026): We will store the innodb_compression_algorithm that may be used to compress pages. Previously, the choice of algorithm was written to each compressed data page separately, and one would be unable to know in advance which compression algorithm(s) are used.

fil_space_t::full_crc32_page_compressed_len(): Determine if the page_compressed algorithm of the tablespace needs to know the exact length of the compressed data. If yes, we will reserve and write an extra byte for this right before the checksum.
buf_page_is_compressed(): Determine if a page uses page_compressed (in any innodb_checksum_algorithm).
fil_page_decompress(): Pass also fil_space_t::flags so that the format can be determined.
buf_page_is_zeroes(): Check if a page is full of zero bytes.
buf_page_full_crc32_is_corrupted(): Renamed from buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32, we always simply validate the checksum against the page contents, while the physical page size is explicitly specified by an unencrypted part of the page header.
buf_page_full_crc32_size(): Determine the size of a full_crc32 page.
buf_dblwr_check_page_lsn(): Make this a debug-only function, because it involves potentially costly lookups of fil_space_t.
create_table_info_t::check_table_options(), ha_innobase::check_if_supported_inplace_alter(): Do allow the creation of SPATIAL INDEX with full_crc32 also when page_compressed is used.
commit_cache_norebuild(): Preserve the compression algorithm when updating the page_compression_level.
dict_tf_to_fsp_flags(): Set the flags for the page compression algorithm.

FIXME: Maybe there should be a table option page_compression_algorithm and a session variable to back it?
7 years ago
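The FIL_PAGE_TYPE encoding above can be sketched as follows. Storing the rounded-up size in 256-byte units is an assumption made here so that it fits the 16-bit field together with the marker bit; the constant and function names are illustrative:

#include <cstdint>

// Bit 15 of FIL_PAGE_TYPE marks a full_crc32 page_compressed page; bits
// 0..14 are assumed to hold the payload size, rounded up to a multiple of
// 256 bytes and expressed in 256-byte units.
constexpr uint16_t PAGE_COMPRESSED_MARKER = 1U << 15;

constexpr uint16_t encode_page_type(uint32_t compressed_len)
{
  return uint16_t(PAGE_COMPRESSED_MARKER | ((compressed_len + 255) >> 8));
}

constexpr bool is_page_compressed(uint16_t type)
{
  return (type & PAGE_COMPRESSED_MARKER) != 0;
}

constexpr uint32_t physical_size(uint16_t type, uint32_t page_size)
{
  // For page_compressed, the checksum covers the physical (compressed)
  // size; for everything else, the full page.
  return is_page_compressed(type)
      ? uint32_t(type & ~PAGE_COMPRESSED_MARKER) << 8
      : page_size;
}

// Example: encode_page_type(1000) sets the marker and stores 4, which
// physical_size() turns back into 1024 (1000 rounded up to 256 bytes).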
MDEV-23855: Remove fil_system.LRU and reduce fil_system.mutex contention

Also fixes MDEV-23929: innodb_flush_neighbors is not being ignored for the system tablespace on SSD.

When the maximum configured number of files is exceeded, InnoDB will close data files. We used to maintain a fil_system.LRU list and a counter fil_node_t::n_pending to achieve this, at the huge cost of multiple fil_system.mutex operations per I/O operation.

fil_node_open_file_low(): Implement a FIFO replacement policy: the last opened file will be moved to the end of fil_system.space_list, and files will be closed from the start of the list. However, we will not move tablespaces in fil_system.space_list while i_s_tablespaces_encryption_fill_table() is executing (producing output for INFORMATION_SCHEMA.INNODB_TABLESPACES_ENCRYPTION), because doing so could cause the information of some tablespaces to go missing. We also avoid this in mariabackup --backup, because datafiles_iter_next() assumes that the ordering is not changed.
IORequest: Fold more parameters into IORequest::type.
fil_space_t::io(): Replaces fil_io().
fil_space_t::flush(): Replaces fil_flush().
OS_AIO_IBUF: Remove. We will always issue synchronous reads of the change buffer pages in buf_read_page_low(). We will always ignore some errors for background reads. This should reduce fil_system.mutex contention a little.
fil_node_t::complete_write(): Replaces fil_node_t::complete_io(). On both read and write completion, fil_space_t::release_for_io() will have to be called.
fil_space_t::io(): Do not acquire fil_system.mutex in the normal code path.
xb_delta_open_matching_space(): Do not try to open the system tablespace, which was already opened. This fixes a file sharing violation in mariabackup --prepare --incremental.

Reviewed by: Vladislav Vaintroub
5 years ago
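A toy model of the FIFO policy described above, using a plain std::list in place of fil_system.space_list; all names and the open-file accounting are illustrative only:

#include <cstddef>
#include <list>
#include <string>

struct toy_space { std::string name; bool is_open = false; };

struct toy_fil_system
{
  std::list<toy_space> space_list;   // stand-in for fil_system.space_list
  std::size_t n_open = 0;
  std::size_t max_open = 2;          // stand-in for innodb_open_files

  // Victims are taken from the start of the list.
  void close_from_front()
  {
    for (toy_space& s : space_list)
      if (s.is_open) { s.is_open = false; --n_open; return; }
  }

  void open_file(std::list<toy_space>::iterator it)
  {
    while (n_open >= max_open)
      close_from_front();
    it->is_open = true;
    ++n_open;
    // The last opened file moves to the end of the list (FIFO order),
    // with no per-I/O bookkeeping needed afterwards.
    space_list.splice(space_list.end(), space_list, it);
  }
};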
MDEV-33277 In-place upgrade causes invalid AUTO_INCREMENT values
MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only

As noted in commit d0ef1aaf618452efea5a2e74f6d86ed0a586b63e, MySQL as well as older versions of MariaDB server would, during ALTER TABLE ... IMPORT TABLESPACE, write bogus values to the PAGE_MAX_TRX_ID field on pages of the clustered index, instead of letting that field remain 0. In commit 8777458a6eb73ac1d7d864ebac390ea7039e21c1 this field was repurposed for PAGE_ROOT_AUTO_INC in the clustered index root page.

To avoid trouble when upgrading from MySQL or older versions of MariaDB, we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC when opening a table for the first time from the SQL layer.

btr_read_autoinc_with_fallback(): Add the parameters mysql_version and max, to indicate the TABLE_SHARE::mysql_version of the .frm file and the maximum value allowed for the type of the AUTO_INCREMENT column. In case the table was originally created in MySQL or an older version of MariaDB, also read the maximum value of the AUTO_INCREMENT column from the table, and reset the PAGE_ROOT_AUTO_INC if it is above the limit.
dict_table_t::get_index(const dict_col_t &) const: Find an index that starts with the specified column.
ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB needs upgrading but is in read-only mode. In this way, the call to update_frm_version() will be skipped.
row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by Debarun Banerjee.

The changes outside InnoDB were developed by Michael 'Monty' Widenius:
Added the print_check_msg() service for easy reporting of check/repair messages in ENGINE=Aria and ENGINE=InnoDB.
Fixed CHECK TABLE so that it does not update the .frm file under --read-only.
Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to store state from handler::check_for_upgrade().

Reviewed by: Debarun Banerjee
2 years ago
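The PAGE_ROOT_AUTO_INC repair reduces to a detect-and-rescan rule, sketched below. The helper scan_col_max() is hypothetical, standing in for reading the current maximum of the AUTO_INCREMENT column on the slow path, and old_origin stands for "the .frm mysql_version says MySQL or an older MariaDB created the table":

#include <cstdint>

// Hypothetical stand-in: scan the table for the current maximum of the
// AUTO_INCREMENT column (only done when repair is suspected).
uint64_t scan_col_max();

// 'stored' is PAGE_ROOT_AUTO_INC as read from the clustered index root
// page; 'col_max' is the maximum value of the column type.
uint64_t repaired_autoinc(uint64_t stored, uint64_t col_max, bool old_origin)
{
  if (old_origin && stored > col_max)
  {
    // Likely PAGE_MAX_TRX_ID residue from before the field was
    // repurposed: recompute from the data instead of trusting it.
    uint64_t m = scan_col_max();
    return m < col_max ? m + 1 : col_max;
  }
  return stored;
}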
MDEV-12266: Change dict_table_t::space to fil_space_t*

InnoDB always keeps all tablespaces in the fil_system cache. The fil_system.LRU is only for closing file handles; the fil_space_t and fil_node_t for all data files will remain in main memory. Between startup and shutdown, they can only be created and removed by DDL statements. Therefore, we can let dict_table_t::space point directly to the fil_space_t.

dict_table_t::space_id: A numeric tablespace ID for the corner cases where we do not have a tablespace. The most prominent examples are ALTER TABLE...DISCARD TABLESPACE or a missing or corrupted file.

There are a few functional differences; most notably:
(1) DROP TABLE will delete matching .ibd and .cfg files, even if they were not attached to the data dictionary.
(2) Some error messages will report file names instead of numeric IDs.

There are still many functions that use numeric tablespace IDs instead of fil_space_t*, and many functions could be converted to fil_space_t member functions. Also, Tablespace and Datafile should be merged with fil_space_t and fil_node_t. page_id_t and buf_page_get_gen() could use fil_space_t& instead of a numeric ID, and after moving to a single buffer pool (MDEV-15058), buf_pool_t::page_hash could be moved to fil_space_t::page_hash.

FilSpace: Remove. Only a few calls to fil_space_acquire() will remain, and gradually they should be removed.
mtr_t::set_named_space_id(ulint): Renamed from set_named_space(), to prevent accidental calls to this slower function. Very few callers remain.
fseg_create(), fsp_reserve_free_extents(): Take fil_space_t* as a parameter instead of a space_id.
fil_space_t::rename(): Wrapper for fil_rename_tablespace_check(), fil_name_write_rename(), fil_rename_tablespace(). Mariabackup passes the parameter log=false; InnoDB passes log=true.
dict_mem_table_create(): Take fil_space_t* instead of space_id as a parameter.
dict_process_sys_tables_rec_and_mtr_commit(): Replace the parameter 'status' with 'bool cached'.
dict_get_and_save_data_dir_path(): Avoid copying the fil_node_t::name.
fil_ibd_open(): Return the tablespace.
fil_space_t::set_imported(): Replaces fil_space_set_imported().
truncate_t: Change many member function parameters to fil_space_t*, and remove page_size parameters.
row_truncate_prepare(): Merge into its only caller.
row_drop_table_from_cache(): Assert that the table is persistent.
dict_create_sys_indexes_tuple(): Write SYS_INDEXES.SPACE=FIL_NULL if the tablespace has been discarded.
row_import_update_discarded_flag(): Remove a constant parameter.
8 years ago
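A stripped-down sketch of the two dict_table_t members described above; the struct name and helper are illustrative, not the real declarations:

#include <cstdint>

struct fil_space_t;   // resident in the fil_system cache between DDL events

// The pointer is used directly when the tablespace exists; the numeric ID
// remains meaningful in the corner cases (DISCARD TABLESPACE, missing or
// corrupted file) where it does not.
struct dict_table_sketch
{
  fil_space_t* space = nullptr;
  uint32_t space_id = 0;

  bool tablespace_missing() const { return space == nullptr; }
};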
MDEV-12219 Discard temporary undo logs at transaction commit

Starting with MySQL 5.7, temporary tables in InnoDB are handled differently from persistent tables. Because temporary tables are private to a connection, concurrency control and multi-versioning (MVCC) are not applicable. For performance reasons, purge is disabled as well. Rollback is supported for temporary tables; that is why we have the temporary undo logs in the first place.

Because MVCC and purge are disabled for temporary tables, we should discard all temporary undo logs already at transaction commit, just like we discard the persistent insert_undo logs. Before this change, update_undo logs were being preserved.

trx_temp_undo_t: A wrapper for temporary undo logs, comprising a rollback segment and a single temporary undo log (see the sketch after this entry).

trx_rsegs_t::m_noredo: Use trx_temp_undo_t. (Instead of insert_undo, update_undo, there will be a single undo.)

trx_is_noredo_rseg_updated(), trx_is_rseg_assigned(): Remove.

trx_undo_add_page(): Remove the parameter undo_ptr. Acquire and release the rollback segment mutex inside the function.

trx_undo_free_last_page(): Remove the parameter trx.

trx_undo_truncate_end(): Remove the parameter trx, and add the parameter is_temp. Clean up the code a bit.

trx_undo_assign_undo(): Split the parameter undo_ptr into rseg, undo.

trx_undo_commit_cleanup(): Renamed from trx_undo_insert_cleanup(). Replace the parameter undo_ptr with undo. This will discard the temporary undo or insert_undo log at commit/rollback.

trx_purge_add_update_undo_to_history(), trx_undo_update_cleanup(): Remove 3 parameters. Always operate on the persistent update_undo.

trx_serialise(): Renamed from trx_serialisation_number_get().

trx_write_serialisation_history(): Simplify the code flow. If there are no persistent changes, do not update MONITOR_TRX_COMMIT_UNDO.

trx_commit_in_memory(): Simplify the logic, and add assertions.

trx_undo_page_report_modify(): Keep a direct reference to the persistent update_undo log.

trx_undo_report_row_operation(): Simplify some code. Always assign TRX_UNDO_INSERT for temporary undo logs.

trx_prepare_low(): Keep only one parameter. Prepare all 3 undo logs.

trx_roll_try_truncate(): Remove the parameter undo_ptr. Try to truncate all 3 undo logs of the transaction.

trx_roll_pop_top_rec_of_trx_low(): Remove.

trx_roll_pop_top_rec_of_trx(): Remove the redundant parameter trx->roll_limit. Clear roll_limit when exhausting the undo logs. Consider all 3 undo logs at once, prioritizing the persistent undo logs.

row_undo(): Minor cleanup. Let trx_roll_pop_top_rec_of_trx() reset the trx->roll_limit.
9 years ago
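A minimal sketch of the wrapper introduced above; only the pairing of one rollback segment with one temporary undo log is taken from the commit message, and the pointer types are the obvious assumptions:

	/* Temporary undo: one rollback segment plus the single undo log
	that replaces the former insert_undo/update_undo pair. */
	struct trx_temp_undo_t {
		trx_rseg_t*	rseg;	/* assigned rollback segment */
		trx_undo_t*	undo;	/* the only temporary undo log */
	};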
MDEV-12288 Reset DB_TRX_ID when the history is removed, to speed up MVCC

Let InnoDB purge reset DB_TRX_ID,DB_ROLL_PTR when the history is removed.

[TODO: It appears that the resetting is not taking place as often as it could be. We should test that a simple INSERT should eventually cause row_purge_reset_trx_id() to be invoked unless DROP TABLE is invoked soon enough.]

The InnoDB clustered index record system columns DB_TRX_ID,DB_ROLL_PTR are used by multi-versioning. After the history is no longer needed, these columns can safely be reset to 0 and 1<<55 (to indicate a fresh insert). When a reader sees 0 in the DB_TRX_ID column, it can instantly determine that the record is present in the read view. There is no need to acquire the transaction system mutex to check if the transaction exists, because writes can never be conducted by a transaction whose ID is 0.

The persistent InnoDB undo log used to be split into two parts: insert_undo and update_undo. The insert_undo log was discarded at transaction commit or rollback, and the update_undo log was processed by the purge subsystem. As part of this change, we will only generate a single undo log for new transactions, and the purge subsystem will reset the DB_TRX_ID whenever a clustered index record is touched. That is, all persistent undo logs will be preserved at transaction commit or rollback, to be removed by purge.

The InnoDB redo log format is changed in two ways: We remove the redo log record type MLOG_UNDO_HDR_REUSE, and we introduce the MLOG_ZIP_WRITE_TRX_ID record for updating the DB_TRX_ID,DB_ROLL_PTR in a ROW_FORMAT=COMPRESSED table.

This is also changing the format of persistent InnoDB data files: undo logs and clustered index leaf page records. It will still be possible via import and export to exchange data files with earlier versions of MariaDB. The change to clustered index leaf page records is simple: we allow DB_TRX_ID to be 0.

When it comes to the undo log, we must be able to upgrade from earlier MariaDB versions after a clean shutdown (no redo log to apply). While it would be nice to perform a slow shutdown (innodb_fast_shutdown=0) before an upgrade, to empty the undo logs, we cannot assume that this has been done. So, separate insert_undo logs may exist for recovered uncommitted transactions. These transactions may be automatically rolled back, or they may be in XA PREPARE state, in which case InnoDB will preserve the transaction until an explicit XA COMMIT or XA ROLLBACK.

Upgrade has been tested by starting up MariaDB 10.2 with
./mysql-test-run --manual-gdb innodb.read_only_recovery
and then starting up this patched server with and without --innodb-read-only.

trx_undo_ptr_t::undo: Renamed from update_undo.
trx_undo_ptr_t::old_insert: Renamed from insert_undo.
trx_rseg_t::undo_list: Renamed from update_undo_list.
trx_rseg_t::undo_cached: Merged from update_undo_cached and insert_undo_cached.
trx_rseg_t::old_insert_list: Renamed from insert_undo_list.

row_purge_reset_trx_id(): New function to reset the columns. This will be called for all undo processing in purge that does not remove the clustered index record.

trx_undo_update_rec_get_update(): Allow trx_id=0 when copying the old DB_TRX_ID of the record to the undo log.

ReadView::changes_visible(): Allow id==0. (Return true for it. This is what speeds up the MVCC; see the sketch after this entry.)

row_vers_impl_x_locked_low(), row_vers_build_for_semi_consistent_read(): Implement a fast path for DB_TRX_ID=0.

Always initialize the TRX_UNDO_PAGE_TYPE to 0. Remove undo->type.

MLOG_UNDO_HDR_REUSE: Remove. This changes the redo log format!

innobase_start_or_create_for_mysql(): Set srv_undo_sources before starting any transactions.

The parsing of the MLOG_ZIP_WRITE_TRX_ID record was successfully tested by running the following:
./mtr --parallel=auto --mysqld=--debug=d,ib_log innodb_zip.bug56680
grep MLOG_ZIP_WRITE_TRX_ID var/*/log/mysqld.1.err
8 years ago
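The fast path that this enables in the visibility check can be sketched as follows. This is a simplified illustration of the ReadView::changes_visible() behaviour described above, not the actual signature; the snapshot comparison is elided into a hypothetical helper:

	bool changes_visible(trx_id_t id) const
	{
		if (id == 0) {
			/* The history was purged and DB_TRX_ID was
			reset: this record version is visible to every
			read view, with no transaction lookup needed. */
			return true;
		}

		/* Otherwise compare id against the read view limits,
		as before (hypothetical helper). */
		return visible_by_snapshot(id);
	}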
MDEV-18543 IMPORT TABLESPACE fails after instant DROP COLUMN

ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of the operation. So, cases where the source table has some metadata and the destination table doesn't now work fine. DISCARD already removes instant metadata, so importing a normal table into an instant table worked fine before this patch.

decrypt_decompress(): decrypts and decompresses a page if needed

handle_instant_metadata(): this should be the first thing to read from the source table. Basically, it applies instant metadata to the destination dict_table_t object. This is also the first place where the FSP flags are read, so all checks of them were moved into this function.

PageConverter::update_index_page(): it no longer reads instant metadata; that logic was moved into handle_instant_metadata().

row_import::match_flags(): this is the first part of row_import::match_schema(). As a separate function it can be used by handle_instant_metadata().

fil_space_t::is_full_crc32_compressed(): added a convenience function

ha_innobase::discard_or_import_tablespace(): do not reload the table definition to read instant metadata, because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166a9c76eb3546356aabfd2dbc759671cd0

ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit()

full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happen. Thus, SHOW CREATE TABLE shows a slightly older table definition.
4 years ago
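make_scope_exit() (declared in scope.h, which row0import.cc includes) is a conventional scope guard; a hedged usage sketch, in which the cleanup body is an invented placeholder:

	{
		/* Arrange for cleanup to run when control leaves the
		scope, whether the import step succeeds or fails. */
		auto cleanup = make_scope_exit([&]() { free_buffers(); });

		/* ... fallible import work ... */
	}	/* free_buffers() runs here */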
MDEV-12253: Buffer pool blocks are accessed after they have been freed

The problem was that bpage was referenced after it was already freed from the LRU. Fixed by adding a new variable encrypted that is passed down to buf_page_check_corrupt() and used in buf_page_get_gen() to stop processing the page read.

This patch should also address the following test failures and bugs:
MDEV-12419: IMPORT should not look up tablespace in PageConverter::validate(). This is now removed.
MDEV-10099: encryption.innodb_onlinealter_encryption fails sporadically in buildbot
MDEV-11420: encryption.innodb_encryption-page-compression failed in buildbot
MDEV-11222: encryption.encrypt_and_grep failed in buildbot on P8

Removed dict_table_t::is_encrypted and dict_table_t::ibd_file_missing and replaced these with dict_table_t::file_unreadable. The table's .ibd file is missing if fil_get_space(space_id) returns NULL, and encrypted if not. Removed the dict_table_t::is_corrupted field.

Ported the FilSpace class from 10.2 and used it in buf_page_check_corrupt(), buf_page_decrypt_after_read(), buf_page_encrypt_before_write(), buf_dblwr_process(), buf_read_page(), dict_stats_save_defrag_stats().

Added test cases where an encrypted page could be read while doing redo log crash recovery. Also added a test case for row compressed blobs.

btr_cur_open_at_index_side_func(), btr_cur_open_at_rnd_pos_func(): Avoid referencing a block that is NULL.

buf_page_get_zip(): Issue an error if the page read fails.

buf_page_get_gen(): Use dberr_t for error detection and do not reference bpage after we have freed it.

buf_mark_space_corrupt(): Remove bpage from the LRU also when it is encrypted.

buf_page_check_corrupt(): @return DB_SUCCESS if the page has been read and is not corrupted; DB_PAGE_CORRUPTED if the page is corrupted based on the checksum check; DB_DECRYPTION_FAILED if the post-encryption checksum matches but after decryption the normal page checksum does not match. In the read case only DB_SUCCESS is possible. (A sketch of this contract follows below.)

buf_page_io_complete(): Use dberr_t for error handling.

buf_flush_write_block_low(), buf_read_ahead_random(), buf_read_page_async(), buf_read_ahead_linear(), buf_read_ibuf_merge_pages(), buf_read_recv_pages(), fil_aio_wait(): Issue an error if the page read fails.

btr_pcur_move_to_next_page(): Do not reference the page if it is NULL.

Introduced dict_table_t::is_readable() and dict_index_t::is_readable(), which return true if the tablespace exists and pages read from the tablespace are not corrupted and page decryption did not fail.

Removed buf_page_t::key_version. After page decryption the key version is not removed from the page frame. For unencrypted pages, the old key_version is removed at buf_page_encrypt_before_write().

dict_stats_update_transient_for_index(), dict_stats_update_transient(): Do not continue if table decryption failed or the table is corrupted.

dict0stats.cc: Introduced a dict_stats_report_error function to avoid code duplication.

fil_parse_write_crypt_data(): Check that the key read from the redo log entry is found in the encryption plugin, and if it is not, refuse to start.

PageConverter::validate(): Removed access to fil_space_t, as the tablespace is not available during import.

Fixed the error code in the innodb.innodb test. Merged the test cases innodb-bad-key-change5 and innodb-bad-key-shutdown into innodb-bad-key-change2. Removed the innodb-bad-key-change5 test. Reduced unnecessary complexity in some long-running tests.

Removed the fil_inc_pending_ops(), fil_decr_pending_ops(), fil_get_first_space(), fil_get_next_space(), fil_get_first_space_safe(), fil_get_next_space_safe() functions.

fil_space_verify_crypt_checksum(): Fixed a bug found using ASAN where the FIL_PAGE_END_LSN_OLD_CHECKSUM field was incorrectly accessed for row compressed tables; this was an out-of-page-frame access, and an incorrect function was being called for compressed tables.

Added new tests for discard, rename table and drop (we should allow them even when page decryption fails). ALTER TABLE ... RENAME is not allowed. Added a test for restart with innodb-force-recovery=1 when a page read during redo recovery can't be decrypted. Added a test for a corrupted table where both the page data and FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION are corrupted.

Adjusted the test case innodb_bug14147491 so that it no longer expects a crash; instead the table is just mostly unusable.

fil0fil.h: fil_space_acquire_low is not a visible function, and fil_space_acquire and fil_space_acquire_silent are inline functions. The FilSpace class uses fil_space_acquire_low directly.

recv_apply_hashed_log_recs() does not return anything.
9 years ago
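The three-way contract of buf_page_check_corrupt() described above can be sketched like this; the signature and the checksum helper are simplified assumptions, and only the error codes and their meaning come from the commit message:

	dberr_t check_page_after_read(const byte* page, bool was_encrypted)
	{
		if (page_checksum_ok(page)) {	/* hypothetical helper */
			return DB_SUCCESS;
		}

		/* If the encrypted image verified but the decrypted
		page does not, the key is wrong or decryption failed;
		this is distinct from plain on-disk corruption. */
		return was_encrypted
			? DB_DECRYPTION_FAILED : DB_PAGE_CORRUPTED;
	}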
MDEV-21907: InnoDB: Enable -Wconversion on clang and GCC

The -Wconversion in GCC seems to be stricter than in clang. GCC at least since version 4.4.7 issues truncation warnings for assignments to bitfields, while clang 10 appears to only issue warnings when the sizes in bytes rounded to the nearest integer powers of 2 are different.

Before GCC 10.0.0, -Wconversion required more casts and would not allow some operations, such as x<<=1 or x+=1 on a data type that is narrower than int.

GCC 5 (but not GCC 4, GCC 6, or any later version) is complaining about x|=y even when x and y are compatible types that are narrower than int. Hence, we must rewrite some x|=y as x=static_cast<byte>(x|y) or similar, or we must disable -Wconversion.

In GCC 6 and later, the warning for assigning wider to bitfields that are narrower than 8, 16, or 32 bits can be suppressed by applying a bitwise & with the exact bitmask of the bitfield. For older GCC, we must disable -Wconversion for GCC 4 or 5 in such cases.

The bitwise negation operator appears to promote short integers to a wider type, and hence we must add explicit truncation casts around them. Microsoft Visual C does not allow a static_cast to truncate a constant, such as static_cast<byte>(1) truncating int. Hence, we will use the constructor-style cast byte(~1) for such cases.

This has been tested at least with GCC 4.8.5, 5.4.0, 7.4.0, 9.2.1, 10.0.0, clang 9.0.1, 10.0.0, and MSVC 14.22.27905 (Microsoft Visual Studio 2019) on 64-bit and 32-bit targets (IA-32, AMD64, POWER 8, POWER 9, ARMv8).
6 years ago
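Concretely, the rewrites described above look like the following fragments (byte is assumed to be an 8-bit unsigned type, as in InnoDB):

	byte	x = 0x80, y = 0x01;

	/* GCC 5 rejects x |= y for types narrower than int;
	spell the truncation out. */
	x = static_cast<byte>(x | y);

	/* MSVC refuses a truncating static_cast on a constant,
	so use the constructor-style cast. */
	byte	mask = byte(~1);

	/* In GCC 6 and later, masking with the exact bitmask of a
	bitfield suppresses the truncation warning. */
	struct flags_t { unsigned n:3; };
	void set(flags_t& f, unsigned v) { f.n = v & 7; }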
MDEV-6076 Persistent AUTO_INCREMENT for InnoDB

This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with the notable difference that the file format changes are limited to repurposing a previously unused data field in B-tree pages.

For persistent InnoDB tables, write the last used AUTO_INCREMENT value to the root page of the clustered index, in the previously unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC. Unlike some other previously unused InnoDB data fields, this one was actually always zero-initialized, at least since MySQL 3.23.49.

The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the root page. The SX latch will allow concurrent read access to the root page. (The field PAGE_ROOT_AUTO_INC will only be read on the first-time call to ha_innobase::open() from the SQL layer. The PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so read/write races are not possible.)

During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level function btr_cur_search_to_nth_level(), adding no extra page access. [Adaptive hash index lookup will be disabled during INSERT.]

If some rare UPDATE modifies an AUTO_INCREMENT column, the PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in ha_innobase::update_row().

When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC field.

During ALTER TABLE, the initial AUTO_INCREMENT value will be copied from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will update PAGE_ROOT_AUTO_INC in real time.

innodb_col_no(): Determine the dict_table_t::cols[] element index corresponding to a Field of a non-virtual column. (The MySQL 5.7 implementation of virtual columns breaks the 1:1 relationship between Field::field_index and dict_table_t::cols[]. Virtual columns are omitted from dict_table_t::cols[]. Therefore, we must translate the field_index of AUTO_INCREMENT columns into an index of dict_table_t::cols[].)

Upgrade from old data files: By default, the AUTO_INCREMENT sequence in old data files would appear to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain the value 0 in each clustered index page. In new data files, PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain any AUTO_INCREMENT column. For backward compatibility, we use the old method of SELECT MAX(auto_increment_column) for initializing the sequence.

btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format data file. (A sketch of the read follows after this entry.)

btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc() that will resort to reading MAX(auto_increment_column) for data files that did not use AUTO_INCREMENT yet. It was manually tested that during the execution of innodb.autoinc_persist the compatibility logic is not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty clustered index root pages).

initialize_auto_increment(): Replaces ha_innobase::innobase_initialize_autoinc(). This initializes the AUTO_INCREMENT metadata. Only called from ha_innobase::open().

ha_innobase::info_low(): Do not try to lazily initialize dict_table_t::autoinc. It must already have been initialized by ha_innobase::open() or ha_innobase::create().

Note: The adjustments to class ha_innopart were not tested, because the source code (native InnoDB partitioning) is not being compiled.
9 years ago
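The read side amounts to fetching one 8-byte big-endian field from the clustered index root page. A hedged sketch using InnoDB's own accessors (the function name is invented; PAGE_HEADER, PAGE_MAX_TRX_ID and mach_read_from_8() are the standard InnoDB names):

	/* Read the persisted AUTO_INCREMENT value from the root page.
	The 8-byte PAGE_MAX_TRX_ID slot in the page header is aliased
	as PAGE_ROOT_AUTO_INC there. */
	ib_uint64_t read_root_autoinc(const buf_block_t* root)
	{
		return mach_read_from_8(PAGE_HEADER + PAGE_MAX_TRX_ID
					+ root->frame);
	}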
MDEV-33277 In-place upgrade causes invalid AUTO_INCREMENT values
MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only

As noted in commit d0ef1aaf618452efea5a2e74f6d86ed0a586b63e, MySQL as well as older versions of MariaDB server would, during ALTER TABLE ... IMPORT TABLESPACE, write bogus values to the PAGE_MAX_TRX_ID field of pages of the clustered index, instead of letting that field remain 0. In commit 8777458a6eb73ac1d7d864ebac390ea7039e21c1 this field was repurposed for PAGE_ROOT_AUTO_INC in the clustered index root page.

To avoid trouble when upgrading from MySQL or older versions of MariaDB, we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC when opening a table for the first time from the SQL layer.

btr_read_autoinc_with_fallback(): Add the parameters mysql_version, max to indicate the TABLE_SHARE::mysql_version of the .frm file and the maximum value allowed for the type of the AUTO_INCREMENT column. In case the table was originally created in MySQL or an older version of MariaDB, read also the maximum value of the AUTO_INCREMENT column from the table and reset the PAGE_ROOT_AUTO_INC if it is above the limit. (A sketch of this check follows after this entry.)

dict_table_t::get_index(const dict_col_t&) const: Find an index that starts with the specified column.

ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB needs upgrading but is in read-only mode. In this way, the call to update_frm_version() will be skipped.

row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by Debarun Banerjee.

The changes outside InnoDB were developed by Michael 'Monty' Widenius:
Added a print_check_msg() service for easy reporting of check/repair messages in ENGINE=Aria and ENGINE=InnoDB.
Fixed that CHECK TABLE does not update the .frm file under --read-only.
Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to store state from handler::check_for_upgrade().

Reviewed by: Debarun Banerjee
2 years ago
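The sanity check added here amounts to the following sketch; max is the largest value representable by the AUTO_INCREMENT column's type, and read_root_autoinc() and the two other helpers are hypothetical stand-ins for the actual page accessors:

	ib_uint64_t autoinc = read_root_autoinc(root);

	if (autoinc > max) {
		/* A counter above the column's maximum cannot be real;
		it is a bogus PAGE_MAX_TRX_ID written by an old IMPORT.
		Fall back to scanning for MAX(auto_increment_column). */
		autoinc = read_max_autoinc_column(table);
		reset_root_autoinc(root, autoinc);
	}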
/*****************************************************************************

Copyright (c) 2012, 2016, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2015, 2022, MariaDB Corporation.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA

*****************************************************************************/

/**************************************************//**
@file row/row0import.cc
Import a tablespace to a running instance.

Created 2012-02-08 by Sunny Bains.
*******************************************************/

#include "row0import.h"
#include "btr0pcur.h"
#ifdef BTR_CUR_HASH_ADAPT
# include "btr0sea.h"
#endif
#include "que0que.h"
#include "dict0boot.h"
#include "dict0load.h"
#include "ibuf0ibuf.h"
#include "pars0pars.h"
#include "row0sel.h"
#include "row0mysql.h"
#include "srv0start.h"
#include "row0quiesce.h"
#include "fil0pagecompress.h"
#include "trx0undo.h"
#include "row0row.h"
#ifdef HAVE_LZO
#include "lzo/lzo1x.h"
#endif
#ifdef HAVE_SNAPPY
#include "snappy-c.h"
#endif
#include "log.h"
#include "table.h"
#include "ha_innodb.h"
#include "scope.h"

#include <vector>

#ifdef HAVE_MY_AES_H
#include <my_aes.h>
#endif

using st_::span;

/** The size of the buffer to use for IO.
@param n physical page size
@return number of pages */
#define IO_BUFFER_SIZE(n)	((1024 * 1024) / (n))
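/* For example, with the default 16KiB page size, IO_BUFFER_SIZE(16384)
evaluates to 64, so the import iterator processes the tablespace in
1MiB batches of whole pages. */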
/** For gathering stats on records during phase I */
struct row_stats_t {
	ulint	m_n_deleted;		/*!< Number of deleted records
					found in the index */

	ulint	m_n_purged;		/*!< Number of records purged
					optimistically */

	ulint	m_n_rows;		/*!< Number of rows */

	ulint	m_n_purge_failed;	/*!< Number of deleted rows
					that could not be purged */
};
/** Index information required by IMPORT. */
struct row_index_t {
	index_id_t	m_id;		/*!< Index id of the table
					in the exporting server */
	byte*		m_name;		/*!< Index name */

	ulint		m_space;	/*!< Space where it is placed */

	ulint		m_page_no;	/*!< Root page number */

	ulint		m_type;		/*!< Index type */

	ulint		m_trx_id_offset;
					/*!< Relevant only for clustered
					indexes, offset of transaction
					id system column */

	ulint		m_n_user_defined_cols;
					/*!< User defined columns */

	ulint		m_n_uniq;	/*!< Number of columns that can
					uniquely identify the row */

	ulint		m_n_nullable;	/*!< Number of nullable
					columns */

	ulint		m_n_fields;	/*!< Total number of fields */

	dict_field_t*	m_fields;	/*!< Index fields */

	const dict_index_t*
			m_srv_index;	/*!< Index instance in the
					importing server */

	row_stats_t	m_stats;	/*!< Statistics gathered during
					the import phase */
};
/** Meta data required by IMPORT. */
struct row_import {
	row_import() UNIV_NOTHROW
		:
		m_table(NULL),
		m_hostname(NULL),
		m_table_name(NULL),
		m_autoinc(0),
		m_zip_size(0),
		m_flags(0),
		m_n_cols(0),
		m_cols(NULL),
		m_col_names(NULL),
		m_n_indexes(0),
		m_indexes(NULL),
		m_missing(true) { }

	~row_import() UNIV_NOTHROW;

	/** Find the index entry in the indexes array.
	@param name index name
	@return instance if found else 0. */
	row_index_t* get_index(const char* name) const UNIV_NOTHROW;

	/** Get the number of rows in the index.
	@param name index name
	@return number of rows (doesn't include delete marked rows). */
	ulint get_n_rows(const char* name) const UNIV_NOTHROW;

	/** Find the ordinal value of the column name in the cfg table columns.
	@param name of column to look for.
	@return ULINT_UNDEFINED if not found. */
	ulint find_col(const char* name) const UNIV_NOTHROW;

	/** Get the number of rows for which purge failed during the
	convert phase.
	@param name index name
	@return number of rows for which purge failed. */
	ulint get_n_purge_failed(const char* name) const UNIV_NOTHROW;

	/** Check if the index is clean, i.e. has no delete-marked records.
	@param name index name
	@return true if index needs to be purged. */
	bool requires_purge(const char* name) const UNIV_NOTHROW
	{
		return(get_n_purge_failed(name) > 0);
	}

	/** Set the index root <space, pageno> using the index name */
	void set_root_by_name() UNIV_NOTHROW;

	/** Set the index root <space, pageno> using a heuristic
	@return DB_SUCCESS or error code */
	dberr_t set_root_by_heuristic() UNIV_NOTHROW;

	/** Check if the index schema that was read from the .cfg file
	matches the in memory index definition.
	Note: It will update row_import_t::m_srv_index to map the meta-data
	read from the .cfg file to the server index instance.
	@return DB_SUCCESS or error code. */
	dberr_t match_index_columns(
		THD*			thd,
		const dict_index_t*	index) UNIV_NOTHROW;

	/** Check if the table schema that was read from the .cfg file
	matches the in memory table definition.
	@param thd MySQL session variable
	@return DB_SUCCESS or error code. */
	dberr_t match_table_columns(
		THD*			thd) UNIV_NOTHROW;

	/** Check if the table (and index) schema that was read from the
	.cfg file matches the in memory table definition.
	@param thd MySQL session variable
	@return DB_SUCCESS or error code. */
	dberr_t match_schema(
		THD*			thd) UNIV_NOTHROW;

	dberr_t match_flags(THD* thd) const;

	dict_table_t*	m_table;	/*!< Table instance */

	byte*		m_hostname;	/*!< Hostname where the
					tablespace was exported */
	byte*		m_table_name;	/*!< Exporting instance table
					name */

	ib_uint64_t	m_autoinc;	/*!< Next autoinc value */

	ulint		m_zip_size;	/*!< ROW_FORMAT=COMPRESSED
					page size, or 0 */

	ulint		m_flags;	/*!< Table flags */

	ulint		m_n_cols;	/*!< Number of columns in the
					meta-data file */

	dict_col_t*	m_cols;		/*!< Column data */

	byte**		m_col_names;	/*!< Column names, we store the
					column names separately because
					there is no field to store the
					value in dict_col_t */

	ulint		m_n_indexes;	/*!< Number of indexes,
					including clustered index */

	row_index_t*	m_indexes;	/*!< Index meta data */

	bool		m_missing;	/*!< true if a .cfg file was
					not found or was unreadable */
};
struct fil_iterator_t {
	pfs_os_file_t	file;		/*!< File handle */
	const char*	filepath;	/*!< File path name */
	os_offset_t	start;		/*!< From where to start */
	os_offset_t	end;		/*!< Where to stop */
	os_offset_t	file_size;	/*!< File size in bytes */
	ulint		n_io_buffers;	/*!< Number of pages to use
					for IO */
	byte*		io_buffer;	/*!< Buffer to use for IO */
	fil_space_crypt_t* crypt_data;	/*!< Crypt data (if encrypted) */
	byte*		crypt_io_buffer;
					/*!< IO buffer when encrypted */
};
/** Use the page cursor to iterate over records in a block. */
class RecIterator {
public:
	/** Default constructor */
	RecIterator() UNIV_NOTHROW
	{
		memset(&m_cur, 0x0, sizeof(m_cur));
		/* Make page_cur_delete_rec() happy. */
		m_mtr.start();
		m_mtr.set_log_mode(MTR_LOG_NO_REDO);
	}

	/** Position the cursor on the first user record. */
	void open(buf_block_t* block) UNIV_NOTHROW
	{
		page_cur_set_before_first(block, &m_cur);

		if (!end()) {
			next();
		}
	}

	/** Move to the next record. */
	void next() UNIV_NOTHROW
	{
		page_cur_move_to_next(&m_cur);
	}

	/**
	@return the current record */
	rec_t* current() UNIV_NOTHROW
	{
		ut_ad(!end());
		return(page_cur_get_rec(&m_cur));
	}

	buf_block_t* current_block() const { return m_cur.block; }

	/**
	@return true if cursor is at the end */
	bool end() UNIV_NOTHROW
	{
		return(page_cur_is_after_last(&m_cur) == TRUE);
	}

	/** Remove the current record
	@return true on success */
	bool remove(
		const dict_index_t*	index,
		rec_offs*		offsets) UNIV_NOTHROW
	{
		ut_ad(page_is_leaf(m_cur.block->frame));

		/* We can't end up with an empty page unless it is root. */
		if (page_get_n_recs(m_cur.block->frame) <= 1) {
			return(false);
		}

		if (!rec_offs_any_extern(offsets)
		    && m_cur.block->page.id().page_no() != index->page
		    && ((page_get_data_size(m_cur.block->frame)
			 - rec_offs_size(offsets)
			 < BTR_CUR_PAGE_COMPRESS_LIMIT(index))
			|| !page_has_siblings(m_cur.block->frame)
			|| (page_get_n_recs(m_cur.block->frame) < 2))) {
			return false;
		}

#ifdef UNIV_ZIP_DEBUG
		page_zip_des_t*	page_zip = buf_block_get_page_zip(m_cur.block);
		ut_a(!page_zip || page_zip_validate(
			     page_zip, m_cur.block->frame, index));
#endif /* UNIV_ZIP_DEBUG */

		page_cur_delete_rec(&m_cur, index, offsets, &m_mtr);

#ifdef UNIV_ZIP_DEBUG
		ut_a(!page_zip || page_zip_validate(
			     page_zip, m_cur.block->frame, index));
#endif /* UNIV_ZIP_DEBUG */

		return true;
	}

private:
	page_cur_t	m_cur;
public:
	mtr_t		m_mtr;
};
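/* A minimal usage sketch, assuming `block' is a leaf page read by the
import scan (not taken verbatim from this file):

	RecIterator	it;
	it.open(block);

	while (!it.end()) {
		rec_t*	rec = it.current();
		... examine rec; a delete-marked record can be
		removed with it.remove(index, offsets) ...
		it.next();
	}

Note that remove() refuses to delete the last user record of a page, so
Phase II (IndexPurge below) handles the records that could not be purged
during the scan. */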
  265. /** Class that purges delete marked records from indexes, both secondary
  266. and cluster. It does a pessimistic delete. This should only be done if we
  267. couldn't purge the delete marked reocrds during Phase I. */
  268. class IndexPurge {
  269. public:
  270. /** Constructor
  271. @param trx the user transaction covering the import tablespace
  272. @param index to be imported
  273. @param space_id space id of the tablespace */
  274. IndexPurge(
  275. trx_t* trx,
  276. dict_index_t* index) UNIV_NOTHROW
  277. :
  278. m_trx(trx),
  279. m_index(index),
  280. m_n_rows(0)
  281. {
  282. ib::info() << "Phase II - Purge records from index "
  283. << index->name;
  284. }
  285. /** Destructor */
  286. ~IndexPurge() UNIV_NOTHROW = default;
  287. /** Purge delete marked records.
  288. @return DB_SUCCESS or error code. */
  289. dberr_t garbage_collect() UNIV_NOTHROW;
  290. /** The number of records that are not delete marked.
  291. @return total records in the index after purge */
  292. ulint get_n_rows() const UNIV_NOTHROW
  293. {
  294. return(m_n_rows);
  295. }
  296. private:
  297. /** Begin import, position the cursor on the first record. */
  298. void open() UNIV_NOTHROW;
299. /** Close the persistent cursor and commit the mini-transaction. */
  300. void close() UNIV_NOTHROW;
  301. /** Position the cursor on the next record.
  302. @return DB_SUCCESS or error code */
  303. dberr_t next() UNIV_NOTHROW;
  304. /** Store the persistent cursor position and reopen the
  305. B-tree cursor in BTR_MODIFY_TREE mode, because the
  306. tree structure may be changed during a pessimistic delete. */
  307. void purge_pessimistic_delete() UNIV_NOTHROW;
308. /** Purge the delete-marked record that the persistent
309. cursor is currently positioned on. */
  310. void purge() UNIV_NOTHROW;
  311. protected:
  312. // Disable copying
  313. IndexPurge();
  314. IndexPurge(const IndexPurge&);
  315. IndexPurge &operator=(const IndexPurge&);
  316. private:
  317. trx_t* m_trx; /*!< User transaction */
  318. mtr_t m_mtr; /*!< Mini-transaction */
  319. btr_pcur_t m_pcur; /*!< Persistent cursor */
  320. dict_index_t* m_index; /*!< Index to be processed */
  321. ulint m_n_rows; /*!< Records in index */
  322. };
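/* A minimal usage sketch (hypothetical caller, not part of the
original file): purge one index and report the rows that remain.

	IndexPurge	purge(trx, index);

	dberr_t	err = purge.garbage_collect();

	if (err == DB_SUCCESS) {
		ib::info() << "Index " << index->name << " now has "
			<< purge.get_n_rows() << " rows";
	}
*/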
  323. /** Functor that is called for each physical page that is read from the
  324. tablespace file. */
  325. class AbstractCallback
  326. {
  327. public:
  328. /** Constructor
  329. @param trx covering transaction */
  330. AbstractCallback(trx_t* trx, ulint space_id)
  331. :
  332. m_zip_size(0),
  333. m_trx(trx),
  334. m_space(space_id),
  335. m_xdes(),
  336. m_xdes_page_no(ULINT_UNDEFINED),
  337. m_space_flags(ULINT_UNDEFINED) UNIV_NOTHROW { }
  338. /** Free any extent descriptor instance */
  339. virtual ~AbstractCallback()
  340. {
  341. UT_DELETE_ARRAY(m_xdes);
  342. }
  343. /** Determine the page size to use for traversing the tablespace
  344. @param file_size size of the tablespace file in bytes
  345. @param block contents of the first page in the tablespace file.
  346. @retval DB_SUCCESS or error code. */
  347. virtual dberr_t init(
  348. os_offset_t file_size,
  349. const buf_block_t* block) UNIV_NOTHROW;
  350. /** @return true if compressed table. */
  351. bool is_compressed_table() const UNIV_NOTHROW
  352. {
  353. return get_zip_size();
  354. }
  355. /** @return the tablespace flags */
  356. ulint get_space_flags() const
  357. {
  358. return(m_space_flags);
  359. }
  360. /**
  361. Set the name of the physical file and the file handle that is used
  362. to open it for the file that is being iterated over.
  363. @param filename the physical name of the tablespace file
  364. @param file OS file handle */
  365. void set_file(const char* filename, pfs_os_file_t file) UNIV_NOTHROW
  366. {
  367. m_file = file;
  368. m_filepath = filename;
  369. }
  370. ulint get_zip_size() const { return m_zip_size; }
  371. ulint physical_size() const
  372. {
  373. return m_zip_size ? m_zip_size : srv_page_size;
  374. }
  375. const char* filename() const { return m_filepath; }
  376. /**
  377. Called for every page in the tablespace. If the page was not
  378. updated then its state must be set to BUF_PAGE_NOT_USED. For
  379. compressed tables the page descriptor memory will be at offset:
  380. block->frame + srv_page_size;
  381. @param block block read from file, note it is not from the buffer pool
  382. @retval DB_SUCCESS or error code. */
  383. virtual dberr_t operator()(buf_block_t* block) UNIV_NOTHROW = 0;
  384. /** @return the tablespace identifier */
  385. ulint get_space_id() const { return m_space; }
  386. bool is_interrupted() const { return trx_is_interrupted(m_trx); }
  387. /**
  388. Get the data page depending on the table type, compressed or not.
  389. @param block - block read from disk
  390. @retval the buffer frame */
  391. static byte* get_frame(const buf_block_t* block)
  392. {
  393. return block->page.zip.data
  394. ? block->page.zip.data : block->frame;
  395. }
  396. /** Invoke the functionality for the callback */
  397. virtual dberr_t run(const fil_iterator_t& iter,
  398. buf_block_t* block) UNIV_NOTHROW = 0;
  399. protected:
  400. /** Get the physical offset of the extent descriptor within the page.
  401. @param page_no page number of the extent descriptor
  402. @param page contents of the page containing the extent descriptor.
  403. @return the start of the xdes array in a page */
  404. const xdes_t* xdes(
  405. ulint page_no,
  406. const page_t* page) const UNIV_NOTHROW
  407. {
  408. ulint offset;
  409. offset = xdes_calc_descriptor_index(get_zip_size(), page_no);
  410. return(page + XDES_ARR_OFFSET + XDES_SIZE * offset);
  411. }
412. /** Cache the current extent descriptor page (xdes). If the extent
413. descriptor is marked as free then release any cached copy and set it
414. to NULL. This implies that all pages that are covered by this extent
415. descriptor are also free.
  416. @param page_no offset of page within the file
  417. @param page page contents
  418. @return DB_SUCCESS or error code. */
  419. dberr_t set_current_xdes(
  420. ulint page_no,
  421. const page_t* page) UNIV_NOTHROW
  422. {
  423. m_xdes_page_no = page_no;
  424. UT_DELETE_ARRAY(m_xdes);
  425. m_xdes = NULL;
  426. if (mach_read_from_4(XDES_ARR_OFFSET + XDES_STATE + page)
  427. != XDES_FREE) {
  428. const ulint physical_size = m_zip_size
  429. ? m_zip_size : srv_page_size;
  430. m_xdes = UT_NEW_ARRAY_NOKEY(xdes_t, physical_size);
  431. /* Trigger OOM */
  432. DBUG_EXECUTE_IF(
  433. "ib_import_OOM_13",
  434. UT_DELETE_ARRAY(m_xdes);
  435. m_xdes = NULL;
  436. );
  437. if (m_xdes == NULL) {
  438. return(DB_OUT_OF_MEMORY);
  439. }
  440. memcpy(m_xdes, page, physical_size);
  441. }
  442. return(DB_SUCCESS);
  443. }
  444. /** Check if the page is marked as free in the extent descriptor.
  445. @param page_no page number to check in the extent descriptor.
  446. @return true if the page is marked as free */
  447. bool is_free(uint32_t page_no) const UNIV_NOTHROW
  448. {
  449. ut_a(xdes_calc_descriptor_page(get_zip_size(), page_no)
  450. == m_xdes_page_no);
  451. if (m_xdes != 0) {
  452. const xdes_t* xdesc = xdes(page_no, m_xdes);
  453. ulint pos = page_no % FSP_EXTENT_SIZE;
  454. return xdes_is_free(xdesc, pos);
  455. }
  456. /* If the current xdes was free, the page must be free. */
  457. return(true);
  458. }
  459. protected:
  460. /** The ROW_FORMAT=COMPRESSED page size, or 0. */
  461. ulint m_zip_size;
  462. /** File handle to the tablespace */
  463. pfs_os_file_t m_file;
  464. /** Physical file path. */
  465. const char* m_filepath;
  466. /** Covering transaction. */
  467. trx_t* m_trx;
  468. /** Space id of the file being iterated over. */
  469. ulint m_space;
  470. /** Current size of the space in pages */
  471. ulint m_size;
  472. /** Current extent descriptor page */
  473. xdes_t* m_xdes;
  474. /** Physical page offset in the file of the extent descriptor */
  475. ulint m_xdes_page_no;
  476. /** Flags value read from the header page */
  477. ulint m_space_flags;
  478. };
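/* Worked example of the xdes() / is_free() arithmetic above, assuming
the default srv_page_size = 16384 (one descriptor page then covers
16384 pages) and FSP_EXTENT_SIZE = 64:

	page_no = 20000
	descriptor page  = 20000 - (20000 % 16384)  = 16384
	descriptor index = (20000 % 16384) / 64     = 56
	bit in bitmap    = 20000 % 64               = 32

so is_free(20000) consults bit 32 of descriptor 56 in the cached copy
of page 16384. */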
  479. ATTRIBUTE_COLD static dberr_t invalid_space_flags(ulint flags)
  480. {
  481. if (fsp_flags_is_incompatible_mysql(flags))
  482. {
  483. sql_print_error("InnoDB: unsupported MySQL tablespace");
  484. return DB_UNSUPPORTED;
  485. }
  486. sql_print_error("InnoDB: Invalid FSP_SPACE_FLAGS=0x%zx", flags);
  487. return DB_CORRUPTION;
  488. }
  489. /** Determine the page size to use for traversing the tablespace
  490. @param file_size size of the tablespace file in bytes
  491. @param block contents of the first page in the tablespace file.
  492. @retval DB_SUCCESS or error code. */
  493. dberr_t
  494. AbstractCallback::init(
  495. os_offset_t file_size,
  496. const buf_block_t* block) UNIV_NOTHROW
  497. {
  498. const page_t* page = block->frame;
  499. m_space_flags = fsp_header_get_flags(page);
  500. if (!fil_space_t::is_valid_flags(m_space_flags, true)) {
  501. ulint cflags = fsp_flags_convert_from_101(m_space_flags);
  502. if (cflags == ULINT_UNDEFINED) {
  503. return DB_CORRUPTION;
  504. }
  505. m_space_flags = cflags;
  506. }
  507. /* Clear the DATA_DIR flag, which is basically garbage. */
  508. m_space_flags &= ~(1U << FSP_FLAGS_POS_RESERVED);
  509. m_zip_size = fil_space_t::zip_size(m_space_flags);
  510. const ulint logical_size = fil_space_t::logical_size(m_space_flags);
  511. const ulint physical_size = fil_space_t::physical_size(m_space_flags);
  512. if (logical_size != srv_page_size) {
  513. ib::error() << "Page size " << logical_size
  514. << " of ibd file is not the same as the server page"
  515. " size " << srv_page_size;
  516. return(DB_CORRUPTION);
  517. } else if (file_size & (physical_size - 1)) {
  518. ib::error() << "File size " << file_size << " is not a"
  519. " multiple of the page size "
  520. << physical_size;
  521. return(DB_CORRUPTION);
  522. }
  523. m_size = mach_read_from_4(page + FSP_SIZE);
  524. if (m_space == ULINT_UNDEFINED) {
  525. m_space = mach_read_from_4(FSP_HEADER_OFFSET + FSP_SPACE_ID
  526. + page);
  527. }
  528. return set_current_xdes(0, page);
  529. }
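/* Note on the file-size check above: physical_size is a power of two,
so (file_size & (physical_size - 1)) is file_size % physical_size.
For example, with physical_size = 16384:

	file_size =  98304  ->  98304 & 16383 = 0     (exactly 6 pages)
	file_size = 100000  -> 100000 & 16383 = 1696  (torn/truncated file)
*/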
  530. /**
531. TODO: This can be made parallel trivially by chunking up the file
532. and creating a callback per thread. The main benefit would be to use
533. multiple CPUs for checksums and compressed tables. We have to process
534. compressed tables block by block right now. Secondly, we need to
535. decompress/compress and copy too much data. These operations are
536. CPU intensive.
  537. Iterate over all the pages in the tablespace.
  538. @param iter - Tablespace iterator
  539. @param block - block to use for IO
  540. @param callback - Callback to inspect and update page contents
  541. @retval DB_SUCCESS or error code */
  542. static dberr_t fil_iterate(
  543. const fil_iterator_t& iter,
  544. buf_block_t* block,
  545. AbstractCallback& callback);
546. /**
547. Try and determine the index root pages by checking if the next/prev
548. pointers are both FIL_NULL. We need to ensure that we skip deleted pages. */
  549. struct FetchIndexRootPages : public AbstractCallback {
  550. /** Index information gathered from the .ibd file. */
  551. struct Index {
  552. Index(index_id_t id, ulint page_no)
  553. :
  554. m_id(id),
  555. m_page_no(page_no) { }
  556. index_id_t m_id; /*!< Index id */
  557. ulint m_page_no; /*!< Root page number */
  558. };
  559. /** Constructor
  560. @param trx covering (user) transaction
  561. @param table table definition in server .*/
  562. FetchIndexRootPages(const dict_table_t* table, trx_t* trx)
  563. :
  564. AbstractCallback(trx, ULINT_UNDEFINED),
  565. m_table(table), m_index(0, 0) UNIV_NOTHROW { }
  566. /** Destructor */
  567. ~FetchIndexRootPages() UNIV_NOTHROW override = default;
  568. /** Fetch the clustered index root page in the tablespace
  569. @param iter Tablespace iterator
  570. @param block Block to use for IO
  571. @retval DB_SUCCESS or error code */
  572. dberr_t run(const fil_iterator_t& iter,
  573. buf_block_t* block) UNIV_NOTHROW override;
  574. /** Called for each block as it is read from the file.
  575. @param block block to convert, it is not from the buffer pool.
  576. @retval DB_SUCCESS or error code. */
  577. dberr_t operator()(buf_block_t* block) UNIV_NOTHROW override;
  578. /** Update the import configuration that will be used to import
  579. the tablespace. */
  580. dberr_t build_row_import(row_import* cfg) const UNIV_NOTHROW;
  581. /** Table definition in server. */
  582. const dict_table_t* m_table;
  583. /** Index information */
  584. Index m_index;
  585. };
  586. /** Called for each block as it is read from the file. Check index pages to
  587. determine the exact row format. We can't get that from the tablespace
  588. header flags alone.
  589. @param block block to convert, it is not from the buffer pool.
  590. @retval DB_SUCCESS or error code. */
  591. dberr_t FetchIndexRootPages::operator()(buf_block_t* block) UNIV_NOTHROW
  592. {
  593. if (is_interrupted()) return DB_INTERRUPTED;
  594. const page_t* page = get_frame(block);
  595. m_index.m_id = btr_page_get_index_id(page);
  596. m_index.m_page_no = block->page.id().page_no();
  597. /* Check that the tablespace flags match the table flags. */
  598. ulint expected = dict_tf_to_fsp_flags(m_table->flags);
  599. if (!fsp_flags_match(expected, m_space_flags)) {
  600. ib_errf(m_trx->mysql_thd, IB_LOG_LEVEL_ERROR,
  601. ER_TABLE_SCHEMA_MISMATCH,
  602. "Expected FSP_SPACE_FLAGS=0x%x, .ibd "
  603. "file contains 0x%x.",
  604. unsigned(expected),
  605. unsigned(m_space_flags));
  606. return(DB_CORRUPTION);
  607. }
  608. if (!page_is_comp(block->frame) !=
  609. !dict_table_is_comp(m_table)) {
  610. ib_errf(m_trx->mysql_thd, IB_LOG_LEVEL_ERROR,
  611. ER_TABLE_SCHEMA_MISMATCH,
  612. "ROW_FORMAT mismatch");
  613. return DB_CORRUPTION;
  614. }
  615. return DB_SUCCESS;
  616. }
  617. /**
  618. Update the import configuration that will be used to import the tablespace.
  619. @return error code or DB_SUCCESS */
  620. dberr_t
  621. FetchIndexRootPages::build_row_import(row_import* cfg) const UNIV_NOTHROW
  622. {
  623. ut_a(cfg->m_table == m_table);
  624. cfg->m_zip_size = m_zip_size;
  625. cfg->m_n_indexes = 1;
  626. if (cfg->m_n_indexes == 0) {
  627. ib::error() << "No B+Tree found in tablespace";
  628. return(DB_CORRUPTION);
  629. }
  630. cfg->m_indexes = UT_NEW_ARRAY_NOKEY(row_index_t, cfg->m_n_indexes);
  631. /* Trigger OOM */
  632. DBUG_EXECUTE_IF(
  633. "ib_import_OOM_11",
  634. UT_DELETE_ARRAY(cfg->m_indexes);
  635. cfg->m_indexes = NULL;
  636. );
  637. if (cfg->m_indexes == NULL) {
  638. return(DB_OUT_OF_MEMORY);
  639. }
  640. memset(cfg->m_indexes, 0x0, sizeof(*cfg->m_indexes) * cfg->m_n_indexes);
  641. row_index_t* cfg_index = cfg->m_indexes;
  642. char name[BUFSIZ];
  643. snprintf(name, sizeof(name), "index" IB_ID_FMT, m_index.m_id);
  644. ulint len = strlen(name) + 1;
  645. cfg_index->m_name = UT_NEW_ARRAY_NOKEY(byte, len);
  646. /* Trigger OOM */
  647. DBUG_EXECUTE_IF(
  648. "ib_import_OOM_12",
  649. UT_DELETE_ARRAY(cfg_index->m_name);
  650. cfg_index->m_name = NULL;
  651. );
  652. if (cfg_index->m_name == NULL) {
  653. return(DB_OUT_OF_MEMORY);
  654. }
  655. memcpy(cfg_index->m_name, name, len);
  656. cfg_index->m_id = m_index.m_id;
  657. cfg_index->m_space = m_space;
  658. cfg_index->m_page_no = m_index.m_page_no;
  659. return(DB_SUCCESS);
  660. }
  661. /* Functor that is called for each physical page that is read from the
  662. tablespace file.
  663. 1. Check each page for corruption.
  664. 2. Update the space id and LSN on every page
  665. * For the header page
  666. - Validate the flags
  667. - Update the LSN
668. 3. On B-tree pages
669. * Set the index id
670. * Update the max trx id
671. * In a clustered index, update the system columns
672. * In a clustered index, update the BLOB pointers, set the space id
673. * Purge delete-marked records, but only if they can be easily
674. removed from the page
675. * Keep a counter of the number of rows, i.e. non-delete-marked rows
676. * Keep a counter of the number of delete-marked rows
677. * Keep a counter of the number of purge failures
  678. * If a page is stamped with an index id that isn't in the .cfg file
  679. we assume it is deleted and the page can be ignored.
  680. 4. Set the page state to dirty so that it will be written to disk.
  681. */
  682. class PageConverter : public AbstractCallback {
  683. public:
  684. /** Constructor
  685. @param cfg config of table being imported.
  686. @param space_id tablespace identifier
  687. @param trx transaction covering the import */
  688. PageConverter(row_import* cfg, ulint space_id, trx_t* trx)
  689. :
  690. AbstractCallback(trx, space_id),
  691. m_cfg(cfg),
  692. m_index(cfg->m_indexes),
  693. m_rec_iter(),
  694. m_offsets_(), m_offsets(m_offsets_),
  695. m_heap(0),
  696. m_cluster_index(dict_table_get_first_index(cfg->m_table))
  697. {
  698. rec_offs_init(m_offsets_);
  699. }
  700. ~PageConverter() UNIV_NOTHROW override
  701. {
  702. if (m_heap != 0) {
  703. mem_heap_free(m_heap);
  704. }
  705. }
  706. dberr_t run(const fil_iterator_t& iter,
  707. buf_block_t* block) UNIV_NOTHROW override
  708. {
  709. return fil_iterate(iter, block, *this);
  710. }
  711. /** Called for each block as it is read from the file.
  712. @param block block to convert, it is not from the buffer pool.
  713. @retval DB_SUCCESS or error code. */
  714. dberr_t operator()(buf_block_t* block) UNIV_NOTHROW override;
  715. private:
  716. /** Update the page, set the space id, max trx id and index id.
  717. @param block block read from file
  718. @param page_type type of the page
  719. @retval DB_SUCCESS or error code */
  720. dberr_t update_page(buf_block_t* block, uint16_t& page_type)
  721. UNIV_NOTHROW;
  722. /** Update the space, index id, trx id.
  723. @param block block to convert
  724. @return DB_SUCCESS or error code */
  725. dberr_t update_index_page(buf_block_t* block) UNIV_NOTHROW;
726. /** Update the BLOB references and write UNDO log entries for
  727. rows that can't be purged optimistically.
  728. @param block block to update
  729. @retval DB_SUCCESS or error code */
  730. dberr_t update_records(buf_block_t* block) UNIV_NOTHROW;
  731. /** Validate the space flags and update tablespace header page.
  732. @param block block read from file, not from the buffer pool.
  733. @retval DB_SUCCESS or error code */
  734. dberr_t update_header(buf_block_t* block) UNIV_NOTHROW;
  735. /** Adjust the BLOB reference for a single column that is externally stored
  736. @param rec record to update
  737. @param offsets column offsets for the record
  738. @param i column ordinal value
  739. @return DB_SUCCESS or error code */
  740. dberr_t adjust_cluster_index_blob_column(
  741. rec_t* rec,
  742. const rec_offs* offsets,
  743. ulint i) UNIV_NOTHROW;
  744. /** Adjusts the BLOB reference in the clustered index row for all
  745. externally stored columns.
  746. @param rec record to update
  747. @param offsets column offsets for the record
  748. @return DB_SUCCESS or error code */
  749. dberr_t adjust_cluster_index_blob_columns(
  750. rec_t* rec,
  751. const rec_offs* offsets) UNIV_NOTHROW;
752. /** In the clustered index, adjust the BLOB pointers as needed.
  753. Also update the BLOB reference, write the new space id.
  754. @param rec record to update
  755. @param offsets column offsets for the record
  756. @return DB_SUCCESS or error code */
  757. dberr_t adjust_cluster_index_blob_ref(
  758. rec_t* rec,
  759. const rec_offs* offsets) UNIV_NOTHROW;
  760. /** Purge delete-marked records, only if it is possible to do
  761. so without re-organising the B+tree.
  762. @retval true if purged */
  763. bool purge() UNIV_NOTHROW;
  764. /** Adjust the BLOB references and sys fields for the current record.
  765. @param rec record to update
  766. @param offsets column offsets for the record
  767. @return DB_SUCCESS or error code. */
  768. dberr_t adjust_cluster_record(
  769. rec_t* rec,
  770. const rec_offs* offsets) UNIV_NOTHROW;
  771. /** Find an index with the matching id.
  772. @return row_index_t* instance or 0 */
  773. row_index_t* find_index(index_id_t id) UNIV_NOTHROW
  774. {
  775. row_index_t* index = &m_cfg->m_indexes[0];
  776. for (ulint i = 0; i < m_cfg->m_n_indexes; ++i, ++index) {
  777. if (id == index->m_id) {
  778. return(index);
  779. }
  780. }
  781. return(0);
  782. }
  783. private:
  784. /** Config for table that is being imported. */
  785. row_import* m_cfg;
  786. /** Current index whose pages are being imported */
  787. row_index_t* m_index;
  788. /** Iterator over records in a block */
  789. RecIterator m_rec_iter;
  790. /** Record offset */
  791. rec_offs m_offsets_[REC_OFFS_NORMAL_SIZE];
  792. /** Pointer to m_offsets_ */
  793. rec_offs* m_offsets;
  794. /** Memory heap for the record offsets */
  795. mem_heap_t* m_heap;
  796. /** Cluster index instance */
  797. dict_index_t* m_cluster_index;
  798. };
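/* Control-flow sketch of the conversion (a reading aid, not code; the
driver that invokes run() lives elsewhere in this file):

	PageConverter::run(iter, block)
	  -> fil_iterate(iter, block, *this)      // reads each page
	       -> PageConverter::operator()(block)
	            -> update_page(block, type)
	                 -> update_header(block)       // page 0 only
	                 -> update_index_page(block)   // B-tree pages
	                      -> update_records(block) // leaf pages only
*/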
  799. /**
  800. row_import destructor. */
  801. row_import::~row_import() UNIV_NOTHROW
  802. {
  803. for (ulint i = 0; m_indexes != 0 && i < m_n_indexes; ++i) {
  804. UT_DELETE_ARRAY(m_indexes[i].m_name);
  805. if (m_indexes[i].m_fields == NULL) {
  806. continue;
  807. }
  808. dict_field_t* fields = m_indexes[i].m_fields;
  809. ulint n_fields = m_indexes[i].m_n_fields;
  810. for (ulint j = 0; j < n_fields; ++j) {
  811. UT_DELETE_ARRAY(const_cast<char*>(fields[j].name()));
  812. }
  813. UT_DELETE_ARRAY(fields);
  814. }
  815. for (ulint i = 0; m_col_names != 0 && i < m_n_cols; ++i) {
  816. UT_DELETE_ARRAY(m_col_names[i]);
  817. }
  818. UT_DELETE_ARRAY(m_cols);
  819. UT_DELETE_ARRAY(m_indexes);
  820. UT_DELETE_ARRAY(m_col_names);
  821. UT_DELETE_ARRAY(m_table_name);
  822. UT_DELETE_ARRAY(m_hostname);
  823. }
824. /** Find the index entry in the indexes array.
  825. @param name index name
  826. @return instance if found else 0. */
  827. row_index_t*
  828. row_import::get_index(
  829. const char* name) const UNIV_NOTHROW
  830. {
  831. for (ulint i = 0; i < m_n_indexes; ++i) {
  832. const char* index_name;
  833. row_index_t* index = &m_indexes[i];
  834. index_name = reinterpret_cast<const char*>(index->m_name);
  835. if (strcmp(index_name, name) == 0) {
  836. return(index);
  837. }
  838. }
  839. return(0);
  840. }
  841. /** Get the number of rows in the index.
  842. @param name index name
  843. @return number of rows (doesn't include delete marked rows). */
  844. ulint
  845. row_import::get_n_rows(
  846. const char* name) const UNIV_NOTHROW
  847. {
  848. const row_index_t* index = get_index(name);
849. ut_a(index != 0);
  850. return(index->m_stats.m_n_rows);
  851. }
852. /** Get the number of rows for which purge failed during the convert phase.
  853. @param name index name
  854. @return number of rows for which purge failed. */
  855. ulint
  856. row_import::get_n_purge_failed(
  857. const char* name) const UNIV_NOTHROW
  858. {
  859. const row_index_t* index = get_index(name);
860. ut_a(index != 0);
  861. return(index->m_stats.m_n_purge_failed);
  862. }
  863. /** Find the ordinal value of the column name in the cfg table columns.
  864. @param name of column to look for.
  865. @return ULINT_UNDEFINED if not found. */
  866. ulint
  867. row_import::find_col(
  868. const char* name) const UNIV_NOTHROW
  869. {
  870. for (ulint i = 0; i < m_n_cols; ++i) {
  871. const char* col_name;
  872. col_name = reinterpret_cast<const char*>(m_col_names[i]);
  873. if (strcmp(col_name, name) == 0) {
  874. return(i);
  875. }
  876. }
  877. return(ULINT_UNDEFINED);
  878. }
  879. /**
  880. Check if the index schema that was read from the .cfg file matches the
  881. in memory index definition.
  882. @return DB_SUCCESS or error code. */
  883. dberr_t
  884. row_import::match_index_columns(
  885. THD* thd,
  886. const dict_index_t* index) UNIV_NOTHROW
  887. {
  888. row_index_t* cfg_index;
  889. dberr_t err = DB_SUCCESS;
  890. cfg_index = get_index(index->name);
  891. if (cfg_index == 0) {
  892. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  893. ER_TABLE_SCHEMA_MISMATCH,
  894. "Index %s not found in tablespace meta-data file.",
  895. index->name());
  896. return(DB_ERROR);
  897. }
  898. if (cfg_index->m_n_fields != index->n_fields) {
  899. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  900. ER_TABLE_SCHEMA_MISMATCH,
  901. "Index field count %u doesn't match"
  902. " tablespace metadata file value " ULINTPF,
  903. index->n_fields, cfg_index->m_n_fields);
  904. return(DB_ERROR);
  905. }
  906. cfg_index->m_srv_index = index;
  907. const dict_field_t* field = index->fields;
  908. const dict_field_t* cfg_field = cfg_index->m_fields;
  909. for (ulint i = 0; i < index->n_fields; ++i, ++field, ++cfg_field) {
  910. if (field->name() && cfg_field->name()
  911. && strcmp(field->name(), cfg_field->name()) != 0) {
  912. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  913. ER_TABLE_SCHEMA_MISMATCH,
  914. "Index field name %s doesn't match"
  915. " tablespace metadata field name %s"
  916. " for field position " ULINTPF,
  917. field->name(), cfg_field->name(), i);
  918. err = DB_ERROR;
  919. }
  920. if (cfg_field->prefix_len != field->prefix_len) {
  921. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  922. ER_TABLE_SCHEMA_MISMATCH,
  923. "Index %s field %s prefix len %u"
  924. " doesn't match metadata file value %u",
  925. index->name(), field->name(),
  926. field->prefix_len, cfg_field->prefix_len);
  927. err = DB_ERROR;
  928. }
  929. if (cfg_field->fixed_len != field->fixed_len) {
  930. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  931. ER_TABLE_SCHEMA_MISMATCH,
  932. "Index %s field %s fixed len %u"
  933. " doesn't match metadata file value %u",
  934. index->name(), field->name(),
  935. field->fixed_len,
  936. cfg_field->fixed_len);
  937. err = DB_ERROR;
  938. }
  939. }
  940. return(err);
  941. }
  942. /** Check if the table schema that was read from the .cfg file matches the
  943. in memory table definition.
  944. @param thd MySQL session variable
  945. @return DB_SUCCESS or error code. */
  946. dberr_t
  947. row_import::match_table_columns(
  948. THD* thd) UNIV_NOTHROW
  949. {
  950. dberr_t err = DB_SUCCESS;
  951. const dict_col_t* col = m_table->cols;
  952. for (ulint i = 0; i < m_table->n_cols; ++i, ++col) {
  953. const char* col_name;
  954. ulint cfg_col_index;
  955. col_name = dict_table_get_col_name(
  956. m_table, dict_col_get_no(col));
  957. cfg_col_index = find_col(col_name);
  958. if (cfg_col_index == ULINT_UNDEFINED) {
  959. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  960. ER_TABLE_SCHEMA_MISMATCH,
  961. "Column %s not found in tablespace.",
  962. col_name);
  963. err = DB_ERROR;
  964. } else if (cfg_col_index != col->ind) {
  965. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  966. ER_TABLE_SCHEMA_MISMATCH,
  967. "Column %s ordinal value mismatch, it's at %u"
  968. " in the table and " ULINTPF
  969. " in the tablespace meta-data file",
  970. col_name, col->ind, cfg_col_index);
  971. err = DB_ERROR;
  972. } else {
  973. const dict_col_t* cfg_col;
  974. cfg_col = &m_cols[cfg_col_index];
  975. ut_a(cfg_col->ind == cfg_col_index);
  976. if (cfg_col->prtype != col->prtype) {
  977. ib_errf(thd,
  978. IB_LOG_LEVEL_ERROR,
  979. ER_TABLE_SCHEMA_MISMATCH,
  980. "Column %s precise type mismatch,"
  981. " it's 0X%X in the table and 0X%X"
  982. " in the tablespace meta file",
  983. col_name, col->prtype, cfg_col->prtype);
  984. err = DB_ERROR;
  985. }
  986. if (cfg_col->mtype != col->mtype) {
  987. ib_errf(thd,
  988. IB_LOG_LEVEL_ERROR,
  989. ER_TABLE_SCHEMA_MISMATCH,
  990. "Column %s main type mismatch,"
  991. " it's 0X%X in the table and 0X%X"
  992. " in the tablespace meta file",
  993. col_name, col->mtype, cfg_col->mtype);
  994. err = DB_ERROR;
  995. }
  996. if (cfg_col->len != col->len) {
  997. ib_errf(thd,
  998. IB_LOG_LEVEL_ERROR,
  999. ER_TABLE_SCHEMA_MISMATCH,
  1000. "Column %s length mismatch,"
  1001. " it's %u in the table and %u"
  1002. " in the tablespace meta file",
  1003. col_name, col->len, cfg_col->len);
  1004. err = DB_ERROR;
  1005. }
  1006. if (cfg_col->mbminlen != col->mbminlen
  1007. || cfg_col->mbmaxlen != col->mbmaxlen) {
  1008. ib_errf(thd,
  1009. IB_LOG_LEVEL_ERROR,
  1010. ER_TABLE_SCHEMA_MISMATCH,
  1011. "Column %s multi-byte len mismatch,"
  1012. " it's %u-%u in the table and %u-%u"
  1013. " in the tablespace meta file",
  1014. col_name, col->mbminlen, col->mbmaxlen,
  1015. cfg_col->mbminlen, cfg_col->mbmaxlen);
  1016. err = DB_ERROR;
  1017. }
  1018. if (cfg_col->ind != col->ind) {
  1019. ib_errf(thd,
  1020. IB_LOG_LEVEL_ERROR,
  1021. ER_TABLE_SCHEMA_MISMATCH,
  1022. "Column %s position mismatch,"
  1023. " it's %u in the table and %u"
  1024. " in the tablespace meta file",
  1025. col_name, col->ind, cfg_col->ind);
  1026. err = DB_ERROR;
  1027. }
  1028. if (cfg_col->ord_part != col->ord_part) {
  1029. ib_errf(thd,
  1030. IB_LOG_LEVEL_ERROR,
  1031. ER_TABLE_SCHEMA_MISMATCH,
  1032. "Column %s ordering mismatch,"
  1033. " it's %u in the table and %u"
  1034. " in the tablespace meta file",
  1035. col_name, col->ord_part,
  1036. cfg_col->ord_part);
  1037. err = DB_ERROR;
  1038. }
  1039. if (cfg_col->max_prefix != col->max_prefix) {
  1040. ib_errf(thd,
  1041. IB_LOG_LEVEL_ERROR,
  1042. ER_TABLE_SCHEMA_MISMATCH,
  1043. "Column %s max prefix mismatch"
  1044. " it's %u in the table and %u"
  1045. " in the tablespace meta file",
  1046. col_name, col->max_prefix,
  1047. cfg_col->max_prefix);
  1048. err = DB_ERROR;
  1049. }
  1050. }
  1051. }
  1052. return(err);
  1053. }
  1054. dberr_t row_import::match_flags(THD *thd) const
  1055. {
  1056. ulint mismatch= (m_table->flags ^ m_flags) & ~DICT_TF_MASK_DATA_DIR;
  1057. if (!mismatch)
  1058. return DB_SUCCESS;
  1059. const char *msg;
  1060. if (mismatch & DICT_TF_MASK_ZIP_SSIZE)
  1061. {
  1062. if ((m_table->flags & DICT_TF_MASK_ZIP_SSIZE) &&
  1063. (m_flags & DICT_TF_MASK_ZIP_SSIZE))
  1064. {
  1065. switch (m_flags & DICT_TF_MASK_ZIP_SSIZE) {
  1066. case 0U << DICT_TF_POS_ZIP_SSIZE:
  1067. goto uncompressed;
  1068. case 1U << DICT_TF_POS_ZIP_SSIZE:
  1069. msg= "ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=1";
  1070. break;
  1071. case 2U << DICT_TF_POS_ZIP_SSIZE:
  1072. msg= "ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=2";
  1073. break;
  1074. case 3U << DICT_TF_POS_ZIP_SSIZE:
  1075. msg= "ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4";
  1076. break;
  1077. case 4U << DICT_TF_POS_ZIP_SSIZE:
  1078. msg= "ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8";
  1079. break;
  1080. case 5U << DICT_TF_POS_ZIP_SSIZE:
  1081. msg= "ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16";
  1082. break;
  1083. default:
  1084. msg= "strange KEY_BLOCK_SIZE";
  1085. }
  1086. }
  1087. else if (m_flags & DICT_TF_MASK_ZIP_SSIZE)
  1088. msg= "ROW_FORMAT=COMPRESSED";
  1089. else
  1090. goto uncompressed;
  1091. }
  1092. else
  1093. {
  1094. uncompressed:
  1095. msg= (m_flags & DICT_TF_MASK_ATOMIC_BLOBS) ? "ROW_FORMAT=DYNAMIC"
  1096. : (m_flags & DICT_TF_MASK_COMPACT) ? "ROW_FORMAT=COMPACT"
  1097. : "ROW_FORMAT=REDUNDANT";
  1098. }
  1099. ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
  1100. "Table flags don't match, server table has 0x%x and the meta-data "
  1101. "file has 0x%zx; .cfg file uses %s",
  1102. m_table->flags, m_flags, msg);
  1103. return DB_ERROR;
  1104. }
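/* The KEY_BLOCK_SIZE strings above follow from the ZIP_SSIZE encoding:
a nonzero shift size s denotes a compressed page size of 512 << s
bytes, i.e. KEY_BLOCK_SIZE = 1 << (s - 1) KiB:

	s = 1  ->   1024 bytes  ->  KEY_BLOCK_SIZE=1
	s = 3  ->   4096 bytes  ->  KEY_BLOCK_SIZE=4
	s = 5  ->  16384 bytes  ->  KEY_BLOCK_SIZE=16
*/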
  1105. /** Check if the table (and index) schema that was read from the .cfg file
  1106. matches the in memory table definition.
  1107. @param thd MySQL session variable
  1108. @return DB_SUCCESS or error code. */
  1109. dberr_t
  1110. row_import::match_schema(
  1111. THD* thd) UNIV_NOTHROW
  1112. {
  1113. /* Do some simple checks. */
  1114. if (UT_LIST_GET_LEN(m_table->indexes) != m_n_indexes) {
1115. /* If the number of indexes doesn't match then it is better
  1116. to abort the IMPORT. It is easy for the user to create a
  1117. table matching the IMPORT definition. */
  1118. ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
  1119. "Number of indexes don't match, table has " ULINTPF
  1120. " indexes but the tablespace meta-data file has "
  1121. ULINTPF " indexes",
  1122. UT_LIST_GET_LEN(m_table->indexes), m_n_indexes);
  1123. return(DB_ERROR);
  1124. }
  1125. dberr_t err = match_table_columns(thd);
  1126. if (err != DB_SUCCESS) {
  1127. return(err);
  1128. }
  1129. /* Check if the index definitions match. */
  1130. const dict_index_t* index;
  1131. for (index = UT_LIST_GET_FIRST(m_table->indexes);
  1132. index != 0;
  1133. index = UT_LIST_GET_NEXT(indexes, index)) {
  1134. dberr_t index_err;
  1135. index_err = match_index_columns(thd, index);
  1136. if (index_err != DB_SUCCESS) {
  1137. err = index_err;
  1138. }
  1139. }
  1140. return(err);
  1141. }
  1142. /**
  1143. Set the index root <space, pageno>, using index name. */
  1144. void
  1145. row_import::set_root_by_name() UNIV_NOTHROW
  1146. {
  1147. row_index_t* cfg_index = m_indexes;
  1148. for (ulint i = 0; i < m_n_indexes; ++i, ++cfg_index) {
  1149. dict_index_t* index;
  1150. const char* index_name;
  1151. index_name = reinterpret_cast<const char*>(cfg_index->m_name);
  1152. index = dict_table_get_index_on_name(m_table, index_name);
  1153. /* We've already checked that it exists. */
  1154. ut_a(index != 0);
  1155. index->page = static_cast<uint32_t>(cfg_index->m_page_no);
  1156. }
  1157. }
  1158. /**
  1159. Set the index root <space, pageno>, using a heuristic.
  1160. @return DB_SUCCESS or error code */
  1161. dberr_t
  1162. row_import::set_root_by_heuristic() UNIV_NOTHROW
  1163. {
  1164. row_index_t* cfg_index = m_indexes;
  1165. ut_a(m_n_indexes > 0);
  1166. // TODO: For now use brute force, based on ordinality
  1167. if (UT_LIST_GET_LEN(m_table->indexes) != m_n_indexes) {
  1168. ib::warn() << "Table " << m_table->name << " should have "
  1169. << UT_LIST_GET_LEN(m_table->indexes) << " indexes but"
  1170. " the tablespace has " << m_n_indexes << " indexes";
  1171. }
  1172. dict_mutex_enter_for_mysql();
  1173. ulint i = 0;
  1174. dberr_t err = DB_SUCCESS;
  1175. for (dict_index_t* index = UT_LIST_GET_FIRST(m_table->indexes);
  1176. index != 0;
  1177. index = UT_LIST_GET_NEXT(indexes, index)) {
  1178. if (index->type & DICT_FTS) {
  1179. index->type |= DICT_CORRUPT;
  1180. ib::warn() << "Skipping FTS index: " << index->name;
  1181. } else if (i < m_n_indexes) {
  1182. UT_DELETE_ARRAY(cfg_index[i].m_name);
  1183. ulint len = strlen(index->name) + 1;
  1184. cfg_index[i].m_name = UT_NEW_ARRAY_NOKEY(byte, len);
  1185. /* Trigger OOM */
  1186. DBUG_EXECUTE_IF(
  1187. "ib_import_OOM_14",
  1188. UT_DELETE_ARRAY(cfg_index[i].m_name);
  1189. cfg_index[i].m_name = NULL;
  1190. );
  1191. if (cfg_index[i].m_name == NULL) {
  1192. err = DB_OUT_OF_MEMORY;
  1193. break;
  1194. }
  1195. memcpy(cfg_index[i].m_name, index->name, len);
  1196. cfg_index[i].m_srv_index = index;
  1197. index->page = static_cast<uint32_t>(
  1198. cfg_index[i++].m_page_no);
  1199. }
  1200. }
  1201. dict_mutex_exit_for_mysql();
  1202. return(err);
  1203. }
  1204. /**
  1205. Purge delete marked records.
  1206. @return DB_SUCCESS or error code. */
  1207. dberr_t
  1208. IndexPurge::garbage_collect() UNIV_NOTHROW
  1209. {
  1210. dberr_t err;
  1211. ibool comp = dict_table_is_comp(m_index->table);
  1212. /* Open the persistent cursor and start the mini-transaction. */
  1213. open();
  1214. while ((err = next()) == DB_SUCCESS) {
  1215. rec_t* rec = btr_pcur_get_rec(&m_pcur);
  1216. ibool deleted = rec_get_deleted_flag(rec, comp);
  1217. if (!deleted) {
  1218. ++m_n_rows;
  1219. } else {
  1220. purge();
  1221. }
  1222. }
  1223. /* Close the persistent cursor and commit the mini-transaction. */
  1224. close();
  1225. return(err == DB_END_OF_INDEX ? DB_SUCCESS : err);
  1226. }
  1227. /**
  1228. Begin import, position the cursor on the first record. */
  1229. void
  1230. IndexPurge::open() UNIV_NOTHROW
  1231. {
  1232. mtr_start(&m_mtr);
  1233. mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);
  1234. btr_pcur_open_at_index_side(
  1235. true, m_index, BTR_MODIFY_LEAF, &m_pcur, true, 0, &m_mtr);
  1236. btr_pcur_move_to_next_user_rec(&m_pcur, &m_mtr);
  1237. if (rec_is_metadata(btr_pcur_get_rec(&m_pcur), *m_index)) {
  1238. ut_ad(btr_pcur_is_on_user_rec(&m_pcur));
  1239. /* Skip the metadata pseudo-record. */
  1240. } else {
  1241. btr_pcur_move_to_prev_on_page(&m_pcur);
  1242. }
  1243. }
  1244. /**
1245. Close the persistent cursor and commit the mini-transaction. */
  1246. void
  1247. IndexPurge::close() UNIV_NOTHROW
  1248. {
  1249. btr_pcur_close(&m_pcur);
  1250. mtr_commit(&m_mtr);
  1251. }
  1252. /**
  1253. Position the cursor on the next record.
  1254. @return DB_SUCCESS or error code */
  1255. dberr_t
  1256. IndexPurge::next() UNIV_NOTHROW
  1257. {
  1258. btr_pcur_move_to_next_on_page(&m_pcur);
  1259. /* When switching pages, commit the mini-transaction
  1260. in order to release the latch on the old page. */
  1261. if (!btr_pcur_is_after_last_on_page(&m_pcur)) {
  1262. return(DB_SUCCESS);
  1263. } else if (trx_is_interrupted(m_trx)) {
  1264. /* Check after every page because the check
  1265. is expensive. */
  1266. return(DB_INTERRUPTED);
  1267. }
  1268. btr_pcur_store_position(&m_pcur, &m_mtr);
  1269. mtr_commit(&m_mtr);
  1270. mtr_start(&m_mtr);
  1271. mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);
  1272. btr_pcur_restore_position(BTR_MODIFY_LEAF, &m_pcur, &m_mtr);
  1273. /* The following is based on btr_pcur_move_to_next_user_rec(). */
  1274. m_pcur.old_stored = false;
  1275. ut_ad(m_pcur.latch_mode == BTR_MODIFY_LEAF);
  1276. do {
  1277. if (btr_pcur_is_after_last_on_page(&m_pcur)) {
  1278. if (btr_pcur_is_after_last_in_tree(&m_pcur)) {
  1279. return DB_END_OF_INDEX;
  1280. }
  1281. buf_block_t* block = btr_pcur_get_block(&m_pcur);
  1282. uint32_t next_page = btr_page_get_next(block->frame);
  1283. /* MDEV-13542 FIXME: Make these checks part of
  1284. btr_pcur_move_to_next_page(), and introduce a
  1285. return status that will be checked in all callers! */
  1286. switch (next_page) {
  1287. default:
  1288. if (next_page != block->page.id().page_no()) {
  1289. break;
  1290. }
  1291. /* MDEV-20931 FIXME: Check that
  1292. next_page is within the tablespace
  1293. bounds! Also check that it is not a
  1294. change buffer bitmap page. */
  1295. /* fall through */
  1296. case 0:
  1297. case 1:
  1298. case FIL_NULL:
  1299. return DB_CORRUPTION;
  1300. }
  1301. dict_index_t* index = m_pcur.btr_cur.index;
  1302. buf_block_t* next_block = btr_block_get(
  1303. *index, next_page, BTR_MODIFY_LEAF, false,
  1304. &m_mtr);
  1305. if (UNIV_UNLIKELY(!next_block
  1306. || !fil_page_index_page_check(
  1307. next_block->frame)
  1308. || !!dict_index_is_spatial(index)
  1309. != (fil_page_get_type(
  1310. next_block->frame)
  1311. == FIL_PAGE_RTREE)
  1312. || page_is_comp(next_block->frame)
  1313. != page_is_comp(block->frame)
  1314. || btr_page_get_prev(
  1315. next_block->frame)
  1316. != block->page.id().page_no())) {
  1317. return DB_CORRUPTION;
  1318. }
  1319. btr_leaf_page_release(block, BTR_MODIFY_LEAF, &m_mtr);
  1320. page_cur_set_before_first(next_block,
  1321. &m_pcur.btr_cur.page_cur);
  1322. ut_d(page_check_dir(next_block->frame));
  1323. } else {
  1324. btr_pcur_move_to_next_on_page(&m_pcur);
  1325. }
  1326. } while (!btr_pcur_is_on_user_rec(&m_pcur));
  1327. return DB_SUCCESS;
  1328. }
  1329. /**
  1330. Store the persistent cursor position and reopen the
  1331. B-tree cursor in BTR_MODIFY_TREE mode, because the
  1332. tree structure may be changed during a pessimistic delete. */
  1333. void
  1334. IndexPurge::purge_pessimistic_delete() UNIV_NOTHROW
  1335. {
  1336. dberr_t err;
  1337. btr_pcur_restore_position(BTR_MODIFY_TREE | BTR_LATCH_FOR_DELETE,
  1338. &m_pcur, &m_mtr);
  1339. ut_ad(rec_get_deleted_flag(
  1340. btr_pcur_get_rec(&m_pcur),
  1341. dict_table_is_comp(m_index->table)));
  1342. btr_cur_pessimistic_delete(
  1343. &err, FALSE, btr_pcur_get_btr_cur(&m_pcur), 0, false, &m_mtr);
  1344. ut_a(err == DB_SUCCESS);
  1345. /* Reopen the B-tree cursor in BTR_MODIFY_LEAF mode */
  1346. mtr_commit(&m_mtr);
  1347. }
  1348. /**
1349. Purge the current delete-marked record. */
  1350. void
  1351. IndexPurge::purge() UNIV_NOTHROW
  1352. {
  1353. btr_pcur_store_position(&m_pcur, &m_mtr);
  1354. purge_pessimistic_delete();
  1355. mtr_start(&m_mtr);
  1356. mtr_set_log_mode(&m_mtr, MTR_LOG_NO_REDO);
  1357. btr_pcur_restore_position(BTR_MODIFY_LEAF, &m_pcur, &m_mtr);
  1358. }
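/* The save/restore dance above, spelled out (a latching sketch, not
additional code):

	1. btr_pcur_store_position()      remember the current record
	2. restore in BTR_MODIFY_TREE     take latches that allow
	   | BTR_LATCH_FOR_DELETE         btr_cur_pessimistic_delete()
	                                  to merge or free pages
	3. mtr_commit()                   release all page latches
	4. mtr_start() + restore in       resume the leaf-only scan where
	   BTR_MODIFY_LEAF                the cursor left off
*/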
  1359. /** Adjust the BLOB reference for a single column that is externally stored
  1360. @param rec record to update
  1361. @param offsets column offsets for the record
  1362. @param i column ordinal value
  1363. @return DB_SUCCESS or error code */
  1364. inline
  1365. dberr_t
  1366. PageConverter::adjust_cluster_index_blob_column(
  1367. rec_t* rec,
  1368. const rec_offs* offsets,
  1369. ulint i) UNIV_NOTHROW
  1370. {
  1371. ulint len;
  1372. byte* field;
  1373. field = rec_get_nth_field(rec, offsets, i, &len);
  1374. DBUG_EXECUTE_IF("ib_import_trigger_corruption_2",
  1375. len = BTR_EXTERN_FIELD_REF_SIZE - 1;);
  1376. if (len < BTR_EXTERN_FIELD_REF_SIZE) {
  1377. ib_errf(m_trx->mysql_thd, IB_LOG_LEVEL_ERROR,
  1378. ER_INNODB_INDEX_CORRUPT,
  1379. "Externally stored column(" ULINTPF
  1380. ") has a reference length of " ULINTPF
  1381. " in the cluster index %s",
  1382. i, len, m_cluster_index->name());
  1383. return(DB_CORRUPTION);
  1384. }
  1385. field += len - (BTR_EXTERN_FIELD_REF_SIZE - BTR_EXTERN_SPACE_ID);
  1386. mach_write_to_4(field, get_space_id());
  1387. if (UNIV_LIKELY_NULL(m_rec_iter.current_block()->page.zip.data)) {
  1388. page_zip_write_blob_ptr(
  1389. m_rec_iter.current_block(), rec, m_cluster_index,
  1390. offsets, i, &m_rec_iter.m_mtr);
  1391. }
  1392. return(DB_SUCCESS);
  1393. }
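/* Layout of the 20-byte external field reference patched above
(BTR_EXTERN_FIELD_REF_SIZE = 20, stored at the end of the column
value; offsets per the BTR_EXTERN_* constants):

	bytes  0..3    space id      <- rewritten to get_space_id()
	bytes  4..7    page number of the first BLOB page
	bytes  8..11   byte offset within that page
	bytes 12..19   length flags and the BLOB length

Since BTR_EXTERN_SPACE_ID == 0, field += len - (20 - 0) lands exactly
on the space id. */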
  1394. /** Adjusts the BLOB reference in the clustered index row for all externally
  1395. stored columns.
  1396. @param rec record to update
  1397. @param offsets column offsets for the record
  1398. @return DB_SUCCESS or error code */
  1399. inline
  1400. dberr_t
  1401. PageConverter::adjust_cluster_index_blob_columns(
  1402. rec_t* rec,
  1403. const rec_offs* offsets) UNIV_NOTHROW
  1404. {
  1405. ut_ad(rec_offs_any_extern(offsets));
  1406. /* Adjust the space_id in the BLOB pointers. */
  1407. for (ulint i = 0; i < rec_offs_n_fields(offsets); ++i) {
  1408. /* Only if the column is stored "externally". */
  1409. if (rec_offs_nth_extern(offsets, i)) {
  1410. dberr_t err;
  1411. err = adjust_cluster_index_blob_column(rec, offsets, i);
  1412. if (err != DB_SUCCESS) {
  1413. return(err);
  1414. }
  1415. }
  1416. }
  1417. return(DB_SUCCESS);
  1418. }
  1419. /** In the clustered index, adjust BLOB pointers as needed. Also update the
  1420. BLOB reference, write the new space id.
  1421. @param rec record to update
  1422. @param offsets column offsets for the record
  1423. @return DB_SUCCESS or error code */
  1424. inline
  1425. dberr_t
  1426. PageConverter::adjust_cluster_index_blob_ref(
  1427. rec_t* rec,
  1428. const rec_offs* offsets) UNIV_NOTHROW
  1429. {
  1430. if (rec_offs_any_extern(offsets)) {
  1431. dberr_t err;
  1432. err = adjust_cluster_index_blob_columns(rec, offsets);
  1433. if (err != DB_SUCCESS) {
  1434. return(err);
  1435. }
  1436. }
  1437. return(DB_SUCCESS);
  1438. }
  1439. /** Purge delete-marked records, only if it is possible to do so without
  1440. re-organising the B+tree.
  1441. @return true if purge succeeded */
  1442. inline bool PageConverter::purge() UNIV_NOTHROW
  1443. {
  1444. const dict_index_t* index = m_index->m_srv_index;
  1445. /* We can't have a page that is empty and not root. */
  1446. if (m_rec_iter.remove(index, m_offsets)) {
  1447. ++m_index->m_stats.m_n_purged;
  1448. return(true);
  1449. } else {
  1450. ++m_index->m_stats.m_n_purge_failed;
  1451. }
  1452. return(false);
  1453. }
  1454. /** Adjust the BLOB references and sys fields for the current record.
  1455. @param rec record to update
  1456. @param offsets column offsets for the record
  1457. @return DB_SUCCESS or error code. */
  1458. inline
  1459. dberr_t
  1460. PageConverter::adjust_cluster_record(
  1461. rec_t* rec,
  1462. const rec_offs* offsets) UNIV_NOTHROW
  1463. {
  1464. dberr_t err;
  1465. if ((err = adjust_cluster_index_blob_ref(rec, offsets)) == DB_SUCCESS) {
  1466. /* Reset DB_TRX_ID and DB_ROLL_PTR. Normally, these fields
  1467. are only written in conjunction with other changes to the
  1468. record. */
  1469. ulint trx_id_pos = m_cluster_index->n_uniq
  1470. ? m_cluster_index->n_uniq : 1;
  1471. if (UNIV_LIKELY_NULL(m_rec_iter.current_block()
  1472. ->page.zip.data)) {
  1473. page_zip_write_trx_id_and_roll_ptr(
  1474. m_rec_iter.current_block(),
  1475. rec, m_offsets, trx_id_pos,
  1476. 0, roll_ptr_t(1) << ROLL_PTR_INSERT_FLAG_POS,
  1477. &m_rec_iter.m_mtr);
  1478. } else {
  1479. ulint len;
  1480. byte* ptr = rec_get_nth_field(
  1481. rec, m_offsets, trx_id_pos, &len);
  1482. ut_ad(len == DATA_TRX_ID_LEN);
  1483. memcpy(ptr, reset_trx_id, sizeof reset_trx_id);
  1484. }
  1485. }
  1486. return(err);
  1487. }
1488. /** Update the BLOB references and write UNDO log entries for
  1489. rows that can't be purged optimistically.
  1490. @param block block to update
  1491. @retval DB_SUCCESS or error code */
  1492. inline
  1493. dberr_t
  1494. PageConverter::update_records(
  1495. buf_block_t* block) UNIV_NOTHROW
  1496. {
  1497. ibool comp = dict_table_is_comp(m_cfg->m_table);
  1498. bool clust_index = m_index->m_srv_index == m_cluster_index;
  1499. /* This will also position the cursor on the first user record. */
  1500. m_rec_iter.open(block);
  1501. while (!m_rec_iter.end()) {
  1502. rec_t* rec = m_rec_iter.current();
  1503. ibool deleted = rec_get_deleted_flag(rec, comp);
1504. /* For the clustered index we have to adjust the BLOB
1505. reference and the system fields irrespective of the
1506. delete-mark flag. The adjustment of delete-marked
1507. clustered index records is required for purge to work later. */
  1508. if (deleted || clust_index) {
  1509. m_offsets = rec_get_offsets(
  1510. rec, m_index->m_srv_index, m_offsets,
  1511. m_index->m_srv_index->n_core_fields,
  1512. ULINT_UNDEFINED, &m_heap);
  1513. }
  1514. if (clust_index) {
  1515. dberr_t err = adjust_cluster_record(rec, m_offsets);
  1516. if (err != DB_SUCCESS) {
  1517. return(err);
  1518. }
  1519. }
  1520. /* If it is a delete marked record then try an
  1521. optimistic delete. */
  1522. if (deleted) {
  1523. /* A successful purge will move the cursor to the
  1524. next record. */
  1525. if (!purge()) {
  1526. m_rec_iter.next();
  1527. }
  1528. ++m_index->m_stats.m_n_deleted;
  1529. } else {
  1530. ++m_index->m_stats.m_n_rows;
  1531. m_rec_iter.next();
  1532. }
  1533. }
  1534. return(DB_SUCCESS);
  1535. }
  1536. /** Update the space, index id, trx id.
  1537. @return DB_SUCCESS or error code */
  1538. inline
  1539. dberr_t
  1540. PageConverter::update_index_page(
  1541. buf_block_t* block) UNIV_NOTHROW
  1542. {
  1543. const page_id_t page_id(block->page.id());
  1544. if (is_free(page_id.page_no())) {
  1545. return(DB_SUCCESS);
  1546. }
  1547. buf_frame_t* page = block->frame;
  1548. const index_id_t id = btr_page_get_index_id(page);
  1549. if (id != m_index->m_id) {
  1550. row_index_t* index = find_index(id);
  1551. if (UNIV_UNLIKELY(!index)) {
  1552. if (!m_cfg->m_missing) {
  1553. ib::warn() << "Unknown index id " << id
  1554. << " on page " << page_id.page_no();
  1555. }
  1556. return DB_SUCCESS;
  1557. }
  1558. m_index = index;
  1559. }
  1560. /* If the .cfg file is missing and there is an index mismatch
  1561. then ignore the error. */
  1562. if (m_cfg->m_missing && !m_index->m_srv_index) {
  1563. return(DB_SUCCESS);
  1564. }
  1565. if (m_index && page_id.page_no() == m_index->m_page_no) {
  1566. byte *b = FIL_PAGE_DATA + PAGE_BTR_SEG_LEAF + FSEG_HDR_SPACE
  1567. + page;
  1568. mach_write_to_4(b, page_id.space());
  1569. memcpy(FIL_PAGE_DATA + PAGE_BTR_SEG_TOP + FSEG_HDR_SPACE
  1570. + page, b, 4);
  1571. if (UNIV_LIKELY_NULL(block->page.zip.data)) {
  1572. memcpy(&block->page.zip.data[FIL_PAGE_DATA
  1573. + PAGE_BTR_SEG_TOP
  1574. + FSEG_HDR_SPACE], b, 4);
  1575. memcpy(&block->page.zip.data[FIL_PAGE_DATA
  1576. + PAGE_BTR_SEG_LEAF
  1577. + FSEG_HDR_SPACE], b, 4);
  1578. }
  1579. }
  1580. #ifdef UNIV_ZIP_DEBUG
  1581. ut_a(!block->page.zip.data || page_zip_validate(&block->page.zip, page,
  1582. m_index->m_srv_index));
  1583. #endif /* UNIV_ZIP_DEBUG */
  1584. /* This has to be written to uncompressed index header. Set it to
  1585. the current index id. */
  1586. mach_write_to_8(page + (PAGE_HEADER + PAGE_INDEX_ID),
  1587. m_index->m_srv_index->id);
  1588. if (UNIV_LIKELY_NULL(block->page.zip.data)) {
  1589. memcpy(&block->page.zip.data[PAGE_HEADER + PAGE_INDEX_ID],
  1590. &block->frame[PAGE_HEADER + PAGE_INDEX_ID], 8);
  1591. }
  1592. if (m_index->m_srv_index->is_clust()) {
  1593. if (page_id.page_no() != m_index->m_srv_index->page) {
  1594. goto clear_page_max_trx_id;
  1595. }
  1596. } else if (page_is_leaf(page)) {
  1597. /* Set PAGE_MAX_TRX_ID on secondary index leaf pages. */
  1598. mach_write_to_8(&block->frame[PAGE_HEADER + PAGE_MAX_TRX_ID],
  1599. m_trx->id);
  1600. if (UNIV_LIKELY_NULL(block->page.zip.data)) {
  1601. memcpy_aligned<8>(&block->page.zip.data
  1602. [PAGE_HEADER + PAGE_MAX_TRX_ID],
  1603. &block->frame
  1604. [PAGE_HEADER + PAGE_MAX_TRX_ID], 8);
  1605. }
  1606. } else {
  1607. clear_page_max_trx_id:
  1608. /* Clear PAGE_MAX_TRX_ID so that it can be
  1609. used for other purposes in the future. IMPORT
  1610. in MySQL 5.6, 5.7 and MariaDB 10.0 and 10.1
  1611. would set the field to the transaction ID even
  1612. on clustered index pages. */
  1613. memset_aligned<8>(&block->frame[PAGE_HEADER + PAGE_MAX_TRX_ID],
  1614. 0, 8);
  1615. if (UNIV_LIKELY_NULL(block->page.zip.data)) {
  1616. memset_aligned<8>(&block->page.zip.data
  1617. [PAGE_HEADER + PAGE_MAX_TRX_ID],
  1618. 0, 8);
  1619. }
  1620. }
  1621. if (page_is_empty(page)) {
  1622. /* Only a root page can be empty. */
  1623. if (page_has_siblings(page)) {
  1624. // TODO: We should relax this and skip secondary
  1625. // indexes. Mark them as corrupt because they can
  1626. // always be rebuilt.
  1627. return(DB_CORRUPTION);
  1628. }
  1629. return(DB_SUCCESS);
  1630. }
  1631. return page_is_leaf(block->frame) ? update_records(block) : DB_SUCCESS;
  1632. }
  1633. /** Validate the space flags and update tablespace header page.
  1634. @param block block read from file, not from the buffer pool.
  1635. @retval DB_SUCCESS or error code */
  1636. inline dberr_t PageConverter::update_header(buf_block_t* block) UNIV_NOTHROW
  1637. {
  1638. byte *frame= get_frame(block);
  1639. if (memcmp_aligned<2>(FIL_PAGE_SPACE_ID + frame,
  1640. FSP_HEADER_OFFSET + FSP_SPACE_ID + frame, 4))
  1641. ib::warn() << "Space id check in the header failed: ignored";
  1642. else if (!mach_read_from_4(FIL_PAGE_SPACE_ID + frame))
  1643. return DB_CORRUPTION;
  1644. memset(frame + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION, 0, 8);
  1645. /* Write space_id to the tablespace header, page 0. */
  1646. mach_write_to_4(FIL_PAGE_SPACE_ID + frame, get_space_id());
  1647. memcpy_aligned<2>(FSP_HEADER_OFFSET + FSP_SPACE_ID + frame,
  1648. FIL_PAGE_SPACE_ID + frame, 4);
  1649. /* Write back the adjusted flags. */
  1650. mach_write_to_4(FSP_HEADER_OFFSET + FSP_SPACE_FLAGS + frame, m_space_flags);
  1651. return DB_SUCCESS;
  1652. }
  1653. /** Update the page, set the space id, max trx id and index id.
  1654. @param block block read from file
  1655. @retval DB_SUCCESS or error code */
  1656. inline
  1657. dberr_t
  1658. PageConverter::update_page(buf_block_t* block, uint16_t& page_type)
  1659. UNIV_NOTHROW
  1660. {
  1661. dberr_t err = DB_SUCCESS;
  1662. ut_ad(!block->page.zip.data == !is_compressed_table());
  1663. switch (page_type = fil_page_get_type(get_frame(block))) {
  1664. case FIL_PAGE_TYPE_FSP_HDR:
  1665. ut_a(block->page.id().page_no() == 0);
  1666. /* Work directly on the uncompressed page headers. */
  1667. return(update_header(block));
  1668. case FIL_PAGE_INDEX:
  1669. case FIL_PAGE_RTREE:
  1670. /* We need to decompress the contents into block->frame
  1671. before we can do any thing with Btree pages. */
  1672. if (is_compressed_table() && !buf_zip_decompress(block, TRUE)) {
  1673. return(DB_CORRUPTION);
  1674. }
  1675. /* fall through */
  1676. case FIL_PAGE_TYPE_INSTANT:
  1677. /* This is on every page in the tablespace. */
  1678. mach_write_to_4(
  1679. get_frame(block)
  1680. + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID, get_space_id());
  1681. /* Only update the Btree nodes. */
  1682. return(update_index_page(block));
  1683. case FIL_PAGE_TYPE_SYS:
  1684. /* This is page 0 in the system tablespace. */
  1685. return(DB_CORRUPTION);
  1686. case FIL_PAGE_TYPE_XDES:
  1687. err = set_current_xdes(
  1688. block->page.id().page_no(), get_frame(block));
  1689. /* fall through */
  1690. case FIL_PAGE_INODE:
  1691. case FIL_PAGE_TYPE_TRX_SYS:
  1692. case FIL_PAGE_IBUF_FREE_LIST:
  1693. case FIL_PAGE_TYPE_ALLOCATED:
  1694. case FIL_PAGE_IBUF_BITMAP:
  1695. case FIL_PAGE_TYPE_BLOB:
  1696. case FIL_PAGE_TYPE_ZBLOB:
  1697. case FIL_PAGE_TYPE_ZBLOB2:
  1698. /* Work directly on the uncompressed page headers. */
  1699. /* This is on every page in the tablespace. */
  1700. mach_write_to_4(
  1701. get_frame(block)
  1702. + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID, get_space_id());
  1703. return(err);
  1704. }
  1705. ib::warn() << "Unknown page type (" << page_type << ")";
  1706. return(DB_CORRUPTION);
  1707. }
  1708. /** Called for every page in the tablespace. If the page was not
  1709. updated then its state must be set to BUF_PAGE_NOT_USED.
  1710. @param block block read from file, note it is not from the buffer pool
  1711. @retval DB_SUCCESS or error code. */
  1712. dberr_t PageConverter::operator()(buf_block_t* block) UNIV_NOTHROW
  1713. {
  1714. /* If we already had an old page with matching number
  1715. in the buffer pool, evict it now, because
  1716. we no longer evict the pages on DISCARD TABLESPACE. */
  1717. buf_page_get_gen(block->page.id(), get_zip_size(),
  1718. RW_NO_LATCH, NULL, BUF_EVICT_IF_IN_POOL,
  1719. __FILE__, __LINE__, NULL, NULL);
  1720. uint16_t page_type;
  1721. if (dberr_t err = update_page(block, page_type)) {
  1722. return err;
  1723. }
  1724. const bool full_crc32 = fil_space_t::full_crc32(get_space_flags());
  1725. byte* frame = get_frame(block);
  1726. memset_aligned<8>(frame + FIL_PAGE_LSN, 0, 8);
  1727. if (!block->page.zip.data) {
  1728. buf_flush_init_for_writing(
  1729. NULL, block->frame, NULL, full_crc32);
  1730. } else if (fil_page_type_is_index(page_type)) {
  1731. buf_flush_init_for_writing(
  1732. NULL, block->page.zip.data, &block->page.zip,
  1733. full_crc32);
  1734. } else {
  1735. /* Calculate and update the checksum of non-index
  1736. pages for ROW_FORMAT=COMPRESSED tables. */
  1737. buf_flush_update_zip_checksum(
  1738. block->page.zip.data, block->zip_size());
  1739. }
  1740. return DB_SUCCESS;
  1741. }
  1742. /*****************************************************************//**
  1743. Clean up after import tablespace. */
  1744. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  1745. dberr_t
  1746. row_import_cleanup(
  1747. /*===============*/
  1748. row_prebuilt_t* prebuilt, /*!< in/out: prebuilt from handler */
  1749. trx_t* trx, /*!< in/out: transaction for import */
  1750. dberr_t err) /*!< in: error code */
  1751. {
  1752. ut_a(prebuilt->trx != trx);
  1753. if (err != DB_SUCCESS) {
  1754. dict_table_t* table = prebuilt->table;
  1755. table->file_unreadable = true;
  1756. if (table->space) {
  1757. fil_close_tablespace(table->space_id);
  1758. table->space = NULL;
  1759. }
  1760. prebuilt->trx->error_info = NULL;
  1761. ib::info() << "Discarding tablespace of table "
  1762. << table->name << ": " << err;
  1763. if (!trx->dict_operation_lock_mode) {
  1764. row_mysql_lock_data_dictionary(trx);
  1765. }
  1766. for (dict_index_t* index = UT_LIST_GET_FIRST(table->indexes);
  1767. index;
  1768. index = UT_LIST_GET_NEXT(indexes, index)) {
  1769. index->page = FIL_NULL;
  1770. }
  1771. }
  1772. ut_a(trx->dict_operation_lock_mode == RW_X_LATCH);
  1773. DBUG_EXECUTE_IF("ib_import_before_commit_crash", DBUG_SUICIDE(););
  1774. trx_commit_for_mysql(trx);
  1775. row_mysql_unlock_data_dictionary(trx);
  1776. trx->free();
  1777. prebuilt->trx->op_info = "";
  1778. DBUG_EXECUTE_IF("ib_import_before_checkpoint_crash", DBUG_SUICIDE(););
  1779. return(err);
  1780. }
  1781. /*****************************************************************//**
  1782. Report error during tablespace import. */
  1783. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  1784. dberr_t
  1785. row_import_error(
  1786. /*=============*/
  1787. row_prebuilt_t* prebuilt, /*!< in/out: prebuilt from handler */
  1788. trx_t* trx, /*!< in/out: transaction for import */
  1789. dberr_t err) /*!< in: error code */
  1790. {
  1791. if (!trx_is_interrupted(trx)) {
  1792. char table_name[MAX_FULL_NAME_LEN + 1];
  1793. innobase_format_name(
  1794. table_name, sizeof(table_name),
  1795. prebuilt->table->name.m_name);
  1796. ib_senderrf(
  1797. trx->mysql_thd, IB_LOG_LEVEL_WARN,
  1798. ER_INNODB_IMPORT_ERROR,
  1799. table_name, (ulong) err, ut_strerr(err));
  1800. }
  1801. return(row_import_cleanup(prebuilt, trx, err));
  1802. }
  1803. /*****************************************************************//**
  1804. Adjust the root page index node and leaf node segment headers and update
  1805. them with the new space id, for all the table's secondary indexes.
  1806. @return error code */
  1807. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  1808. dberr_t
  1809. row_import_adjust_root_pages_of_secondary_indexes(
  1810. /*==============================================*/
  1811. trx_t* trx, /*!< in: transaction used for
  1812. the import */
  1813. dict_table_t* table, /*!< in: table the indexes
  1814. belong to */
  1815. const row_import& cfg) /*!< Import context */
  1816. {
  1817. dict_index_t* index;
  1818. ulint n_rows_in_table;
  1819. dberr_t err = DB_SUCCESS;
  1820. /* Skip the clustered index. */
  1821. index = dict_table_get_first_index(table);
  1822. n_rows_in_table = cfg.get_n_rows(index->name);
  1823. DBUG_EXECUTE_IF("ib_import_sec_rec_count_mismatch_failure",
  1824. n_rows_in_table++;);
  1825. /* Adjust the root pages of the secondary indexes only. */
  1826. while ((index = dict_table_get_next_index(index)) != NULL) {
  1827. ut_a(!dict_index_is_clust(index));
  1828. if (!(index->type & DICT_CORRUPT)
  1829. && index->page != FIL_NULL) {
  1830. /* Update the Btree segment headers for index node and
  1831. leaf nodes in the root page. Set the new space id. */
  1832. err = btr_root_adjust_on_import(index);
  1833. } else {
  1834. ib::warn() << "Skip adjustment of root pages for"
  1835. " index " << index->name << ".";
  1836. err = DB_CORRUPTION;
  1837. }
  1838. if (err != DB_SUCCESS) {
  1839. if (index->type & DICT_CLUSTERED) {
  1840. break;
  1841. }
  1842. ib_errf(trx->mysql_thd,
  1843. IB_LOG_LEVEL_WARN,
  1844. ER_INNODB_INDEX_CORRUPT,
  1845. "Index %s not found or corrupt,"
  1846. " you should recreate this index.",
  1847. index->name());
  1848. /* Do not bail out, so that the data
  1849. can be recovered. */
  1850. err = DB_SUCCESS;
  1851. index->type |= DICT_CORRUPT;
  1852. continue;
  1853. }
  1854. /* If we failed to purge any records in the index then
  1855. do it the hard way.
  1856. TODO: We can do this in the first pass by generating UNDO log
  1857. records for the failed rows. */
  1858. if (!cfg.requires_purge(index->name)) {
  1859. continue;
  1860. }
  1861. IndexPurge purge(trx, index);
  1862. trx->op_info = "secondary: purge delete marked records";
  1863. err = purge.garbage_collect();
  1864. trx->op_info = "";
  1865. if (err != DB_SUCCESS) {
  1866. break;
  1867. } else if (purge.get_n_rows() != n_rows_in_table) {
  1868. ib_errf(trx->mysql_thd,
  1869. IB_LOG_LEVEL_WARN,
  1870. ER_INNODB_INDEX_CORRUPT,
  1871. "Index '%s' contains " ULINTPF " entries, "
  1872. "should be " ULINTPF ", you should recreate "
  1873. "this index.", index->name(),
  1874. purge.get_n_rows(), n_rows_in_table);
  1875. index->type |= DICT_CORRUPT;
  1876. /* Do not bail out, so that the data
  1877. can be recovered. */
  1878. err = DB_SUCCESS;
  1879. }
  1880. }
  1881. return(err);
  1882. }
  1883. /*****************************************************************//**
  1884. Ensure that dict_sys.row_id exceeds SELECT MAX(DB_ROW_ID). */
  1885. MY_ATTRIBUTE((nonnull)) static
  1886. void
  1887. row_import_set_sys_max_row_id(
  1888. /*==========================*/
  1889. row_prebuilt_t* prebuilt, /*!< in/out: prebuilt from
  1890. handler */
  1891. const dict_table_t* table) /*!< in: table to import */
  1892. {
  1893. const rec_t* rec;
  1894. mtr_t mtr;
  1895. btr_pcur_t pcur;
  1896. row_id_t row_id = 0;
  1897. dict_index_t* index;
  1898. index = dict_table_get_first_index(table);
  1899. ut_ad(index->is_primary());
  1900. ut_ad(dict_index_is_auto_gen_clust(index));
  1901. mtr_start(&mtr);
  1902. mtr_set_log_mode(&mtr, MTR_LOG_NO_REDO);
  1903. btr_pcur_open_at_index_side(
  1904. false, // High end
  1905. index,
  1906. BTR_SEARCH_LEAF,
  1907. &pcur,
  1908. true, // Init cursor
  1909. 0, // Leaf level
  1910. &mtr);
  1911. btr_pcur_move_to_prev_on_page(&pcur);
  1912. rec = btr_pcur_get_rec(&pcur);
  1913. /* Check for empty table. */
  1914. if (page_rec_is_infimum(rec)) {
  1915. /* The table is empty. */
  1916. } else if (rec_is_metadata(rec, *index)) {
  1917. /* The clustered index contains the metadata record only,
  1918. that is, the table is empty. */
  1919. } else {
  1920. row_id = mach_read_from_6(rec);
  1921. }
  1922. btr_pcur_close(&pcur);
  1923. mtr_commit(&mtr);
  1924. if (row_id) {
  1925. /* Update the system row id if the imported index row id is
  1926. greater than the max system row id. */
  1927. mutex_enter(&dict_sys.mutex);
  1928. if (row_id >= dict_sys.row_id) {
  1929. dict_sys.row_id = row_id + 1;
  1930. dict_hdr_flush_row_id();
  1931. }
  1932. mutex_exit(&dict_sys.mutex);
  1933. }
  1934. }
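/* For illustration: if the last user record of the imported clustered
index carries DB_ROW_ID = 41, the code above bumps dict_sys.row_id to 42
and flushes it, so that row ids generated after the import cannot collide
with the imported rows. */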
  1935. /*****************************************************************//**
  1936. Read a string from the meta-data file.
  1937. @return DB_SUCCESS or error code. */
  1938. static
  1939. dberr_t
  1940. row_import_cfg_read_string(
  1941. /*=======================*/
  1942. FILE* file, /*!< in/out: File to read from */
  1943. byte* ptr, /*!< out: string to read */
  1944. ulint max_len) /*!< in: maximum length of the output
  1945. buffer in bytes */
  1946. {
  1947. DBUG_EXECUTE_IF("ib_import_string_read_error",
  1948. errno = EINVAL; return(DB_IO_ERROR););
  1949. ulint len = 0;
  1950. while (!feof(file)) {
  1951. int ch = fgetc(file);
  1952. if (ch == EOF) {
  1953. break;
  1954. } else if (ch != 0) {
  1955. if (len < max_len) {
  1956. ptr[len++] = static_cast<byte>(ch);
  1957. } else {
  1958. break;
  1959. }
  1960. /* max_len includes the NUL byte */
  1961. } else if (len != max_len - 1) {
  1962. break;
  1963. } else {
  1964. ptr[len] = 0;
  1965. return(DB_SUCCESS);
  1966. }
  1967. }
  1968. errno = EINVAL;
  1969. return(DB_IO_ERROR);
  1970. }
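/* A minimal sketch (not part of this file) of the matching writer,
assuming that the 4-byte length field, which includes the terminating
NUL byte, is emitted separately by the caller; the helper name is
hypothetical. */
#if 0
static dberr_t row_import_cfg_write_string_sketch(FILE* file, const char* str)
{
	const size_t len = strlen(str) + 1; /* include the NUL byte */
	return fwrite(str, 1, len, file) == len ? DB_SUCCESS : DB_IO_ERROR;
}
#endif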
  1971. /*********************************************************************//**
  1972. Read the meta-data (index user fields) from the config file (the row layout is sketched after this function).
  1973. @return DB_SUCCESS or error code. */
  1974. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  1975. dberr_t
  1976. row_import_cfg_read_index_fields(
  1977. /*=============================*/
  1978. FILE* file, /*!< in: file to read from */
  1979. THD* thd, /*!< in/out: session */
  1980. row_index_t* index) /*!< Index being read in */
  1981. {
  1982. byte row[sizeof(ib_uint32_t) * 3];
  1983. ulint n_fields = index->m_n_fields;
  1984. index->m_fields = UT_NEW_ARRAY_NOKEY(dict_field_t, n_fields);
  1985. /* Trigger OOM */
  1986. DBUG_EXECUTE_IF(
  1987. "ib_import_OOM_4",
  1988. UT_DELETE_ARRAY(index->m_fields);
  1989. index->m_fields = NULL;
  1990. );
  1991. if (index->m_fields == NULL) {
  1992. return(DB_OUT_OF_MEMORY);
  1993. }
  1994. dict_field_t* field = index->m_fields;
  1995. for (ulint i = 0; i < n_fields; ++i, ++field) {
  1996. byte* ptr = row;
  1997. /* Trigger EOF */
  1998. DBUG_EXECUTE_IF("ib_import_io_read_error_1",
  1999. (void) fseek(file, 0L, SEEK_END););
  2000. if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
  2001. ib_senderrf(
  2002. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2003. (ulong) errno, strerror(errno),
  2004. "while reading index fields.");
  2005. return(DB_IO_ERROR);
  2006. }
  2007. new (field) dict_field_t();
  2008. field->prefix_len = mach_read_from_4(ptr) & ((1U << 12) - 1);
  2009. ptr += sizeof(ib_uint32_t);
  2010. field->fixed_len = mach_read_from_4(ptr) & ((1U << 10) - 1);
  2011. ptr += sizeof(ib_uint32_t);
  2012. /* Include the NUL byte in the length. */
  2013. ulint len = mach_read_from_4(ptr);
  2014. byte* name = UT_NEW_ARRAY_NOKEY(byte, len);
  2015. /* Trigger OOM */
  2016. DBUG_EXECUTE_IF(
  2017. "ib_import_OOM_5",
  2018. UT_DELETE_ARRAY(name);
  2019. name = NULL;
  2020. );
  2021. if (name == NULL) {
  2022. return(DB_OUT_OF_MEMORY);
  2023. }
  2024. field->name = reinterpret_cast<const char*>(name);
  2025. dberr_t err = row_import_cfg_read_string(file, name, len);
  2026. if (err != DB_SUCCESS) {
  2027. ib_senderrf(
  2028. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2029. (ulong) errno, strerror(errno),
  2030. "while parsing table name.");
  2031. return(err);
  2032. }
  2033. }
  2034. return(DB_SUCCESS);
  2035. }
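/* For reference, the per-field row consumed by the loop above
(all integers are big-endian, 4 bytes each):
[0..3] prefix_len; [4..7] fixed_len; [8..11] name length including the
terminating NUL byte; followed by the NUL-terminated field name. */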
  2036. /*****************************************************************//**
  2037. Read the index names and root page numbers of the indexes and set the values.
  2038. Row format [root_page_no, len of str, str ... ] (the full layout is sketched after this function).
  2039. @return DB_SUCCESS or error code. */
  2040. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  2041. dberr_t
  2042. row_import_read_index_data(
  2043. /*=======================*/
  2044. FILE* file, /*!< in: File to read from */
  2045. THD* thd, /*!< in: session */
  2046. row_import* cfg) /*!< in/out: meta-data read */
  2047. {
  2048. byte* ptr;
  2049. row_index_t* cfg_index;
  2050. byte row[sizeof(index_id_t) + sizeof(ib_uint32_t) * 9];
  2051. /* FIXME: What is the max value? */
  2052. ut_a(cfg->m_n_indexes > 0);
  2053. ut_a(cfg->m_n_indexes < 1024);
  2054. cfg->m_indexes = UT_NEW_ARRAY_NOKEY(row_index_t, cfg->m_n_indexes);
  2055. /* Trigger OOM */
  2056. DBUG_EXECUTE_IF(
  2057. "ib_import_OOM_6",
  2058. UT_DELETE_ARRAY(cfg->m_indexes);
  2059. cfg->m_indexes = NULL;
  2060. );
  2061. if (cfg->m_indexes == NULL) {
  2062. return(DB_OUT_OF_MEMORY);
  2063. }
  2064. memset(cfg->m_indexes, 0x0, sizeof(*cfg->m_indexes) * cfg->m_n_indexes);
  2065. cfg_index = cfg->m_indexes;
  2066. for (ulint i = 0; i < cfg->m_n_indexes; ++i, ++cfg_index) {
  2067. /* Trigger EOF */
  2068. DBUG_EXECUTE_IF("ib_import_io_read_error_2",
  2069. (void) fseek(file, 0L, SEEK_END););
  2070. /* Read the index data. */
  2071. size_t n_bytes = fread(row, 1, sizeof(row), file);
  2072. /* Trigger EOF */
  2073. DBUG_EXECUTE_IF("ib_import_io_read_error",
  2074. (void) fseek(file, 0L, SEEK_END););
  2075. if (n_bytes != sizeof(row)) {
  2076. char msg[BUFSIZ];
  2077. snprintf(msg, sizeof(msg),
  2078. "while reading index meta-data, expected "
  2079. "to read " ULINTPF
  2080. " bytes but read only " ULINTPF " bytes",
  2081. sizeof(row), n_bytes);
  2082. ib_senderrf(
  2083. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2084. (ulong) errno, strerror(errno), msg);
  2085. ib::error() << "IO Error: " << msg;
  2086. return(DB_IO_ERROR);
  2087. }
  2088. ptr = row;
  2089. cfg_index->m_id = mach_read_from_8(ptr);
  2090. ptr += sizeof(index_id_t);
  2091. cfg_index->m_space = mach_read_from_4(ptr);
  2092. ptr += sizeof(ib_uint32_t);
  2093. cfg_index->m_page_no = mach_read_from_4(ptr);
  2094. ptr += sizeof(ib_uint32_t);
  2095. cfg_index->m_type = mach_read_from_4(ptr);
  2096. ptr += sizeof(ib_uint32_t);
  2097. cfg_index->m_trx_id_offset = mach_read_from_4(ptr);
  2098. if (cfg_index->m_trx_id_offset != mach_read_from_4(ptr)) {
  2099. ut_ad(0);
  2100. /* Overflow. Pretend that the clustered index
  2101. has a variable-length PRIMARY KEY. */
  2102. cfg_index->m_trx_id_offset = 0;
  2103. }
  2104. ptr += sizeof(ib_uint32_t);
  2105. cfg_index->m_n_user_defined_cols = mach_read_from_4(ptr);
  2106. ptr += sizeof(ib_uint32_t);
  2107. cfg_index->m_n_uniq = mach_read_from_4(ptr);
  2108. ptr += sizeof(ib_uint32_t);
  2109. cfg_index->m_n_nullable = mach_read_from_4(ptr);
  2110. ptr += sizeof(ib_uint32_t);
  2111. cfg_index->m_n_fields = mach_read_from_4(ptr);
  2112. ptr += sizeof(ib_uint32_t);
  2113. /* The NUL byte is included in the name length. */
  2114. ulint len = mach_read_from_4(ptr);
  2115. if (len > OS_FILE_MAX_PATH) {
  2116. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  2117. ER_INNODB_INDEX_CORRUPT,
  2118. "Index name length (" ULINTPF ") is too long, "
  2119. "the meta-data is corrupt", len);
  2120. return(DB_CORRUPTION);
  2121. }
  2122. cfg_index->m_name = UT_NEW_ARRAY_NOKEY(byte, len);
  2123. /* Trigger OOM */
  2124. DBUG_EXECUTE_IF(
  2125. "ib_import_OOM_7",
  2126. UT_DELETE_ARRAY(cfg_index->m_name);
  2127. cfg_index->m_name = NULL;
  2128. );
  2129. if (cfg_index->m_name == NULL) {
  2130. return(DB_OUT_OF_MEMORY);
  2131. }
  2132. dberr_t err;
  2133. err = row_import_cfg_read_string(file, cfg_index->m_name, len);
  2134. if (err != DB_SUCCESS) {
  2135. ib_senderrf(
  2136. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2137. (ulong) errno, strerror(errno),
  2138. "while parsing index name.");
  2139. return(err);
  2140. }
  2141. err = row_import_cfg_read_index_fields(file, thd, cfg_index);
  2142. if (err != DB_SUCCESS) {
  2143. return(err);
  2144. }
  2145. }
  2146. return(DB_SUCCESS);
  2147. }
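/* For reference, the per-index row consumed by the loop above
(all integers are big-endian):
[0..7] index id; [8..11] space id; [12..15] root page number;
[16..19] index type; [20..23] trx id offset; [24..27] number of
user-defined columns; [28..31] number of unique fields; [32..35] number
of nullable fields; [36..39] number of fields; [40..43] name length
including the terminating NUL byte; followed by the NUL-terminated
index name and the index field rows. */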
  2148. /*****************************************************************//**
  2149. Read the number of indexes and their meta-data for the v1 format.
  2150. @return DB_SUCCESS or error code. */
  2151. static
  2152. dberr_t
  2153. row_import_read_indexes(
  2154. /*====================*/
  2155. FILE* file, /*!< in: File to read from */
  2156. THD* thd, /*!< in: session */
  2157. row_import* cfg) /*!< in/out: meta-data read */
  2158. {
  2159. byte row[sizeof(ib_uint32_t)];
  2160. /* Trigger EOF */
  2161. DBUG_EXECUTE_IF("ib_import_io_read_error_3",
  2162. (void) fseek(file, 0L, SEEK_END););
  2163. /* Read the number of indexes. */
  2164. if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
  2165. ib_senderrf(
  2166. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2167. (ulong) errno, strerror(errno),
  2168. "while reading number of indexes.");
  2169. return(DB_IO_ERROR);
  2170. }
  2171. cfg->m_n_indexes = mach_read_from_4(row);
  2172. if (cfg->m_n_indexes == 0) {
  2173. ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2174. "Number of indexes in meta-data file is 0");
  2175. return(DB_CORRUPTION);
  2176. } else if (cfg->m_n_indexes > 1024) {
  2177. /* FIXME: What is the upper limit? */
  2178. ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2179. "Number of indexes in meta-data file is too high: "
  2180. ULINTPF, cfg->m_n_indexes);
  2181. cfg->m_n_indexes = 0;
  2182. return(DB_CORRUPTION);
  2183. }
  2184. return(row_import_read_index_data(file, thd, cfg));
  2185. }
  2186. /*********************************************************************//**
  2187. Read the meta-data (table columns) from the config file. Deserialise the
  2188. dict_col_t contents and the column name (layout sketched after this function). */
  2189. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  2190. dberr_t
  2191. row_import_read_columns(
  2192. /*====================*/
  2193. FILE* file, /*!< in: file to read from */
  2194. THD* thd, /*!< in/out: session */
  2195. row_import* cfg) /*!< in/out: meta-data read */
  2196. {
  2197. dict_col_t* col;
  2198. byte row[sizeof(ib_uint32_t) * 8];
  2199. /* FIXME: What should the upper limit be? */
  2200. ut_a(cfg->m_n_cols > 0);
  2201. ut_a(cfg->m_n_cols < 1024);
  2202. cfg->m_cols = UT_NEW_ARRAY_NOKEY(dict_col_t, cfg->m_n_cols);
  2203. /* Trigger OOM */
  2204. DBUG_EXECUTE_IF(
  2205. "ib_import_OOM_8",
  2206. UT_DELETE_ARRAY(cfg->m_cols);
  2207. cfg->m_cols = NULL;
  2208. );
  2209. if (cfg->m_cols == NULL) {
  2210. return(DB_OUT_OF_MEMORY);
  2211. }
  2212. cfg->m_col_names = UT_NEW_ARRAY_NOKEY(byte*, cfg->m_n_cols);
  2213. /* Trigger OOM */
  2214. DBUG_EXECUTE_IF(
  2215. "ib_import_OOM_9",
  2216. UT_DELETE_ARRAY(cfg->m_col_names);
  2217. cfg->m_col_names = NULL;
  2218. );
  2219. if (cfg->m_col_names == NULL) {
  2220. return(DB_OUT_OF_MEMORY);
  2221. }
  2222. memset(cfg->m_cols, 0x0, sizeof(*cfg->m_cols) * cfg->m_n_cols);
  2223. memset(cfg->m_col_names, 0x0, sizeof(*cfg->m_col_names) * cfg->m_n_cols);
  2224. col = cfg->m_cols;
  2225. for (ulint i = 0; i < cfg->m_n_cols; ++i, ++col) {
  2226. byte* ptr = row;
  2227. /* Trigger EOF */
  2228. DBUG_EXECUTE_IF("ib_import_io_read_error_4",
  2229. (void) fseek(file, 0L, SEEK_END););
  2230. if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
  2231. ib_senderrf(
  2232. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2233. (ulong) errno, strerror(errno),
  2234. "while reading table column meta-data.");
  2235. return(DB_IO_ERROR);
  2236. }
  2237. col->prtype = mach_read_from_4(ptr);
  2238. ptr += sizeof(ib_uint32_t);
  2239. col->mtype = static_cast<byte>(mach_read_from_4(ptr));
  2240. ptr += sizeof(ib_uint32_t);
  2241. col->len = static_cast<uint16_t>(mach_read_from_4(ptr));
  2242. ptr += sizeof(ib_uint32_t);
  2243. uint32_t mbminmaxlen = mach_read_from_4(ptr);
  2244. col->mbmaxlen = (mbminmaxlen / 5) & 7;
  2245. col->mbminlen = (mbminmaxlen % 5) & 7;
  2246. ptr += sizeof(ib_uint32_t);
  2247. col->ind = mach_read_from_4(ptr) & dict_index_t::MAX_N_FIELDS;
  2248. ptr += sizeof(ib_uint32_t);
  2249. col->ord_part = mach_read_from_4(ptr) & 1;
  2250. ptr += sizeof(ib_uint32_t);
  2251. col->max_prefix = mach_read_from_4(ptr) & ((1U << 12) - 1);
  2252. ptr += sizeof(ib_uint32_t);
  2253. /* Read in the column name as [len, byte array]. The len
  2254. includes the NUL byte. */
  2255. ulint len = mach_read_from_4(ptr);
  2256. /* FIXME: What is the maximum column name length? */
  2257. if (len == 0 || len > 128) {
  2258. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  2259. ER_IO_READ_ERROR,
  2260. "Column name length " ULINTPF ", is invalid",
  2261. len);
  2262. return(DB_CORRUPTION);
  2263. }
  2264. cfg->m_col_names[i] = UT_NEW_ARRAY_NOKEY(byte, len);
  2265. /* Trigger OOM */
  2266. DBUG_EXECUTE_IF(
  2267. "ib_import_OOM_10",
  2268. UT_DELETE_ARRAY(cfg->m_col_names[i]);
  2269. cfg->m_col_names[i] = NULL;
  2270. );
  2271. if (cfg->m_col_names[i] == NULL) {
  2272. return(DB_OUT_OF_MEMORY);
  2273. }
  2274. dberr_t err;
  2275. err = row_import_cfg_read_string(
  2276. file, cfg->m_col_names[i], len);
  2277. if (err != DB_SUCCESS) {
  2278. ib_senderrf(
  2279. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2280. (ulong) errno, strerror(errno),
  2281. "while parsing table column name.");
  2282. return(err);
  2283. }
  2284. }
  2285. return(DB_SUCCESS);
  2286. }
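/* For reference, the per-column row consumed by the loop above consists
of eight big-endian 4-byte integers: prtype, mtype, len, mbminmaxlen,
ind, ord_part, max_prefix, and the name length including the terminating
NUL byte, followed by the NUL-terminated column name.

A worked example of the mbminmaxlen encoding, which packs the values as
mbmaxlen * 5 + mbminlen: a utf8mb3 column (mbminlen = 1, mbmaxlen = 3)
is stored as 3 * 5 + 1 = 16, and the decode above recovers
mbmaxlen = 16 / 5 = 3 and mbminlen = 16 % 5 = 1. */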
  2287. /*****************************************************************//**
  2288. Read the contents of the <tablespace>.cfg file.
  2289. @return DB_SUCCESS or error code. */
  2290. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  2291. dberr_t
  2292. row_import_read_v1(
  2293. /*===============*/
  2294. FILE* file, /*!< in: File to read from */
  2295. THD* thd, /*!< in: session */
  2296. row_import* cfg) /*!< out: meta data */
  2297. {
  2298. byte value[sizeof(ib_uint32_t)];
  2299. /* Trigger EOF */
  2300. DBUG_EXECUTE_IF("ib_import_io_read_error_5",
  2301. (void) fseek(file, 0L, SEEK_END););
  2302. /* Read the hostname where the tablespace was exported. */
  2303. if (fread(value, 1, sizeof(value), file) != sizeof(value)) {
  2304. ib_senderrf(
  2305. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2306. (ulong) errno, strerror(errno),
  2307. "while reading meta-data export hostname length.");
  2308. return(DB_IO_ERROR);
  2309. }
  2310. ulint len = mach_read_from_4(value);
  2311. /* NUL byte is part of name length. */
  2312. cfg->m_hostname = UT_NEW_ARRAY_NOKEY(byte, len);
  2313. /* Trigger OOM */
  2314. DBUG_EXECUTE_IF(
  2315. "ib_import_OOM_1",
  2316. UT_DELETE_ARRAY(cfg->m_hostname);
  2317. cfg->m_hostname = NULL;
  2318. );
  2319. if (cfg->m_hostname == NULL) {
  2320. return(DB_OUT_OF_MEMORY);
  2321. }
  2322. dberr_t err = row_import_cfg_read_string(file, cfg->m_hostname, len);
  2323. if (err != DB_SUCCESS) {
  2324. ib_senderrf(
  2325. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2326. (ulong) errno, strerror(errno),
  2327. "while parsing export hostname.");
  2328. return(err);
  2329. }
  2330. /* Trigger EOF */
  2331. DBUG_EXECUTE_IF("ib_import_io_read_error_6",
  2332. (void) fseek(file, 0L, SEEK_END););
  2333. /* Read the table name of tablespace that was exported. */
  2334. if (fread(value, 1, sizeof(value), file) != sizeof(value)) {
  2335. ib_senderrf(
  2336. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2337. (ulong) errno, strerror(errno),
  2338. "while reading meta-data table name length.");
  2339. return(DB_IO_ERROR);
  2340. }
  2341. len = mach_read_from_4(value);
  2342. /* NUL byte is part of name length. */
  2343. cfg->m_table_name = UT_NEW_ARRAY_NOKEY(byte, len);
  2344. /* Trigger OOM */
  2345. DBUG_EXECUTE_IF(
  2346. "ib_import_OOM_2",
  2347. UT_DELETE_ARRAY(cfg->m_table_name);
  2348. cfg->m_table_name = NULL;
  2349. );
  2350. if (cfg->m_table_name == NULL) {
  2351. return(DB_OUT_OF_MEMORY);
  2352. }
  2353. err = row_import_cfg_read_string(file, cfg->m_table_name, len);
  2354. if (err != DB_SUCCESS) {
  2355. ib_senderrf(
  2356. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2357. (ulong) errno, strerror(errno),
  2358. "while parsing table name.");
  2359. return(err);
  2360. }
  2361. ib::info() << "Importing tablespace for table '" << cfg->m_table_name
  2362. << "' that was exported from host '" << cfg->m_hostname << "'";
  2363. byte row[sizeof(ib_uint32_t) * 3];
  2364. /* Trigger EOF */
  2365. DBUG_EXECUTE_IF("ib_import_io_read_error_7",
  2366. (void) fseek(file, 0L, SEEK_END););
  2367. /* Read the autoinc value. */
  2368. if (fread(row, 1, sizeof(ib_uint64_t), file) != sizeof(ib_uint64_t)) {
  2369. ib_senderrf(
  2370. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2371. (ulong) errno, strerror(errno),
  2372. "while reading autoinc value.");
  2373. return(DB_IO_ERROR);
  2374. }
  2375. cfg->m_autoinc = mach_read_from_8(row);
  2376. /* Trigger EOF */
  2377. DBUG_EXECUTE_IF("ib_import_io_read_error_8",
  2378. (void) fseek(file, 0L, SEEK_END););
  2379. /* Read the tablespace page size. */
  2380. if (fread(row, 1, sizeof(row), file) != sizeof(row)) {
  2381. ib_senderrf(
  2382. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2383. (ulong) errno, strerror(errno),
  2384. "while reading meta-data header.");
  2385. return(DB_IO_ERROR);
  2386. }
  2387. byte* ptr = row;
  2388. const ulint logical_page_size = mach_read_from_4(ptr);
  2389. ptr += sizeof(ib_uint32_t);
  2390. if (logical_page_size != srv_page_size) {
  2391. ib_errf(thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
  2392. "Tablespace to be imported has a different"
  2393. " page size than this server. Server page size"
  2394. " is %lu, whereas tablespace page size"
  2395. " is " ULINTPF,
  2396. srv_page_size,
  2397. logical_page_size);
  2398. return(DB_ERROR);
  2399. }
  2400. cfg->m_flags = mach_read_from_4(ptr);
  2401. ptr += sizeof(ib_uint32_t);
  2402. cfg->m_zip_size = dict_tf_get_zip_size(cfg->m_flags);
  2403. cfg->m_n_cols = mach_read_from_4(ptr);
  2404. if (!dict_tf_is_valid(cfg->m_flags)) {
  2405. ib_errf(thd, IB_LOG_LEVEL_ERROR,
  2406. ER_TABLE_SCHEMA_MISMATCH,
  2407. "Invalid table flags: " ULINTPF, cfg->m_flags);
  2408. return(DB_CORRUPTION);
  2409. }
  2410. err = row_import_read_columns(file, thd, cfg);
  2411. if (err == DB_SUCCESS) {
  2412. err = row_import_read_indexes(file, thd, cfg);
  2413. }
  2414. return(err);
  2415. }
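/* Putting the pieces together, the v1 .cfg layout consumed above is,
after the 4-byte version number read by row_import_read_meta_data():
hostname length + NUL-terminated hostname; table name length +
NUL-terminated table name; autoinc value (8 bytes); page size, table
flags and number of columns (4 bytes each); the column meta-data rows
(see row_import_read_columns()); and the index meta-data rows (see
row_import_read_index_data()). All length fields are 4 bytes and
include the terminating NUL byte. */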
  2416. /**
  2417. Read the contents of the <tablespace>.cfg file.
  2418. @return DB_SUCCESS or error code. */
  2419. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  2420. dberr_t
  2421. row_import_read_meta_data(
  2422. /*======================*/
  2423. FILE* file, /*!< in: File to read from */
  2424. THD* thd, /*!< in: session */
  2425. row_import& cfg) /*!< out: contents of the .cfg file */
  2426. {
  2427. byte row[sizeof(ib_uint32_t)];
  2428. /* Trigger EOF */
  2429. DBUG_EXECUTE_IF("ib_import_io_read_error_9",
  2430. (void) fseek(file, 0L, SEEK_END););
  2431. if (fread(&row, 1, sizeof(row), file) != sizeof(row)) {
  2432. ib_senderrf(
  2433. thd, IB_LOG_LEVEL_ERROR, ER_IO_READ_ERROR,
  2434. (ulong) errno, strerror(errno),
  2435. "while reading meta-data version.");
  2436. return(DB_IO_ERROR);
  2437. }
  2438. /* Check the version number. */
  2439. switch (mach_read_from_4(row)) {
  2440. case IB_EXPORT_CFG_VERSION_V1:
  2441. return(row_import_read_v1(file, thd, &cfg));
  2442. default:
  2443. ib_senderrf(thd, IB_LOG_LEVEL_ERROR, ER_NOT_SUPPORTED_YET,
  2444. "meta-data version");
  2445. }
  2446. return(DB_ERROR);
  2447. }
  2448. #define BTR_BLOB_HDR_PART_LEN 0 /*!< BLOB part len on this page */
  2449. #define BTR_BLOB_HDR_NEXT_PAGE_NO 4 /*!< next BLOB part page no,
  2450. FIL_NULL if none */
  2451. #define BTR_BLOB_HDR_SIZE 8 /*!< Size of a BLOB part header, in bytes */
  2452. /* decrypt and decompress page if needed */
  2453. static dberr_t decrypt_decompress(fil_space_crypt_t *space_crypt,
  2454. size_t space_flags, span<byte> page,
  2455. size_t space_id, byte *page_compress_buf)
  2456. {
  2457. auto *data= page.data();
  2458. if (space_crypt && space_crypt->should_encrypt())
  2459. {
  2460. if (!buf_page_verify_crypt_checksum(data, space_flags))
  2461. return DB_CORRUPTION;
  2462. if (dberr_t err= fil_space_decrypt(space_id, space_crypt, data,
  2463. page.size(), space_flags, data))
  2464. return err;
  2465. }
  2466. bool page_compressed= false;
  2467. if (fil_space_t::full_crc32(space_flags) &&
  2468. fil_space_t::is_compressed(space_flags))
  2469. page_compressed= buf_page_is_compressed(data, space_flags);
  2470. else
  2471. {
  2472. switch (fil_page_get_type(data)) {
  2473. case FIL_PAGE_PAGE_COMPRESSED:
  2474. case FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED:
  2475. page_compressed= true;
  2476. }
  2477. }
  2478. if (page_compressed)
  2479. {
  2480. auto compress_length=
  2481. fil_page_decompress(page_compress_buf, data, space_flags);
  2482. ut_ad(compress_length != srv_page_size);
  2483. if (compress_length == 0)
  2484. return DB_CORRUPTION;
  2485. }
  2486. return DB_SUCCESS;
  2487. }
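/* A minimal usage sketch, mirroring the calls made later in this file:
read one physical page from the data file, then undo any encryption and
page compression in place before interpreting the page. The variable
names are placeholders for the caller's context. */
#if 0
	if (dberr_t err= os_file_read_no_error_handling(
	        IORequestReadPartial, file, page, page_no * physical_size,
	        physical_size, nullptr))
		return err;
	if (dberr_t err= decrypt_decompress(space_crypt, space_flags,
	                                    {page, physical_size},
	                                    space_id, page_compress_buf))
		return err;
#endif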
  2488. static size_t get_buf_size()
  2489. {
  2490. return srv_page_size
  2491. #ifdef HAVE_LZO
  2492. + LZO1X_1_15_MEM_COMPRESS
  2493. #elif defined HAVE_SNAPPY
  2494. + snappy_max_compressed_length(srv_page_size)
  2495. #endif
  2496. ;
  2497. }
  2498. /* Find and parse the instant ALTER TABLE metadata, performing various
  2499. checks, and apply it to the dict_table_t
  2500. @return DB_SUCCESS or error code */
  2501. static dberr_t handle_instant_metadata(dict_table_t *table,
  2502. const row_import &cfg)
  2503. {
  2504. dict_get_and_save_data_dir_path(table, false);
  2505. char *filepath;
  2506. if (DICT_TF_HAS_DATA_DIR(table->flags))
  2507. {
  2508. ut_a(table->data_dir_path);
  2509. filepath=
  2510. fil_make_filepath(table->data_dir_path, table->name.m_name, IBD, true);
  2511. }
  2512. else
  2513. filepath= fil_make_filepath(nullptr, table->name.m_name, IBD, false);
  2514. if (!filepath)
  2515. return DB_OUT_OF_MEMORY;
  2516. SCOPE_EXIT([filepath]() { ut_free(filepath); });
  2517. bool success;
  2518. auto file= os_file_create_simple_no_error_handling(
  2519. innodb_data_file_key, filepath, OS_FILE_OPEN, OS_FILE_READ_WRITE, false,
  2520. &success);
  2521. if (!success)
  2522. return DB_IO_ERROR;
  2523. if (os_file_get_size(file) < srv_page_size)
  2524. return DB_CORRUPTION;
  2525. SCOPE_EXIT([&file]() { os_file_close(file); });
  2526. std::unique_ptr<byte[], decltype(&aligned_free)> first_page(
  2527. static_cast<byte *>(aligned_malloc(srv_page_size, srv_page_size)),
  2528. &aligned_free);
  2529. if (dberr_t err= os_file_read_no_error_handling(IORequestReadPartial,
  2530. file, first_page.get(), 0,
  2531. srv_page_size, nullptr))
  2532. return err;
  2533. auto space_flags= fsp_header_get_flags(first_page.get());
  2534. if (!fil_space_t::is_valid_flags(space_flags, true))
  2535. {
  2536. auto cflags= fsp_flags_convert_from_101(space_flags);
  2537. if (cflags == ULINT_UNDEFINED)
  2538. return invalid_space_flags(space_flags);
  2539. space_flags= static_cast<decltype(space_flags)>(cflags);
  2540. }
  2541. if (!cfg.m_missing)
  2542. {
  2543. if (dberr_t err= cfg.match_flags(current_thd))
  2544. return err;
  2545. }
  2546. const unsigned zip_size= fil_space_t::zip_size(space_flags);
  2547. const unsigned physical_size= zip_size ? zip_size : unsigned(srv_page_size);
  2548. ut_ad(physical_size <= UNIV_PAGE_SIZE_MAX);
  2549. const uint32_t space_id= page_get_space_id(first_page.get());
  2550. auto *space_crypt= fil_space_read_crypt_data(zip_size, first_page.get());
  2551. SCOPE_EXIT([&space_crypt]() {
  2552. if (space_crypt)
  2553. fil_space_destroy_crypt_data(&space_crypt);
  2554. });
  2555. std::unique_ptr<byte[], decltype(&aligned_free)> page(
  2556. static_cast<byte *>(
  2557. aligned_malloc(UNIV_PAGE_SIZE_MAX, UNIV_PAGE_SIZE_MAX)),
  2558. &aligned_free);
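/* Page 3 is expected to be the root page of the clustered index in a
file-per-table tablespace: pages 0-2 hold the FSP header, the change
buffer bitmap and the file segment inode page. */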
  2559. if (dberr_t err= os_file_read_no_error_handling(
  2560. IORequestReadPartial, file, page.get(), 3 * physical_size,
  2561. physical_size, nullptr))
  2562. return err;
  2563. std::unique_ptr<byte[]> page_compress_buf(new byte[get_buf_size()]);
  2564. if (dberr_t err= decrypt_decompress(space_crypt, space_flags,
  2565. {page.get(), static_cast<size_t>
  2566. (physical_size)},
  2567. space_id, page_compress_buf.get()))
  2568. return err;
  2569. if (table->supports_instant())
  2570. {
  2571. dict_index_t *index= dict_table_get_first_index(table);
  2572. auto tmp1= table->space_id;
  2573. table->space_id= page_get_space_id(page.get());
  2574. SCOPE_EXIT([tmp1, table]() { table->space_id= tmp1; });
  2575. auto tmp2= index->page;
  2576. index->page= page_get_page_no(page.get());
  2577. SCOPE_EXIT([tmp2, index]() { index->page= tmp2; });
  2578. if (!page_is_comp(page.get()) != !dict_table_is_comp(table))
  2579. {
  2580. ib_errf(current_thd, IB_LOG_LEVEL_ERROR, ER_TABLE_SCHEMA_MISMATCH,
  2581. "ROW_FORMAT mismatch");
  2582. return DB_CORRUPTION;
  2583. }
  2584. if (btr_cur_instant_root_init(index, page.get()))
  2585. return DB_ERROR;
  2586. ut_ad(index->n_core_null_bytes != dict_index_t::NO_CORE_NULL_BYTES);
  2587. if (fil_page_get_type(page.get()) == FIL_PAGE_INDEX)
  2588. {
  2589. ut_ad(!index->is_instant());
  2590. return DB_SUCCESS;
  2591. }
  2592. mem_heap_t *heap= NULL;
  2593. SCOPE_EXIT([&heap]() {
  2594. if (heap)
  2595. mem_heap_free(heap);
  2596. });
  2597. while (btr_page_get_level(page.get()) != 0)
  2598. {
  2599. const rec_t *rec= page_rec_get_next(page_get_infimum_rec(page.get()));
  2600. /* Relax the assertion in rec_init_offsets(). */
  2601. ut_ad(!index->in_instant_init);
  2602. ut_d(index->in_instant_init= true);
  2603. rec_offs *offsets=
  2604. rec_get_offsets(rec, index, nullptr, 0, ULINT_UNDEFINED, &heap);
  2605. ut_d(index->in_instant_init= false);
  2606. uint64_t child_page_no= btr_node_ptr_get_child_page_no(rec, offsets);
  2607. if (dberr_t err=
  2608. os_file_read_no_error_handling(IORequestReadPartial, file,
  2609. page.get(),
  2610. child_page_no * physical_size,
  2611. physical_size, nullptr))
  2612. return err;
  2613. if (dberr_t err= decrypt_decompress(space_crypt, space_flags,
  2614. {page.get(), static_cast<size_t>
  2615. (physical_size)}, space_id,
  2616. page_compress_buf.get()))
  2617. return err;
  2618. }
  2619. const auto *rec= page_rec_get_next(page_get_infimum_rec(page.get()));
  2620. const auto comp= dict_table_is_comp(index->table);
  2621. const auto info_bits= rec_get_info_bits(rec, comp);
  2622. if (page_rec_is_supremum(rec) || !(info_bits & REC_INFO_MIN_REC_FLAG))
  2623. {
  2624. ib::error() << "Table " << index->table->name
  2625. << " is missing instant ALTER metadata";
  2626. index->table->corrupted= true;
  2627. return DB_CORRUPTION;
  2628. }
  2629. if ((info_bits & ~REC_INFO_DELETED_FLAG) != REC_INFO_MIN_REC_FLAG ||
  2630. (comp && rec_get_status(rec) != REC_STATUS_INSTANT))
  2631. {
  2632. incompatible:
  2633. ib::error() << "Table " << index->table->name
  2634. << " contains unrecognizable instant ALTER metadata";
  2635. index->table->corrupted= true;
  2636. return DB_CORRUPTION;
  2637. }
  2638. if (info_bits & REC_INFO_DELETED_FLAG)
  2639. {
  2640. ulint trx_id_offset= index->trx_id_offset;
  2641. ut_ad(index->n_uniq);
  2642. if (trx_id_offset)
  2643. {
  2644. }
  2645. else if (index->table->not_redundant())
  2646. {
  2647. for (uint i= index->n_uniq; i--;)
  2648. trx_id_offset+= index->fields[i].fixed_len;
  2649. }
  2650. else if (rec_get_1byte_offs_flag(rec))
  2651. {
  2652. trx_id_offset= rec_1_get_field_end_info(rec, index->n_uniq - 1);
  2653. ut_ad(!(trx_id_offset & REC_1BYTE_SQL_NULL_MASK));
  2654. trx_id_offset&= ~REC_1BYTE_SQL_NULL_MASK;
  2655. }
  2656. else
  2657. {
  2658. trx_id_offset= rec_2_get_field_end_info(rec, index->n_uniq - 1);
  2659. ut_ad(!(trx_id_offset & REC_2BYTE_SQL_NULL_MASK));
  2660. trx_id_offset&= ~REC_2BYTE_SQL_NULL_MASK;
  2661. }
  2662. const byte *ptr=
  2663. rec + trx_id_offset + (DATA_TRX_ID_LEN + DATA_ROLL_PTR_LEN);
  2664. if (mach_read_from_4(ptr + BTR_EXTERN_LEN))
  2665. goto incompatible;
  2666. uint len= mach_read_from_4(ptr + BTR_EXTERN_LEN + 4);
  2667. if (!len || mach_read_from_4(ptr + BTR_EXTERN_OFFSET) != FIL_PAGE_DATA)
  2668. goto incompatible;
  2669. std::unique_ptr<byte[], decltype(&aligned_free)>
  2670. second_page(static_cast<byte*>(aligned_malloc(physical_size,
  2671. physical_size)),
  2672. &aligned_free);
  2673. if (dberr_t err=
  2674. os_file_read_no_error_handling(IORequestReadPartial, file,
  2675. second_page.get(), physical_size *
  2676. mach_read_from_4(ptr +
  2677. BTR_EXTERN_PAGE_NO),
  2678. srv_page_size, nullptr))
  2679. return err;
  2680. if (dberr_t err= decrypt_decompress(space_crypt, space_flags,
  2681. {second_page.get(),
  2682. static_cast<size_t>(physical_size)},
  2683. space_id, page_compress_buf.get()))
  2684. return err;
  2685. if (fil_page_get_type(second_page.get()) != FIL_PAGE_TYPE_BLOB ||
  2686. mach_read_from_4(
  2687. &second_page[FIL_PAGE_DATA + BTR_BLOB_HDR_NEXT_PAGE_NO]) !=
  2688. FIL_NULL ||
  2689. mach_read_from_4(
  2690. &second_page[FIL_PAGE_DATA + BTR_BLOB_HDR_PART_LEN]) != len)
  2691. goto incompatible;
  2692. /* The unused part of the BLOB page should be zero-filled. */
  2693. for (const byte *
  2694. b= second_page.get() + (FIL_PAGE_DATA + BTR_BLOB_HDR_SIZE) +
  2695. len,
  2696. *const end= second_page.get() + srv_page_size - BTR_EXTERN_LEN;
  2697. b < end;)
  2698. {
  2699. if (*b++)
  2700. goto incompatible;
  2701. }
  2702. if (index->table->deserialise_columns(
  2703. &second_page[FIL_PAGE_DATA + BTR_BLOB_HDR_SIZE], len))
  2704. goto incompatible;
  2705. }
  2706. rec_offs *offsets= rec_get_offsets(
  2707. rec, index, nullptr, index->n_core_fields, ULINT_UNDEFINED, &heap);
  2708. if (rec_offs_any_default(offsets))
  2709. {
  2710. inconsistent:
  2711. goto incompatible;
  2712. }
  2713. /* In fact, because we only ever append fields to the metadata
  2714. record, it is also OK to perform READ UNCOMMITTED and
  2715. then ignore any extra fields, provided that
  2716. trx_sys.is_registered(DB_TRX_ID). */
  2717. if (rec_offs_n_fields(offsets) >
  2718. ulint(index->n_fields) + !!index->table->instant &&
  2719. !trx_sys.is_registered(current_trx(),
  2720. row_get_rec_trx_id(rec, index, offsets)))
  2721. goto inconsistent;
  2722. for (unsigned i= index->n_core_fields; i < index->n_fields; i++)
  2723. {
  2724. dict_col_t *col= index->fields[i].col;
  2725. const unsigned o= i + !!index->table->instant;
  2726. ulint len;
  2727. const byte *data= rec_get_nth_field(rec, offsets, o, &len);
  2728. ut_ad(!col->is_added());
  2729. ut_ad(!col->def_val.data);
  2730. col->def_val.len= len;
  2731. switch (len) {
  2732. case UNIV_SQL_NULL:
  2733. continue;
  2734. case 0:
  2735. col->def_val.data= field_ref_zero;
  2736. continue;
  2737. }
  2738. ut_ad(len != UNIV_SQL_DEFAULT);
  2739. if (!rec_offs_nth_extern(offsets, o))
  2740. col->def_val.data= mem_heap_dup(index->table->heap, data, len);
  2741. else if (len < BTR_EXTERN_FIELD_REF_SIZE ||
  2742. !memcmp(data + len - BTR_EXTERN_FIELD_REF_SIZE, field_ref_zero,
  2743. BTR_EXTERN_FIELD_REF_SIZE))
  2744. {
  2745. col->def_val.len= UNIV_SQL_DEFAULT;
  2746. goto inconsistent;
  2747. }
  2748. else
  2749. {
  2750. col->def_val.data= btr_copy_externally_stored_field(
  2751. &col->def_val.len, data, srv_page_size, len, index->table->heap);
  2752. }
  2753. }
  2754. }
  2755. return DB_SUCCESS;
  2756. }
  2757. /**
  2758. Read the contents of the <tablename>.cfg file.
  2759. @return DB_SUCCESS or error code. */
  2760. static MY_ATTRIBUTE((nonnull, warn_unused_result))
  2761. dberr_t
  2762. row_import_read_cfg(
  2763. /*================*/
  2764. dict_table_t* table, /*!< in: table */
  2765. THD* thd, /*!< in: session */
  2766. row_import& cfg) /*!< out: contents of the .cfg file */
  2767. {
  2768. dberr_t err;
  2769. char name[OS_FILE_MAX_PATH];
  2770. cfg.m_table = table;
  2771. srv_get_meta_data_filename(table, name, sizeof(name));
  2772. FILE* file = fopen(name, "rb");
  2773. if (file == NULL) {
  2774. char msg[BUFSIZ];
  2775. snprintf(msg, sizeof(msg),
  2776. "Error opening '%s', will attempt to import"
  2777. " without schema verification", name);
  2778. ib_senderrf(
  2779. thd, IB_LOG_LEVEL_WARN, ER_IO_READ_ERROR,
  2780. (ulong) errno, strerror(errno), msg);
  2781. cfg.m_missing = true;
  2782. err = DB_FAIL;
  2783. } else {
  2784. cfg.m_missing = false;
  2785. err = row_import_read_meta_data(file, thd, cfg);
  2786. fclose(file);
  2787. }
  2788. return(err);
  2789. }
  2790. /** Update the root page numbers and tablespace ID of a table.
  2791. @param[in,out] trx dictionary transaction
  2792. @param[in,out] table persistent table
  2793. @param[in] reset whether to reset the fields to FIL_NULL
  2794. @return DB_SUCCESS or error code */
  2795. dberr_t
  2796. row_import_update_index_root(trx_t* trx, dict_table_t* table, bool reset)
  2797. {
  2798. const dict_index_t* index;
  2799. que_t* graph = 0;
  2800. dberr_t err = DB_SUCCESS;
  2801. ut_ad(reset || table->space->id == table->space_id);
  2802. static const char sql[] = {
  2803. "PROCEDURE UPDATE_INDEX_ROOT() IS\n"
  2804. "BEGIN\n"
  2805. "UPDATE SYS_INDEXES\n"
  2806. "SET SPACE = :space,\n"
  2807. " PAGE_NO = :page,\n"
  2808. " TYPE = :type\n"
  2809. "WHERE TABLE_ID = :table_id AND ID = :index_id;\n"
  2810. "END;\n"};
  2811. table->def_trx_id = trx->id;
  2812. for (index = dict_table_get_first_index(table);
  2813. index != 0;
  2814. index = dict_table_get_next_index(index)) {
  2815. pars_info_t* info;
  2816. ib_uint32_t page;
  2817. ib_uint32_t space;
  2818. ib_uint32_t type;
  2819. index_id_t index_id;
  2820. table_id_t table_id;
  2821. info = (graph != 0) ? graph->info : pars_info_create();
  2822. mach_write_to_4(
  2823. reinterpret_cast<byte*>(&type),
  2824. index->type);
  2825. mach_write_to_4(
  2826. reinterpret_cast<byte*>(&page),
  2827. reset ? FIL_NULL : index->page);
  2828. mach_write_to_4(
  2829. reinterpret_cast<byte*>(&space),
  2830. reset ? FIL_NULL : index->table->space_id);
  2831. mach_write_to_8(
  2832. reinterpret_cast<byte*>(&index_id),
  2833. index->id);
  2834. mach_write_to_8(
  2835. reinterpret_cast<byte*>(&table_id),
  2836. table->id);
  2837. /* If we set the corrupt bit during the IMPORT phase then
  2838. we need to update the system tables. */
  2839. pars_info_bind_int4_literal(info, "type", &type);
  2840. pars_info_bind_int4_literal(info, "space", &space);
  2841. pars_info_bind_int4_literal(info, "page", &page);
  2842. pars_info_bind_ull_literal(info, "index_id", &index_id);
  2843. pars_info_bind_ull_literal(info, "table_id", &table_id);
  2844. if (graph == 0) {
  2845. graph = pars_sql(info, sql);
  2846. ut_a(graph);
  2847. graph->trx = trx;
  2848. }
  2849. que_thr_t* thr;
  2850. graph->fork_type = QUE_FORK_MYSQL_INTERFACE;
  2851. ut_a(thr = que_fork_start_command(graph));
  2852. que_run_threads(thr);
  2853. DBUG_EXECUTE_IF("ib_import_internal_error",
  2854. trx->error_state = DB_ERROR;);
  2855. err = trx->error_state;
  2856. if (err != DB_SUCCESS) {
  2857. ib_errf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
  2858. ER_INTERNAL_ERROR,
  2859. "While updating the <space, root page"
  2860. " number> of index %s - %s",
  2861. index->name(), ut_strerr(err));
  2862. break;
  2863. }
  2864. }
  2865. que_graph_free(graph);
  2866. return(err);
  2867. }
  2868. /** Callback arg for row_import_set_discarded. */
  2869. struct discard_t {
  2870. ib_uint32_t flags2; /*!< Value read from column */
  2871. bool state; /*!< New state of the flag */
  2872. ulint n_recs; /*!< Number of recs processed */
  2873. };
  2874. /******************************************************************//**
  2875. Fetch callback that sets or unsets the DISCARDED tablespace flag in
  2876. SYS_TABLES. The flag is stored in the MIX_LEN column.
  2877. @return FALSE if all OK */
  2878. static
  2879. ibool
  2880. row_import_set_discarded(
  2881. /*=====================*/
  2882. void* row, /*!< in: sel_node_t* */
  2883. void* user_arg) /*!< in: bool set/unset flag */
  2884. {
  2885. sel_node_t* node = static_cast<sel_node_t*>(row);
  2886. discard_t* discard = static_cast<discard_t*>(user_arg);
  2887. dfield_t* dfield = que_node_get_val(node->select_list);
  2888. dtype_t* type = dfield_get_type(dfield);
  2889. ulint len = dfield_get_len(dfield);
  2890. ut_a(dtype_get_mtype(type) == DATA_INT);
  2891. ut_a(len == sizeof(ib_uint32_t));
  2892. ulint flags2 = mach_read_from_4(
  2893. static_cast<byte*>(dfield_get_data(dfield)));
  2894. if (discard->state) {
  2895. flags2 |= DICT_TF2_DISCARDED;
  2896. } else {
  2897. flags2 &= ~DICT_TF2_DISCARDED;
  2898. }
  2899. mach_write_to_4(reinterpret_cast<byte*>(&discard->flags2), flags2);
  2900. ++discard->n_recs;
  2901. /* There should be at most one matching record. */
  2902. ut_a(discard->n_recs == 1);
  2903. return(FALSE);
  2904. }
  2905. /** Update the DICT_TF2_DISCARDED flag in SYS_TABLES.MIX_LEN.
  2906. @param[in,out] trx dictionary transaction
  2907. @param[in] table_id table identifier
  2908. @param[in] discarded whether to set or clear the flag
  2909. @return DB_SUCCESS or error code */
  2910. dberr_t row_import_update_discarded_flag(trx_t* trx, table_id_t table_id,
  2911. bool discarded)
  2912. {
  2913. pars_info_t* info;
  2914. discard_t discard;
  2915. static const char sql[] =
  2916. "PROCEDURE UPDATE_DISCARDED_FLAG() IS\n"
  2917. "DECLARE FUNCTION my_func;\n"
  2918. "DECLARE CURSOR c IS\n"
  2919. " SELECT MIX_LEN"
  2920. " FROM SYS_TABLES"
  2921. " WHERE ID = :table_id FOR UPDATE;"
  2922. "\n"
  2923. "BEGIN\n"
  2924. "OPEN c;\n"
  2925. "WHILE 1 = 1 LOOP\n"
  2926. " FETCH c INTO my_func();\n"
  2927. " IF c % NOTFOUND THEN\n"
  2928. " EXIT;\n"
  2929. " END IF;\n"
  2930. "END LOOP;\n"
  2931. "UPDATE SYS_TABLES"
  2932. " SET MIX_LEN = :flags2"
  2933. " WHERE ID = :table_id;\n"
  2934. "CLOSE c;\n"
  2935. "END;\n";
  2936. discard.n_recs = 0;
  2937. discard.state = discarded;
  2938. discard.flags2 = ULINT32_UNDEFINED;
  2939. info = pars_info_create();
  2940. pars_info_add_ull_literal(info, "table_id", table_id);
  2941. pars_info_bind_int4_literal(info, "flags2", &discard.flags2);
  2942. pars_info_bind_function(
  2943. info, "my_func", row_import_set_discarded, &discard);
  2944. dberr_t err = que_eval_sql(info, sql, false, trx);
  2945. ut_a(discard.n_recs == 1);
  2946. ut_a(discard.flags2 != ULINT32_UNDEFINED);
  2947. return(err);
  2948. }
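/* A usage sketch: within the dictionary transaction of the DDL
operation, e.g. err = row_import_update_discarded_flag(trx, table->id,
true) sets the flag when the tablespace is discarded, and passing false
clears it again once an import has succeeded. */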
  2949. /** When a page_compressed tablespace is involved, InnoDB writes
  2950. the data page by page. This helps to save disk space when
  2951. punch hole is enabled.
  2952. @param iter Tablespace iterator
  2953. @param full_crc32 whether the file is in the full_crc32 format
  2954. @param offset offset of the file to be written
  2955. @param writeptr buffer to be written
  2956. @param n_bytes number of bytes to be written
  2957. @param try_punch_only Try the range punch only because the
  2958. current range is full of empty pages
  2959. @return DB_SUCCESS */
  2960. static
  2961. dberr_t fil_import_compress_fwrite(const fil_iterator_t &iter,
  2962. bool full_crc32,
  2963. os_offset_t offset,
  2964. const byte *writeptr,
  2965. ulint n_bytes,
  2966. bool try_punch_only= false)
  2967. {
  2968. if (dberr_t err= os_file_punch_hole(iter.file, offset, n_bytes))
  2969. return err;
  2970. if (try_punch_only)
  2971. return DB_SUCCESS;
  2972. for (ulint j= 0; j < n_bytes; j+= srv_page_size)
  2973. {
  2974. /* Read the original data length from the block; it is
  2975. safer to read FIL_PAGE_COMPRESSED_SIZE because it
  2976. is not encrypted. */
  2977. ulint n_write_bytes= srv_page_size;
  2978. if (j || offset)
  2979. {
  2980. n_write_bytes= mach_read_from_2(writeptr + j + FIL_PAGE_DATA);
  2981. const unsigned ptype= mach_read_from_2(writeptr + j + FIL_PAGE_TYPE);
  2982. /* Ignore the empty page */
  2983. if (ptype == 0 && n_write_bytes == 0)
  2984. continue;
  2985. if (full_crc32)
  2986. n_write_bytes= buf_page_full_crc32_size(writeptr + j,
  2987. nullptr, nullptr);
  2988. else
  2989. {
  2990. n_write_bytes+= ptype == FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
  2991. ? FIL_PAGE_DATA + FIL_PAGE_ENCRYPT_COMP_METADATA_LEN
  2992. : FIL_PAGE_DATA + FIL_PAGE_COMP_METADATA_LEN;
  2993. }
  2994. }
  2995. if (dberr_t err= os_file_write(IORequestWrite, iter.filepath, iter.file,
  2996. writeptr + j, offset + j, n_write_bytes))
  2997. return err;
  2998. }
  2999. return DB_SUCCESS;
  3000. }
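/* Worked example (assuming srv_page_size = 16384): for a page whose
compressed payload length, read from the unencrypted length field at
FIL_PAGE_DATA, is 5000 bytes, only those bytes plus the compression
metadata are rewritten at the page's offset; the preceding
os_file_punch_hole() call has already deallocated the full 16 KiB
range, so the tail of the page remains a hole on disk. */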
  3001. dberr_t FetchIndexRootPages::run(const fil_iterator_t& iter,
  3002. buf_block_t* block) UNIV_NOTHROW
  3003. {
  3004. const unsigned zip_size= fil_space_t::zip_size(m_space_flags);
  3005. const unsigned size= zip_size ? zip_size : unsigned(srv_page_size);
  3006. byte* page_compress_buf= static_cast<byte*>(malloc(get_buf_size()));
  3007. const bool full_crc32 = fil_space_t::full_crc32(m_space_flags);
  3008. bool skip_checksum_check = false;
  3009. ut_ad(!srv_read_only_mode);
  3010. if (!page_compress_buf)
  3011. return DB_OUT_OF_MEMORY;
  3012. const bool encrypted= iter.crypt_data != NULL &&
  3013. iter.crypt_data->should_encrypt();
  3014. byte* const readptr= iter.io_buffer;
  3015. block->frame= readptr;
  3016. if (block->page.zip.data)
  3017. block->page.zip.data= readptr;
  3018. bool page_compressed= false;
  3019. dberr_t err= os_file_read_no_error_handling(
  3020. IORequestReadPartial, iter.file, readptr, 3 * size, size, 0);
  3021. if (err != DB_SUCCESS)
  3022. {
  3023. ib::error() << iter.filepath << ": os_file_read() failed";
  3024. goto func_exit;
  3025. }
  3026. if (page_get_page_no(readptr) != 3)
  3027. {
  3028. page_corrupted:
  3029. ib::warn() << filename() << ": Page 3 at offset "
  3030. << 3 * size << " looks corrupted.";
  3031. err= DB_CORRUPTION;
  3032. goto func_exit;
  3033. }
  3034. block->page.id_.set_page_no(3);
  3035. if (full_crc32 && fil_space_t::is_compressed(m_space_flags))
  3036. page_compressed= buf_page_is_compressed(readptr, m_space_flags);
  3037. else
  3038. {
  3039. switch (fil_page_get_type(readptr)) {
  3040. case FIL_PAGE_PAGE_COMPRESSED:
  3041. case FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED:
  3042. if (block->page.zip.data)
  3043. goto page_corrupted;
  3044. page_compressed= true;
  3045. }
  3046. }
  3047. if (encrypted)
  3048. {
  3049. if (!buf_page_verify_crypt_checksum(readptr, m_space_flags))
  3050. goto page_corrupted;
  3051. if (ENCRYPTION_KEY_NOT_ENCRYPTED ==
  3052. buf_page_get_key_version(readptr, m_space_flags))
  3053. goto page_corrupted;
  3054. if ((err= fil_space_decrypt(get_space_id(), iter.crypt_data, readptr, size,
  3055. m_space_flags, readptr)))
  3056. goto func_exit;
  3057. }
  3058. /* For full_crc32 format, skip checksum check
  3059. after decryption. */
  3060. skip_checksum_check= full_crc32 && encrypted;
  3061. if (page_compressed)
  3062. {
  3063. ulint compress_length= fil_page_decompress(page_compress_buf,
  3064. readptr,
  3065. m_space_flags);
  3066. ut_ad(compress_length != srv_page_size);
  3067. if (compress_length == 0)
  3068. goto page_corrupted;
  3069. }
  3070. else if (!skip_checksum_check
  3071. && buf_page_is_corrupted(false, readptr, m_space_flags))
  3072. goto page_corrupted;
  3073. err= this->operator()(block);
  3074. func_exit:
  3075. free(page_compress_buf);
  3076. return err;
  3077. }
  3078. static dberr_t fil_iterate(
  3079. const fil_iterator_t& iter,
  3080. buf_block_t* block,
  3081. AbstractCallback& callback)
  3082. {
  3083. os_offset_t offset;
  3084. const ulint size = callback.physical_size();
  3085. ulint n_bytes = iter.n_io_buffers * size;
  3086. byte* page_compress_buf= static_cast<byte*>(malloc(get_buf_size()));
  3087. ut_ad(!srv_read_only_mode);
  3088. if (!page_compress_buf) {
  3089. return DB_OUT_OF_MEMORY;
  3090. }
  3091. ulint actual_space_id = 0;
  3092. const bool full_crc32 = fil_space_t::full_crc32(
  3093. callback.get_space_flags());
  3094. /* TODO: For ROW_FORMAT=COMPRESSED tables we do a lot of useless
  3095. copying for non-index pages. Unfortunately, it is
  3096. required by buf_zip_decompress() */
  3097. dberr_t err = DB_SUCCESS;
  3098. bool page_compressed = false;
  3099. bool punch_hole = true;
  3100. for (offset = iter.start; offset < iter.end; offset += n_bytes) {
  3101. if (callback.is_interrupted()) {
  3102. err = DB_INTERRUPTED;
  3103. goto func_exit;
  3104. }
  3105. byte* io_buffer = iter.io_buffer;
  3106. block->frame = io_buffer;
  3107. if (block->page.zip.data) {
  3108. /* Zip IO is done in the compressed page buffer. */
  3109. io_buffer = block->page.zip.data;
  3110. }
  3111. /* We have to read the exact number of bytes. Otherwise the
  3112. InnoDB IO functions croak on failed reads. */
  3113. n_bytes = ulint(ut_min(os_offset_t(n_bytes),
  3114. iter.end - offset));
  3115. ut_ad(n_bytes > 0);
  3116. ut_ad(!(n_bytes % size));
  3117. const bool encrypted = iter.crypt_data != NULL
  3118. && iter.crypt_data->should_encrypt();
  3119. /* Use additional crypt io buffer if tablespace is encrypted */
  3120. byte* const readptr = encrypted
  3121. ? iter.crypt_io_buffer : io_buffer;
  3122. byte* const writeptr = readptr;
  3123. err = os_file_read_no_error_handling(
  3124. IORequestReadPartial,
  3125. iter.file, readptr, offset, n_bytes, 0);
  3126. if (err != DB_SUCCESS) {
  3127. ib::error() << iter.filepath
  3128. << ": os_file_read() failed";
  3129. goto func_exit;
  3130. }
  3131. bool updated = false;
  3132. os_offset_t page_off = offset;
  3133. ulint n_pages_read = n_bytes / size;
  3134. /* This block is not attached to buf_pool */
  3135. block->page.id_.set_page_no(uint32_t(page_off / size));
  3136. for (ulint i = 0; i < n_pages_read;
  3137. ++block->page.id_,
  3138. ++i, page_off += size, block->frame += size) {
  3139. byte* src = readptr + i * size;
  3140. const ulint page_no = page_get_page_no(src);
  3141. if (!page_no && block->page.id().page_no()) {
  3142. if (!buf_is_zeroes(span<const byte>(src,
  3143. size))) {
  3144. goto page_corrupted;
  3145. }
  3146. /* Proceed to the next page,
  3147. because this one is all zero. */
  3148. continue;
  3149. }
  3150. if (page_no != block->page.id().page_no()) {
  3151. page_corrupted:
  3152. ib::warn() << callback.filename()
  3153. << ": Page " << (offset / size)
  3154. << " at offset " << offset
  3155. << " looks corrupted.";
  3156. err = DB_CORRUPTION;
  3157. goto func_exit;
  3158. }
  3159. if (block->page.id().page_no() == 0) {
  3160. actual_space_id = mach_read_from_4(
  3161. src + FIL_PAGE_SPACE_ID);
  3162. }
  3163. const uint16_t type = fil_page_get_type(src);
  3164. page_compressed =
  3165. (full_crc32
  3166. && fil_space_t::is_compressed(
  3167. callback.get_space_flags())
  3168. && buf_page_is_compressed(
  3169. src, callback.get_space_flags()))
  3170. || type == FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
  3171. || type == FIL_PAGE_PAGE_COMPRESSED;
  3172. if (page_compressed && block->page.zip.data) {
  3173. goto page_corrupted;
  3174. }
  3175. bool decrypted = false;
  3176. byte* dst = io_buffer + i * size;
  3177. bool frame_changed = false;
  3178. uint key_version = buf_page_get_key_version(
  3179. src, callback.get_space_flags());
  3180. if (!encrypted) {
  3181. } else if (!key_version) {
  3182. if (block->page.id().page_no() == 0
  3183. && block->page.zip.data) {
  3184. block->page.zip.data = src;
  3185. frame_changed = true;
  3186. } else if (!page_compressed
  3187. && type != FIL_PAGE_TYPE_XDES
  3188. && !block->page.zip.data) {
  3189. block->frame = src;
  3190. frame_changed = true;
  3191. } else {
  3192. ut_ad(dst != src);
  3193. memcpy(dst, src, size);
  3194. }
  3195. } else {
  3196. if (!buf_page_verify_crypt_checksum(
  3197. src, callback.get_space_flags())) {
  3198. goto page_corrupted;
  3199. }
  3200. if ((err = fil_space_decrypt(
  3201. actual_space_id,
  3202. iter.crypt_data, dst,
  3203. callback.physical_size(),
  3204. callback.get_space_flags(),
  3205. src))) {
  3206. goto func_exit;
  3207. }
  3208. decrypted = true;
  3209. updated = true;
  3210. }
  3211. /* For full_crc32 format, skip checksum check
  3212. after decryption. */
  3213. bool skip_checksum_check = full_crc32 && encrypted;
  3214. /* If the original page is page_compressed, we need
  3215. to decompress it before adjusting further. */
  3216. if (page_compressed) {
  3217. ulint compress_length = fil_page_decompress(
  3218. page_compress_buf, dst,
  3219. callback.get_space_flags());
  3220. ut_ad(compress_length != srv_page_size);
  3221. if (compress_length == 0) {
  3222. goto page_corrupted;
  3223. }
  3224. updated = true;
  3225. } else if (!skip_checksum_check
  3226. && buf_page_is_corrupted(
  3227. false,
  3228. encrypted && !frame_changed
  3229. ? dst : src,
  3230. callback.get_space_flags())) {
  3231. goto page_corrupted;
  3232. }
  3233. if ((err = callback(block)) != DB_SUCCESS) {
  3234. goto func_exit;
  3235. } else if (!updated) {
  3236. updated = block->page.state()
  3237. == BUF_BLOCK_FILE_PAGE;
  3238. }
  3239. /* If tablespace is encrypted we use additional
  3240. temporary scratch area where pages are read
  3241. for decrypting readptr == crypt_io_buffer != io_buffer.
  3242. Destination for decryption is a buffer pool block
  3243. block->frame == dst == io_buffer that is updated.
  3244. Pages that did not require decryption, even when the
  3245. tablespace is marked as encrypted, are not copied;
  3246. instead, block->frame is set to src == readptr.
  3247. For encryption we again use temporary scratch area
  3248. writeptr != io_buffer == dst
  3249. that is then written to the tablespace
  3250. (1) For normal tables io_buffer == dst == writeptr
  3251. (2) For only page compressed tables
  3252. io_buffer == dst == writeptr
  3253. (3) For encrypted (and page compressed)
  3254. readptr != io_buffer == dst != writeptr
  3255. */
  3256. ut_ad(!encrypted && !page_compressed ?
  3257. src == dst && dst == writeptr + (i * size):1);
  3258. ut_ad(page_compressed && !encrypted ?
  3259. src == dst && dst == writeptr + (i * size):1);
  3260. ut_ad(encrypted ?
  3261. src != dst && dst != writeptr + (i * size):1);
  3262. /* When tablespace is encrypted or compressed its
  3263. first page (i.e. page 0) is not encrypted or
  3264. compressed and there is no need to copy frame. */
  3265. if (encrypted && block->page.id().page_no() != 0) {
  3266. byte *local_frame = callback.get_frame(block);
  3267. ut_ad((writeptr + (i * size)) != local_frame);
  3268. memcpy((writeptr + (i * size)), local_frame, size);
  3269. }
  3270. if (frame_changed) {
  3271. if (block->page.zip.data) {
  3272. block->page.zip.data = dst;
  3273. } else {
  3274. block->frame = dst;
  3275. }
  3276. }
  3277. src = io_buffer + (i * size);
  3278. if (page_compressed) {
  3279. updated = true;
  3280. if (ulint len = fil_page_compress(
  3281. src,
  3282. page_compress_buf,
  3283. callback.get_space_flags(),
  3284. 512,/* FIXME: proper block size */
  3285. encrypted)) {
  3286. /* FIXME: remove memcpy() */
  3287. memcpy(src, page_compress_buf, len);
  3288. memset(src + len, 0,
  3289. srv_page_size - len);
  3290. }
  3291. }
			/* Encrypt the page if encryption was used. */
			if (encrypted && decrypted) {
				byte *dest = writeptr + i * size;

				byte* tmp = fil_encrypt_buf(
					iter.crypt_data,
					block->page.id().space(),
					block->page.id().page_no(),
					src, block->zip_size(), dest,
					full_crc32);

				if (tmp == src) {
					/* TODO: remove unnecessary memcpy's */
					ut_ad(dest != src);
					memcpy(dest, src, size);
				}

				updated = true;
			}
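			/* Note: fil_encrypt_buf() normally writes the
			encrypted page directly into dest; the memcpy above
			only handles the case where the page was handed
			back unencrypted. */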
			/* Write checksum for the compressed full crc32 page.*/
			if (full_crc32 && page_compressed) {
				ut_ad(updated);
				byte* dest = writeptr + i * size;
				ut_d(bool comp = false);
				ut_d(bool corrupt = false);
				ulint size = buf_page_full_crc32_size(
					dest,
#ifdef UNIV_DEBUG
					&comp, &corrupt
#else
					NULL, NULL
#endif
				);
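				/* "size" (shadowing the outer page size)
				now holds the stored size of the full_crc32
				page: srv_page_size when uncompressed, less
				when compressed. The checksum computed below
				goes into its last 4 bytes. */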
				ut_ad(!comp == (size == srv_page_size));
				ut_ad(!corrupt);
				mach_write_to_4(dest + (size - 4),
						ut_crc32(dest, size - 4));
			}
		}

		if (page_compressed && punch_hole) {
			err = fil_import_compress_fwrite(
				iter, full_crc32, offset, writeptr, n_bytes,
				!updated);

			if (err != DB_SUCCESS) {
				punch_hole = false;
				if (updated) {
					goto normal_write;
				}
			}
		} else if (updated) {
normal_write:
			/* A page was updated in the set, write it back. */
			err = os_file_write(IORequestWrite,
					    iter.filepath, iter.file,
					    writeptr, offset, n_bytes);

			if (err != DB_SUCCESS) {
				goto func_exit;
			}
		}
	}

func_exit:
	free(page_compress_buf);
	return err;
}
/********************************************************************//**
Iterate over all the pages in the tablespace.
@param table - the table definition in the server
@param n_io_buffers - number of blocks to read and write together
@param callback - functor that will do the page updates
@return DB_SUCCESS or error code */
static
dberr_t
fil_tablespace_iterate(
/*===================*/
	dict_table_t*		table,
	ulint			n_io_buffers,
	AbstractCallback&	callback)
{
	dberr_t		err;
	pfs_os_file_t	file;
	char*		filepath;

	ut_a(n_io_buffers > 0);
	ut_ad(!srv_read_only_mode);

	DBUG_EXECUTE_IF("ib_import_trigger_corruption_1",
			return(DB_CORRUPTION););

	/* Make sure the data_dir_path is set. */
	dict_get_and_save_data_dir_path(table, false);

	if (DICT_TF_HAS_DATA_DIR(table->flags)) {
		ut_a(table->data_dir_path);

		filepath = fil_make_filepath(
			table->data_dir_path, table->name.m_name, IBD, true);
	} else {
		filepath = fil_make_filepath(
			NULL, table->name.m_name, IBD, false);
	}

	if (!filepath) {
		return(DB_OUT_OF_MEMORY);
	} else {
		bool	success;

		file = os_file_create_simple_no_error_handling(
			innodb_data_file_key, filepath,
			OS_FILE_OPEN, OS_FILE_READ_WRITE, false, &success);

		if (!success) {
			/* The following call prints an error message */
			os_file_get_last_error(true);

			ib::error() << "Trying to import a tablespace,"
				" but could not open the tablespace file "
				    << filepath;
			ut_free(filepath);
			return DB_TABLESPACE_NOT_FOUND;
		} else {
			err = DB_SUCCESS;
		}
	}

	callback.set_file(filepath, file);

	os_offset_t	file_size = os_file_get_size(file);
	ut_a(file_size != (os_offset_t) -1);

	/* Allocate a page to read in the tablespace header, so that we
	can determine the page size and zip_size (if it is compressed).
	We allocate an extra page in case it is a compressed table. */

	byte*	page = static_cast<byte*>(aligned_malloc(2 * srv_page_size,
							 srv_page_size));

	buf_block_t* block = reinterpret_cast<buf_block_t*>
		(ut_zalloc_nokey(sizeof *block));
	block->frame = page;
	block->page.init(BUF_BLOCK_FILE_PAGE, page_id_t(~0ULL), 1);
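	/* The block is only a dummy wrapper around the I/O buffer; its
	page identifier was initialised above to an impossible value and
	is corrected below, once the first page has been read and the
	actual tablespace ID is known. */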
	/* Read the first page and determine the page and zip size. */

	err = os_file_read_no_error_handling(IORequestReadPartial,
					     file, page, 0, srv_page_size, 0);

	if (err == DB_SUCCESS) {
		err = callback.init(file_size, block);
	}

	if (err == DB_SUCCESS) {
		block->page.id_ = page_id_t(callback.get_space_id(), 0);
		if (ulint zip_size = callback.get_zip_size()) {
			page_zip_set_size(&block->page.zip, zip_size);
			/* ROW_FORMAT=COMPRESSED is not optimised for block IO
			for now. We do the IMPORT page by page. */
			n_io_buffers = 1;
		}

		fil_iterator_t	iter;

		/* read (optional) crypt data */
		iter.crypt_data = fil_space_read_crypt_data(
			callback.get_zip_size(), page);

		/* If tablespace is encrypted, it needs extra buffers */
		if (iter.crypt_data && n_io_buffers > 1) {
			/* decrease io buffers so that memory
			consumption will not double */
			n_io_buffers /= 2;
		}

		iter.file = file;
		iter.start = 0;
		iter.end = file_size;
		iter.filepath = filepath;
		iter.file_size = file_size;
		iter.n_io_buffers = n_io_buffers;

		/* Add an extra page for compressed page scratch area. */
		iter.io_buffer = static_cast<byte*>(
			aligned_malloc((1 + iter.n_io_buffers)
				       << srv_page_size_shift, srv_page_size));

		iter.crypt_io_buffer = iter.crypt_data
			? static_cast<byte*>(
				aligned_malloc((1 + iter.n_io_buffers)
					       << srv_page_size_shift,
					       srv_page_size))
			: NULL;

		if (block->page.zip.ssize) {
			ut_ad(iter.n_io_buffers == 1);
			block->frame = iter.io_buffer;
			block->page.zip.data = block->frame + srv_page_size;
		}
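		/* For ROW_FORMAT=COMPRESSED, the single I/O buffer holds
		both page images: the uncompressed frame in the first page
		and the compressed image (zip.data) right after it. */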
		err = callback.run(iter, block);

		if (iter.crypt_data) {
			fil_space_destroy_crypt_data(&iter.crypt_data);
		}

		aligned_free(iter.crypt_io_buffer);
		aligned_free(iter.io_buffer);
	}

	if (err == DB_SUCCESS) {
		ib::info() << "Sync to disk";

		if (!os_file_flush(file)) {
			ib::info() << "os_file_flush() failed!";
			err = DB_IO_ERROR;
		} else {
			ib::info() << "Sync to disk - done!";
		}
	}

	os_file_close(file);

	aligned_free(page);
	ut_free(filepath);
	ut_free(block);

	return(err);
}
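/** Set the auto-increment counter of an imported table.
If the .cfg file specified a value, persist it; otherwise read it back
via btr_read_autoinc_with_fallback().
@param table    imported table
@param prebuilt prebuilt structures from the MySQL table handle
@param autoinc  AUTO_INCREMENT value from the .cfg file, or 0 */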
static void row_import_autoinc(dict_table_t *table, row_prebuilt_t *prebuilt,
                               uint64_t autoinc)
{
  if (!table->persistent_autoinc)
  {
    ut_ad(!autoinc);
    return;
  }

  if (autoinc)
  {
    btr_write_autoinc(dict_table_get_first_index(table), autoinc - 1);
autoinc_set:
    table->autoinc= autoinc;
    sql_print_information("InnoDB: %`.*s.%`s autoinc value set to " UINT64PF,
                          int(table->name.dblen()), table->name.m_name,
                          table->name.basename(), autoinc);
  }
  else if (TABLE *t= prebuilt->m_mysql_table)
  {
    if (const Field *ai= t->found_next_number_field)
    {
      autoinc= 1 +
        btr_read_autoinc_with_fallback(table, innodb_col_no(ai),
                                       t->s->mysql_version,
                                       innobase_get_int_col_max_value(ai));
      goto autoinc_set;
    }
  }
}
/*****************************************************************//**
Imports a tablespace. The space id in the .ibd file must match the space id
of the table in the data dictionary.
@return error code or DB_SUCCESS */
dberr_t
row_import_for_mysql(
/*=================*/
	dict_table_t*	table,		/*!< in/out: table */
	row_prebuilt_t*	prebuilt)	/*!< in: prebuilt struct in MySQL */
{
	dberr_t		err;
	trx_t*		trx;
	ib_uint64_t	autoinc = 0;
	char*		filepath = NULL;

	/* The caller has assured that this is not read_only_mode and that
	no temporary tablespace is being imported. */
	ut_ad(!srv_read_only_mode);
	ut_ad(!table->is_temporary());

	ut_ad(table->space_id);
	ut_ad(table->space_id < SRV_SPACE_ID_UPPER_BOUND);
	ut_ad(prebuilt->trx);
	ut_ad(!table->is_readable());

	ibuf_delete_for_discarded_space(table->space_id);

	trx_start_if_not_started(prebuilt->trx, true);

	trx = trx_create();

	/* So that the table is not DROPped during recovery. */
	trx_set_dict_operation(trx, TRX_DICT_OP_INDEX);

	trx_start_if_not_started(trx, true);

	/* So that we can send error messages to the user. */
	trx->mysql_thd = prebuilt->trx->mysql_thd;

	/* Ensure that the table will be dropped by trx_rollback_active()
	in case of a crash. */

	trx->table_id = table->id;

	/* Assign an undo segment for the transaction, so that the
	transaction will be recovered after a crash. */

	/* TODO: Do not write any undo log for the IMPORT cleanup. */
	{
		mtr_t mtr;
		mtr.start();
		trx_undo_assign(trx, &err, &mtr);
		mtr.commit();
	}
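	/* If no undo segment could be assigned, trx->rsegs.m_redo.undo
	stays NULL and the IMPORT is aborted below with
	DB_TOO_MANY_CONCURRENT_TRXS. */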
	DBUG_EXECUTE_IF("ib_import_undo_assign_failure",
			err = DB_TOO_MANY_CONCURRENT_TRXS;);

	if (err != DB_SUCCESS) {

		return(row_import_cleanup(prebuilt, trx, err));

	} else if (trx->rsegs.m_redo.undo == 0) {

		err = DB_TOO_MANY_CONCURRENT_TRXS;
		return(row_import_cleanup(prebuilt, trx, err));
	}

	prebuilt->trx->op_info = "read meta-data file";

	/* Prevent DDL operations while we are checking. */

	rw_lock_s_lock(&dict_sys.latch);

	row_import	cfg;

	err = row_import_read_cfg(table, trx->mysql_thd, cfg);

	/* Check if the table column definitions match the contents
	of the config file. */

	if (err == DB_SUCCESS) {

		if (dberr_t err = handle_instant_metadata(table, cfg)) {
			rw_lock_s_unlock(&dict_sys.latch);
			return row_import_error(prebuilt, trx, err);
		}

		/* We have a schema file, try and match it with our
		data dictionary. */

		err = cfg.match_schema(trx->mysql_thd);

		/* Update index->page and SYS_INDEXES.PAGE_NO to match the
		B-tree root page numbers in the tablespace. Use the index
		name from the .cfg file to find a match. */

		if (err == DB_SUCCESS) {
			cfg.set_root_by_name();
			autoinc = cfg.m_autoinc;
		}
		rw_lock_s_unlock(&dict_sys.latch);

		DBUG_EXECUTE_IF("ib_import_set_index_root_failure",
				err = DB_TOO_MANY_CONCURRENT_TRXS;);

	} else if (cfg.m_missing) {

		rw_lock_s_unlock(&dict_sys.latch);

		/* We don't have a schema file, we will have to discover
		the index root pages from the .ibd file and skip the schema
		matching step. */

		ut_a(err == DB_FAIL);

		cfg.m_zip_size = 0;

		if (UT_LIST_GET_LEN(table->indexes) > 1) {
			ib_errf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
				ER_INTERNAL_ERROR,
				"Drop all secondary indexes before importing "
				"table %s when .cfg file is missing.",
				table->name.m_name);
			err = DB_ERROR;
			return row_import_error(prebuilt, trx, err);
		}

		FetchIndexRootPages	fetchIndexRootPages(table, trx);

		err = fil_tablespace_iterate(
			table, IO_BUFFER_SIZE(srv_page_size),
			fetchIndexRootPages);

		if (err == DB_SUCCESS) {

			err = fetchIndexRootPages.build_row_import(&cfg);

			/* Update index->page and SYS_INDEXES.PAGE_NO
			to match the B-tree root page numbers in the
			tablespace. */

			if (err == DB_SUCCESS) {
				err = cfg.set_root_by_heuristic();

				if (err == DB_SUCCESS) {
					if (dberr_t err =
					    handle_instant_metadata(table,
								    cfg)) {
						return row_import_error(
							prebuilt, trx, err);
					}
				}
			}
		}

	} else {
		rw_lock_s_unlock(&dict_sys.latch);
	}

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	prebuilt->trx->op_info = "importing tablespace";

	ib::info() << "Phase I - Update all pages";

	/* Iterate over all the pages and do the sanity checking and
	the conversion required to import the tablespace. */

	PageConverter	converter(&cfg, table->space_id, trx);

	/* Set the IO buffer size in pages. */

	err = fil_tablespace_iterate(
		table, IO_BUFFER_SIZE(cfg.m_zip_size ? cfg.m_zip_size
				      : srv_page_size), converter);

	DBUG_EXECUTE_IF("ib_import_reset_space_and_lsn_failure",
			err = DB_TOO_MANY_CONCURRENT_TRXS;);

#ifdef BTR_CUR_HASH_ADAPT
	/* On DISCARD TABLESPACE, we did not drop any adaptive hash
	index entries. If we replaced the discarded tablespace with a
	smaller one here, there could still be some adaptive hash
	index entries that point to cached garbage pages in the buffer
	pool, because PageConverter::operator() only evicted those
	pages that were replaced by the imported pages. We must
	detach any remaining adaptive hash index entries, because the
	adaptive hash index must be a subset of the table contents;
	false positives are not tolerated. */
	for (dict_index_t* index = UT_LIST_GET_FIRST(table->indexes); index;
	     index = UT_LIST_GET_NEXT(indexes, index)) {
		index = index->clone_if_needed();
	}
#endif /* BTR_CUR_HASH_ADAPT */
	if (err != DB_SUCCESS) {
		char	table_name[MAX_FULL_NAME_LEN + 1];

		innobase_format_name(
			table_name, sizeof(table_name),
			table->name.m_name);

		if (err != DB_DECRYPTION_FAILED) {

			ib_errf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
				ER_INTERNAL_ERROR,
				"Error importing tablespace for table %s : %s",
				table_name, ut_strerr(err));
		}

		return(row_import_cleanup(prebuilt, trx, err));
	}

	row_mysql_lock_data_dictionary(trx);

	/* If the table is stored in a remote tablespace, we need to
	determine that filepath from the link file and system tables.
	Find the space ID in SYS_TABLES since this is an ALTER TABLE. */
	dict_get_and_save_data_dir_path(table, true);

	if (DICT_TF_HAS_DATA_DIR(table->flags)) {
		ut_a(table->data_dir_path);

		filepath = fil_make_filepath(
			table->data_dir_path, table->name.m_name, IBD, true);
	} else {
		filepath = fil_make_filepath(
			NULL, table->name.m_name, IBD, false);
	}

	DBUG_EXECUTE_IF(
		"ib_import_OOM_15",
		ut_free(filepath);
		filepath = NULL;
	);

	if (filepath == NULL) {
		row_mysql_unlock_data_dictionary(trx);
		return(row_import_cleanup(prebuilt, trx, DB_OUT_OF_MEMORY));
	}

	/* Open the tablespace so that we can access via the buffer pool.
	We set the 2nd param (fix_dict = true) here because we already
	have an x-lock on dict_sys.latch and dict_sys.mutex.
	The tablespace is initially opened as a temporary one, because
	we will not be writing any redo log for it before we have invoked
	fil_space_t::set_imported() to declare it a persistent tablespace. */

	ulint	fsp_flags = dict_tf_to_fsp_flags(table->flags);

	table->space = fil_ibd_open(
		true, true, FIL_TYPE_IMPORT, table->space_id,
		fsp_flags, table->name, filepath, &err);

	ut_ad((table->space == NULL) == (err != DB_SUCCESS));
	DBUG_EXECUTE_IF("ib_import_open_tablespace_failure",
			err = DB_TABLESPACE_NOT_FOUND; table->space = NULL;);

	if (!table->space) {
		row_mysql_unlock_data_dictionary(trx);

		ib_senderrf(trx->mysql_thd, IB_LOG_LEVEL_ERROR,
			    ER_GET_ERRMSG,
			    err, ut_strerr(err), filepath);

		ut_free(filepath);
		return(row_import_cleanup(prebuilt, trx, err));
	}

	row_mysql_unlock_data_dictionary(trx);

	ut_free(filepath);

	err = ibuf_check_bitmap_on_import(trx, table->space);

	DBUG_EXECUTE_IF("ib_import_check_bitmap_failure", err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_cleanup(prebuilt, trx, err));
	}

	/* The first index must always be the clustered index. */

	dict_index_t*	index = dict_table_get_first_index(table);

	if (!dict_index_is_clust(index)) {
		return(row_import_error(prebuilt, trx, DB_CORRUPTION));
	}

	/* Update the B-tree segment headers for index node and
	leaf nodes in the root page. Set the new space id. */

	err = btr_root_adjust_on_import(index);

	DBUG_EXECUTE_IF("ib_import_cluster_root_adjust_failure",
			err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	} else if (cfg.requires_purge(index->name)) {

		/* Purge any delete-marked records that couldn't be
		purged during the page conversion phase from the
		clustered index. */

		IndexPurge	purge(trx, index);

		trx->op_info = "cluster: purging delete marked records";

		err = purge.garbage_collect();

		trx->op_info = "";
	}
	DBUG_EXECUTE_IF("ib_import_cluster_failure", err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	/* For secondary indexes, purge any records that couldn't be purged
	during the page conversion phase. */

	err = row_import_adjust_root_pages_of_secondary_indexes(
		trx, table, cfg);

	DBUG_EXECUTE_IF("ib_import_sec_root_adjust_failure",
			err = DB_CORRUPTION;);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	/* Ensure that the next available DB_ROW_ID is not smaller than
	any DB_ROW_ID stored in the table. */

	if (prebuilt->clust_index_was_generated) {
		row_import_set_sys_max_row_id(prebuilt, table);
	}

	ib::info() << "Phase III - Flush changes to disk";

	/* Ensure that all pages dirtied during the IMPORT make it to disk.
	The only dirty pages generated should be from the pessimistic purge
	of delete marked records that couldn't be purged in Phase I. */
	while (buf_flush_list_space(prebuilt->table->space));
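	/* The loop above keeps flushing until buf_flush_list_space()
	reports no more work; the loop below then waits until all
	remaining references to the tablespace (e.g. pending page
	writes) have been released. */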
	for (ulint count = 0; prebuilt->table->space->referenced(); count++) {
		/* Issue a warning every 10.24 seconds, starting after
		2.56 seconds */
		if ((count & 511) == 128) {
			ib::warn() << "Waiting for flush to complete on "
				   << prebuilt->table->name;
		}
		os_thread_sleep(20000);
	}

	ib::info() << "Phase IV - Flush complete";
	prebuilt->table->space->set_imported();
	/* The dictionary latches will be released in row_import_cleanup()
	after the transaction commit, for both success and error. */

	row_mysql_lock_data_dictionary(trx);

	/* Update the root pages of the table's indexes. */
	err = row_import_update_index_root(trx, table, false);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}

	err = row_import_update_discarded_flag(trx, table->id, false);

	if (err != DB_SUCCESS) {
		return(row_import_error(prebuilt, trx, err));
	}
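	/* Mark the table usable: clear the "tablespace missing" state
	and the DICT_TF2_DISCARDED flag in the cached table object. */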
	table->file_unreadable = false;
	table->flags2 &= ~DICT_TF2_DISCARDED & ((1U << DICT_TF2_BITS) - 1);

	/* Set the autoinc value read from the .cfg file, if one was
	specified. Otherwise, read PAGE_ROOT_AUTO_INC and set it as the
	table autoinc. */
	row_import_autoinc(table, prebuilt, autoinc);

	return(row_import_cleanup(prebuilt, trx, err));
}