

Bug#18775 - Temporary table from alter table visible to other threads

The intermediate (not temporary) files of the new table created during ALTER TABLE were visible to SHOW TABLES. These intermediate files are copies of the original table with the changes made by ALTER TABLE applied. After all the data is copied over from the original table, these files are renamed to the original table's file names. So they are not temporary files: they persist after ALTER TABLE, just under another name.

Normal GRANT checking takes place for the intermediate table. Everyone who can see the original table (and hence the final table) can also see the intermediate table, but no one else.

In 5.0 the intermediate files were invisible to SHOW TABLES because all file names beginning with "#sql" were suppressed. In 5.1 temporary files are created in TMPDIR, so they don't appear in the database directories. 5.1 also translates between table names and file names, so the tmp_file_prefix at the file level is now "@0023sql". The suppression of files starting with tmp_file_prefix was still in place, but still only files beginning with "#sql" were suppressed. I now translate tmp_file_prefix from table name to file name before comparing it with the files in a directory. This suppresses the intermediate files again.

No test case. A test case would have to alter a reasonably big table while a second thread runs SHOW TABLES. This in itself would be possible to do, but on slow machines it would add too much time to the test suite, while on fast machines the ALTER TABLE might finish before SHOW TABLES looks at the directory. Even if there were a good balance for today's machines, one day the test would become void because the intermediate table would not be seen even with the bug in place. I added a test script to the bug report; it can easily be changed to use a table size appropriate for the test machine.
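The fix described above boils down to encoding the "#sql" prefix the same way table names are mapped to file names before using it as a directory filter. A minimal sketch of that idea (the function names and the exact encoding rule here are illustrative assumptions, not the actual MySQL routines; the source only confirms that '#' maps to "@0023"):

```cpp
#include <cstdio>
#include <string>

// Hypothetical stand-in for MySQL's table-name-to-file-name translation:
// characters outside [0-9a-z_] are assumed to become "@xxxx" with the
// 4-hex-digit code of the character, so "#sql" becomes "@0023sql".
static std::string tablename_to_filename_sketch(const std::string &name)
{
  std::string out;
  for (unsigned char c : name)
  {
    if ((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || c == '_')
      out += (char) c;
    else
    {
      char buf[8];
      std::snprintf(buf, sizeof(buf), "@%04x", (unsigned) c);
      out += buf;
    }
  }
  return out;
}

// SHOW TABLES-style filter: hide directory entries that carry the
// *encoded* intermediate-table prefix, not the raw "#sql" one.
static bool is_intermediate_file(const std::string &file_name)
{
  static const std::string encoded_prefix =
      tablename_to_filename_sketch("#sql");   // "@0023sql"
  return file_name.compare(0, encoded_prefix.size(), encoded_prefix) == 0;
}
```

The point of the fix is exactly the translation step: comparing directory entries against the raw "#sql" prefix misses files that were written under the encoded name.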
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines (note that all changes are very straightforward; one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite):
- New optional handler function introduced: reset(). This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case where it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns. The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() through the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching in that function. As the signal may be sent several times, it's probably best to set a variable in the signal and redo the caching in read_row() / write_row() if the variable was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);

  and similarly for the write map:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store();
    dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the Field store() / val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This required some trivial variable-name changes in the extra() function.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added, for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  to:

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions, remembering any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count, and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed not-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- In 5.1 before, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
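The store()/val() contract above can be sketched with a toy table, where std::bitset stands in for MY_BITMAP; every name here is illustrative, not MySQL's actual API:

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of the column-usage maps: a field may only be read if its bit
// is set in read_set, and only written if its bit is set in write_set.
struct ToyTable
{
  std::bitset<64> read_set, write_set;
  std::vector<long> columns = std::vector<long>(64, 0);
};

static long field_val_sketch(const ToyTable &t, std::size_t col)
{
  assert(t.read_set.test(col));   // mirrors the DBUG_ASSERT in Field::val()
  return t.columns[col];
}

static void field_store_sketch(ToyTable &t, std::size_t col, long v)
{
  assert(t.write_set.test(col));  // mirrors the DBUG_ASSERT in Field::store()
  t.columns[col] = v;
}
```

A handler that touches a column without the corresponding bit set trips the assert, which is exactly how the DBUG_ASSERT() mentioned above surfaced the column-marking bugs.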
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (This could potentially result in wrong values being inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no-longer-needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up some too-long lines.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
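The xxxx_create_handler() conversion described earlier relies on placement new into an arena. A self-contained toy bump allocator (not MySQL's MEM_ROOT; all names here are invented for illustration) shows the mechanics of 'new (mem_root) ha_myisam(table)':

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy bump allocator standing in for MEM_ROOT (illustrative only).
struct ToyRoot
{
  alignas(std::max_align_t) char buf[4096];
  std::size_t used = 0;

  void *alloc(std::size_t n)
  {
    // Round up to keep every returned pointer suitably aligned.
    n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
    assert(used + n <= sizeof(buf));
    void *p = buf + used;
    used += n;
    return p;
  }
};

// Placement-new overload so 'new (root) T(...)' carves T out of the arena.
void *operator new(std::size_t n, ToyRoot &root) { return root.alloc(n); }
void operator delete(void *, ToyRoot &) {}   // matching placement delete

struct ToyHandler
{
  int id;
  explicit ToyHandler(int i) : id(i) {}
};

// Shape of the converted factory: allocate the handler inside the caller's
// arena instead of on the global heap.
static ToyHandler *create_handler_sketch(ToyRoot &root, int id)
{
  return new (root) ToyHandler(id);
}
```

The object lives and dies with the arena, which is why the factory takes the MEM_ROOT as an argument: the caller decides the handler's lifetime by choosing which root to pass in.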
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
26 years ago
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines (note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite):
- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns.
  The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()).
  The handler is informed of any changes to the bitmaps after fix_fields() by a call to the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching in that function. As the signal may be sent several times, it's probably best to set a flag in the signal handler and redo the caching on the next read_row() / write_row() if the flag was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);

  and similar for the write map:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store();
    dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the Field store() / val() functions. (For non-debug binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This required some trivial variable name changes in the extra() functions.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value. It only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method.
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  to:

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precautions to remember any hidden primary key, so that it is able to update/delete any found row. The default implementation marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update, we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
  (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count, and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the unneeded functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ or MARK_COLUMNS_WRITE. Also renamed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that the field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() and Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
  The reason for this is that if we have a column marked only for write, we can't know at the MySQL level whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we pass the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (This could potentially result in wrong values inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the unneeded item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up overly long lines.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the test case binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
This changeset is largely a handler cleanup changeset (WL#3281), but it
includes fixes and cleanups that were found necessary while testing the
handler changes.

Changes that require code changes in other storage engines.
(Note that all changes are very straightforward and one should find all
issues by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)

- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to
  do statement-specific cleanups.
  (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
  should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in
  the query. read_row() and similar functions only need to read these
  columns.
- table->write_set contains a bitmap over all columns that will be updated
  in the query. write_row() and update_row() only need to update these
  columns.
  The above bitmaps should now be up to date in all contexts (including
  ALTER TABLE and filesort()).

  The handler is informed of any changes to the bitmaps after fix_fields()
  by a call to the virtual function handler::column_bitmaps_signal(). If
  the handler caches these bitmaps (instead of using table->read_set and
  table->write_set directly), it should redo the caching in this function.
  As the signal may be sent several times, it's probably best to set a
  flag in the signal handler and redo the caching on the next read_row() /
  write_row() if the flag was set.
- Removed the read_set and write_set bitmap objects from the handler
  class.
- Removed all column bit handling functions from the handler class.
  (One now uses the normal bitmap functions in my_bitmap.c instead of
  handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and
  table->write_set to see if a field is used in the query.
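The lazy bitmap-caching pattern recommended above (set a flag in
column_bitmaps_signal(), refresh the cache on the next row call) can be
sketched as follows. This is a self-contained illustration, not MySQL
code: ToyTable, ToyHandler and the std::bitset stand-in for MY_BITMAP are
all hypothetical.

```cpp
#include <bitset>
#include <cassert>

// Toy stand-in for the MY_BITMAP-based column sets.
using ColumnBitmap = std::bitset<32>;

struct ToyTable {
  ColumnBitmap read_set;   // columns the query needs to read
  ColumnBitmap write_set;  // columns the query will update
};

// A handler that caches the read bitmap instead of consulting
// table->read_set on every row: it marks the cache dirty when signaled
// and refreshes it lazily on the next read_row().
class ToyHandler {
 public:
  explicit ToyHandler(ToyTable *t) : table(t) {}

  // Called by the server after fix_fields() whenever the bitmaps change.
  // May be called several times, so we only set a flag here.
  void column_bitmaps_signal() { bitmaps_dirty = true; }

  void read_row() {
    if (bitmaps_dirty) {           // redo the caching only when needed
      cached_read_set = table->read_set;
      bitmaps_dirty = false;
    }
    // ... would now fetch only the columns set in cached_read_set ...
  }

  ColumnBitmap cached_read_set;
  bool bitmaps_dirty = true;       // start dirty: cache on first row

 private:
  ToyTable *table;
};
```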
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should
  now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
  that are not used in the query, one should install a temporary
  all-columns-used map while doing so. For this, we provide the following
  functions:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);

  and similarly for the write map:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->store();
  dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the
  field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should be
  optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and
  not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods), one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
  data_file_length etc) are now stored in a 'stats' struct. This makes it
  easier to know which status variables are provided by the base handler.
  This requires some trivial variable renames in the extra() function.
- New virtual function handler::records(). This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value; it only has to be
  'reasonable enough' for the optimizer to be able to choose a good
  optimization path.)
- Non-virtual handler::init() function added for caching of virtual
  constants from the engine.
- Removed has_transactions() virtual method.
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if
  the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that
  is to be used with 'new handler_name()' to allocate the handler in the
  right area. The xxxx_create_handler() function is also responsible for
  any initialization of the object before returning.

  For example, one should change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table,
                                        MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key(). This is called
  in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we
  don't have a primary key. This allows the handler to take precautions
  to remember any hidden primary key, to be able to update/delete any
  found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more
  flags).
- New/changed table_flags():
  - HA_HAS_RECORDS      Set if ::records() is supported
  - HA_NO_TRANSACTIONS  Set if engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                        Set if we should mark all primary key columns for
                        read when reading rows as part of a DELETE
                        statement. If there is no primary key, all
                        columns are marked for read.
  - HA_PARTIAL_COLUMN_READ
                        Set if engine will not read all columns in some
                        cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                        Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS         Renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                        Set this if we should mark ALL key columns for
                        read when reading rows as part of a DELETE
                        statement. In case of an update we will mark all
                        keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT
                        Set this if stats.records is exact.
                        (This saves us some extra records() calls when
                        optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT  Now one should instead use HA_HAS_RECORDS if
                        handler::records() gives an exact count() and
                        HA_STATS_RECORDS_IS_EXACT if stats.records is
                        exact.
  - HA_READ_RND_SAME    Removed (no one supported this one).
- Removed not-needed functions ha_retrieve_all_cols() and
  ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in
  the column usage bitmaps. (This found a LOT of bugs in the current
  column marking code.)
- Before in 5.1, all used columns were marked in read_set and only
  updated columns were marked in write_set. Now we only mark columns in
  read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and
  open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with
  handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in all
  store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of
  being set to an integer value it now takes one of the values
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ or MARK_COLUMNS_WRITE.
  Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are
  needed for doing the sort and choosing the rows.
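The ha_table_flags() cache described above can be sketched as a
non-virtual accessor over a virtual table_flags(). This is a
self-contained illustration under assumed names: SketchHandler and the
two flag constants are hypothetical stand-ins, not MySQL's definitions.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flag values, for illustration only.
static const uint64_t HA_HAS_RECORDS     = 1ULL << 0;
static const uint64_t HA_NO_TRANSACTIONS = 1ULL << 1;

class SketchHandler {
 public:
  virtual ~SketchHandler() = default;

  // Virtual: an engine may compute its flags dynamically.
  virtual uint64_t table_flags() const { return HA_NO_TRANSACTIONS; }

  // Non-virtual cached accessor, as described: the cache is filled at
  // creation time (init()) and refreshed on open().
  uint64_t ha_table_flags() const { return cached_table_flags; }

  void init() { cached_table_flags = table_flags(); }
  void open() { cached_table_flags = table_flags(); }

 private:
  uint64_t cached_table_flags = 0;
};
```

Hot paths then test ha_table_flags() without paying for a virtual call
on every row.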
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when
  one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read
  - def_write_set  Default bitmap for columns to be written
  - tmp_set        Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and
  write_set, that the handler should use to find out which columns are
  used in which way.
- The count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also
  traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the
  table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at
  the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the
  handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
  to set up new column bitmaps but don't signal the handler (as the
  handler has already been signaled about these before). Used for the
  moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are
  marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but without signaling the handler about this. This is
  mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in future
  table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are
  going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part
  of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler
  to allow it to quickly know that it only needs to read columns that are
  part of the key. (The handler can also use the column map for detecting
  this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other
  columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default
  column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns
  in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like
  Field->part_of_key except that a field in the clustered key is not
  assumed to be part of all indexes. (Used in opt_range.cc for faster
  loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to
  temporarily mark all columns as usable. The 'dbug_' versions are
  primarily intended for use inside a handler when it wants to just call
  the Field::store() & Field::val() functions, but doesn't need the
  column maps set for any other usage (i.e. bitmap_is_set() is never
  called).
- We can't use compare_records() to skip updates for handlers that return
  a partial column set when the read_set doesn't cover all columns in the
  write set.
  The reason for this is that if we have a column marked only for write,
  we can't at the MySQL level know if the value changed or not. The
  reason this worked before was that MySQL marked all to-be-written
  columns as also to be read. The new 'optimal' bitmaps exposed this
  'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object
  as a thread-specific variable for the handler. Instead we send the
  to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE,
  when using triggers, auto_increment fields etc). (Could potentially
  result in wrong values inserted in table handlers relying on the old
  column maps or on field->set_query_id being correct.) Especially when
  it comes to triggers, there may be cases where the old code would cause
  lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two
  flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed
  me to remove some wrong warnings about "Some non-transactional changed
  tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly
  reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us
  to lose some warnings about "Some non-transactional changed tables
  couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
  which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably
  caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that
  fields that are only part of UPDATE are properly handled. This fixed a
  bug in NDB and REPLACE where REPLACE wrongly copied some column values
  from the replaced row.
- For table handlers that don't support NULL in keys, we would give an
  error when creating a primary key with NULL fields, even after the
  fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.

Cleanups:
- Removed unused condition argument to setup_tables.
- Removed not-needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of
  row pointers (not used columns). This allowed me to remove some
  references to sql_command in filesort and should also enable us to
  return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the
  TABLE bitmaps, which allowed me to use bitmap functions instead of
  looping over all fields to create some needed bitmaps. (Faster and
  smaller code.)
- Broke up too-long lines.
- Moved some variable declarations to the start of functions for better
  code readability.
- Removed some unused arguments from functions (setup_fields(),
  mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
  usage.
- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of
  whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in
  ha_reset().
- Don't build table->normalized_path, as this is now identical to
  table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier
  to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row based logging (as shown by testcase
  binlog_row_mix_innodb_myisam.result). Mats has promised to look into
  this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better
  that someone else also tests this thoroughly.) Lars has promised to do
  this.
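The compare_records() restriction noted earlier (updates can only be
skipped when the old values of all written columns were actually read)
can be illustrated with a small sketch. The helper name and the
std::bitset stand-in for the column maps are hypothetical; this mirrors
the read_set/write_set relationship described above, not MySQL's actual
implementation.

```cpp
#include <bitset>
#include <cassert>

using ColumnBitmap = std::bitset<4>;

// A compare_records()-style "did anything change?" check is only sound
// when every written column was also read. With HA_PARTIAL_COLUMN_READ
// semantics, a column marked only for write was never fetched, so its
// "old" value is unknown and the comparison would be meaningless.
bool safe_to_skip_update(const ColumnBitmap &read_set,
                         const ColumnBitmap &write_set,
                         bool engine_reads_partial_columns) {
  if (!engine_reads_partial_columns)
    return true;  // old row fully populated; comparison is meaningful
  // With partial reads, the write set must be a subset of the read set.
  return (write_set & ~read_set).none();
}
```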
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
Bug#19025: 4.1 mysqldump doesn't correctly dump "auto_increment = [int]"

mysqldump / SHOW CREATE TABLE will show the NEXT available value for the PK, rather than the *first* one that was available (the one named in the original CREATE TABLE ... AUTO_INCREMENT = ... statement).

This should produce correct and robust behaviour for the obvious use cases -- when no data were inserted, we'll produce a statement featuring the same value the original CREATE TABLE had; if we dump with values, INSERTing the values on the target machine should set the correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT = ... to do that). Lastly, just the CREATE statement (with no data) for a table that saw inserts would still result in a table that new values can safely be inserted into.

There seems to be no robust way, however, to see whether the next_ID field is > 1 because it was set to something else with CREATE TABLE ... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column in the table (but no initial value was set with AUTO_INCREMENT = ...) and then one or more rows were INSERTed, counting up next_ID. This means that in both cases, we'll generate an AUTO_INCREMENT = ... clause in SHOW CREATE TABLE / mysqldump. As we also show info on, say, charsets even if the user did not explicitly give that info in their own CREATE TABLE, this shouldn't be an issue.

As per above, the next_ID will be affected by any INSERTs that have taken place, though. This /should/ result in correct and robust behaviour, but it may look non-intuitive to some users if they CREATE TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have SHOW CREATE TABLE give them a different value (say, CREATE TABLE ... AUTO_INCREMENT = 1006), so the docs should possibly feature a caveat to that effect.

It's not very intuitive the way it works now (with the fix), but it's *correct*. We're not storing the original value anyway; if we wanted that, we'd have to change the on-disk representation. If we do dump/load cycles with empty DBs, nothing will change.

This changeset includes an additional test case that proves that tables with rows will create the same next_ID for AUTO_INCREMENT = ... across dump/restore cycles.

Confirmed by support as a likely solution for the client's problem.
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines (note that all changes are very straightforward; one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite):
- New optional handler function introduced: reset(). This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns.
  The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() through the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching there. As the signal may be sent several times, it's probably best to set a flag in column_bitmaps_signal() and redo the caching on the next read_row() / write_row() if the flag was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see whether a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);

  and similar for the write map:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store(...);
    dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This required some trivial variable renames in the extra() function.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table,
                                          MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precautions in remembering any hidden primary key so that it is able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which any key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(); the cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps(): initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set(): set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal(): for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns(): install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps(): install the normal read and write column bitmaps, but without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert(): allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position(): allows us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column(): tell the handler we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index(): mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset(): in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index(): restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map(): functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended inside a handler when it wants to just call Field::store() & Field::val() functions, but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write_set. The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values inserted into table handlers relying on the old column maps or field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the not needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up too-long lines.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
WL#2977 and WL#2712: global and session-level variable to set the binlog format (row/statement), and new binlog format called "mixed" (which is statement-based except where only row-based is correct; in this cset that means if a UDF or UUID is used, and more cases could be added in a later 5.1 release):

    SET GLOBAL|SESSION BINLOG_FORMAT = row|statement|mixed|default;

The global default is statement unless cluster is enabled (then it's row), as in 5.1-alpha. It's not possible to use SET on this variable if a session is currently in row-based mode and has open temporary tables (because CREATE TEMPORARY TABLE was not binlogged, so the temp table is not known on the slave), or if NDB is enabled (because NDB does not support such a change on the fly, though it will later), or if in a stored function (see below). The added tests cover when SET is possible or impossible, its effects, and the mixed mode, including in prepared statements and in stored procedures and functions.

Caveats:
a) The mixed mode will not work for stored functions: in mixed mode, a stored function will always be binlogged as one call and in a statement-based way (e.g. INSERT VALUES(myfunc()) or SELECT myfunc()).
b) For the same reason, changing the thread's binlog format inside a stored function is refused with an error message.
c) The same problems apply to triggers; implementing b) for triggers will be done later (will ask Dmitri).

Additionally, as the binlog format is now changeable by each user for his session, I removed the implications which were done at startup, where row-based mode automatically set log-bin-trust-routine-creators to 1 (not possible anymore, as a user can now switch to stmt-based and do nasty things again) and automatically set --innodb-locks-unsafe-for-binlog to 1 (which was anyway theoretically incorrect, as it disabled phantom protection).

Plus fixes for compiler warnings.
Fix for Bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table); post-review change - use a pointer instead of a copy on the stack. WL#1034 (Internal CRON).

This patch adds an INFORMATION_SCHEMA.EVENTS table with the following format:
EVENT_CATALOG  - MYSQL_TYPE_STRING (always NULL)
EVENT_SCHEMA   - MYSQL_TYPE_STRING (the database)
EVENT_NAME     - MYSQL_TYPE_STRING (the name)
DEFINER        - MYSQL_TYPE_STRING (user@host)
EVENT_BODY     - MYSQL_TYPE_STRING (the body from mysql.event)
EVENT_TYPE     - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING")
EXECUTE_AT     - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME", otherwise NULL)
INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING, otherwise NULL)
INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING, otherwise NULL)
SQL_MODE       - MYSQL_TYPE_STRING (for now NULL)
STARTS         - MYSQL_TYPE_TIMESTAMP (starts from mysql.event)
ENDS           - MYSQL_TYPE_TIMESTAMP (ends from mysql.event)
STATUS         - MYSQL_TYPE_STRING (ENABLED | DISABLED)
ON_COMPLETION  - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE)
CREATED        - MYSQL_TYPE_TIMESTAMP
LAST_ALTERED   - MYSQL_TYPE_TIMESTAMP
LAST_EXECUTED  - MYSQL_TYPE_TIMESTAMP
EVENT_COMMENT  - MYSQL_TYPE_STRING

SQL_MODE is NULL for now, because the value is still not stored in mysql.event. Support will be added as a fix for another bug.

This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern]:
1. SHOW EVENTS always shows only the events of the current user, because the PK of mysql.event is (definer, db, name); several users may have an event with the same name -> no information disclosure.
2. SHOW FULL EVENTS shows the events (in the current db, as with SHOW EVENTS) of all users. The user has to have the PROCESS privilege; if not, SHOW FULL EVENTS behaves like SHOW EVENTS.
3. If [FROM db] is specified then that db is considered.
4. Event names can be filtered with a LIKE pattern.

SHOW EVENTS returns a table with the following columns, which are a subset of the data returned by SELECT * FROM I_S.EVENTS:
Db, Name, Definer, Type, Execute at, Interval value, Interval field, Starts, Ends, Status
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the
  table handler DOES NOT support transactions.

- The 'xxxx_create_handler()' function now has a MEM_ROOT *mem_root argument
  that is to be used with 'new handler_name()' to allocate the handler in
  the right area. The xxxx_create_handler() function is also responsible for
  any initialization of the object before returning.

  For example, one should change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table,
                                        MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't
  have a primary key. This allows the handler to take precautions in
  remembering any hidden primary key, to be able to update/delete any found
  row. The default handler marks all columns to be read.

- handler::table_flags() now returns a ulonglong (to allow for more flags).

- New/changed table_flags():
  - HA_HAS_RECORDS        Set if ::records() is supported
  - HA_NO_TRANSACTIONS    Set if engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
    Set if we should mark all primary key columns for read when reading
    rows as part of a DELETE statement. If there is no primary key, all
    columns are marked for read.
  - HA_PARTIAL_COLUMN_READ
    Set if the engine will not read all columns in some cases (based on
    table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
    Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS           Renamed to HA_DUPLICATE_POS
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
    Set this if we should mark ALL key columns for read when reading rows
    as part of a DELETE statement. In case of an update we will mark for
    read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT
    Set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)

- Removed table_flags():
  - HA_NOT_EXACT_COUNT
    Now one should instead use HA_HAS_RECORDS if handler::records() gives
    an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME
    Removed (no one supported this one).

- Removed not needed functions ha_retrieve_all_cols() and
  ha_retrieve_all_pk().

- Renamed handler::dupp_pos to handler::dup_pos.

- Removed unused variable handler::sortkey.

Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset().

- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set in the
  column usage bitmaps. (This found a LOT of bugs in the current column
  marking code.)

- Before in 5.1, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns in read_set
  for which we need a value.

- Column bitmaps are created in open_binary_frm() and
  open_table_from_share(). (Before this was in table.cc.)

- handler::table_flags() calls are replaced with handler::ha_table_flags().

- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in all
  store()/val() functions to catch wrong usage.)

- thd->set_query_id is renamed to thd->mark_used_columns, and instead of
  being set to an integer value it now takes one of the values
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE.
  Also changed all variables named 'set_query_id' to mark_used_columns.

- In filesort() we now inform the handler of exactly which columns are
  needed for doing the sort and choosing the rows.
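The thd->mark_used_columns modes mentioned above can be sketched
stand-alone: depending on the mode, a resolved field reference is recorded
in the table's read set, in its write set, or not at all. FakeTable and
mark_column_used() below are simplified stand-ins for illustration, not
MySQL source.

```cpp
#include <bitset>
#include <cassert>

enum MarkColumns { MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE };

struct FakeTable {
  std::bitset<32> read_set, write_set;   // stand-ins for the TABLE bitmaps
};

// Called when a field reference is resolved (e.g. during fix_fields()):
// the current mode decides which usage map, if any, records the column.
void mark_column_used(FakeTable &t, unsigned field_index, MarkColumns mode) {
  switch (mode) {
  case MARK_COLUMNS_READ:  t.read_set.set(field_index);  break;
  case MARK_COLUMNS_WRITE: t.write_set.set(field_index); break;
  case MARK_COLUMNS_NONE:  break;        // resolve only, no usage recorded
  }
}
```

An enum of explicit modes replaces the old integer set_query_id flag, which
could not distinguish a column that will be read from one that will be
written.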
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when
  one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places.)

- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read
  - def_write_set  Default bitmap for columns to be written
  - tmp_set        Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and
  write_set, that the handler should use to find out which columns are used
  in which way.

- The COUNT(*) optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).

- Added extra argument to Item::walk() to indicate if we should also
  traverse sub queries.

- Added TABLE parameter to cp_buffer_from_ref().

- Don't close tables created with CREATE ... SELECT but keep them in the
  table cache. (Faster usage of newly created tables.)

New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables at the
  start of new statements.

- table->column_bitmaps_set() to set up new column bitmaps and signal the
  handler about this.

- table->column_bitmaps_set_no_signal() for a few cases where we need to
  set up new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment only
  in opt_range.cc when doing ROR scans.

- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.

- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but without signaling the handler about this.
  This is mainly used when creating TABLE instances.

- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)

- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in future
  table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function.)

- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.

- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
  it quickly know that it only needs to read the columns that are part of
  the key. (The handler can also use the column map for detecting this, but
  simpler/faster handlers can just monitor the extra() call.)

- table->mark_columns_used_by_index_no_reset() to mark, in addition to the
  other columns, all columns that are used by the given key.

- table->restore_column_maps_after_mark_index() to restore the default
  column maps after a call to table->mark_columns_used_by_index().

- New item function register_field_in_read_map(), for marking used columns
  in table->read_set. Used by filesort() to mark all used columns.

- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops.)

- Maintain Field->part_of_key_not_clustered, which is like
  Field->part_of_key but where the field in the clustered key is not
  assumed to be part of all indexes. (Used in opt_range.cc for faster
  loops.)

- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to
  temporarily mark all columns as usable. The 'dbug_' versions are
  primarily intended for use inside a handler when it wants to just call
  the Field::store() & Field::val() functions, but doesn't need the column
  maps set for any other usage (i.e. bitmap_is_set() is never called).

- We can't use compare_records() to skip updates for handlers that return a
  partial column set when the read_set doesn't cover all columns in the
  write set.
  The reason for this is that if we have a column marked only for write, we
  can't at the MySQL level know if the value changed or not. The reason
  this worked before was that MySQL marked all to-be-written columns as
  also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.

- open_table_from_share() no longer sets up a temporary MEM_ROOT object as
  a thread-specific variable for the handler. Instead we send the
  to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:

- Column marking was not done correctly in a lot of cases.
  (ALTER TABLE, when using triggers, auto_increment fields etc.)
  (Could potentially result in wrong values inserted in table handlers
  relying on the old column maps or on field->set_query_id being correct.)
  Especially when it comes to triggers, there may be cases where the old
  code would cause lost/wrong values for NDB and/or InnoDB tables.

- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
  This allowed me to remove some wrong warnings about:
  "Some non-transactional changed tables couldn't be rolled back"

- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly
  reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to
  lose some warnings about
  "Some non-transactional changed tables couldn't be rolled back".

- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
  which could cause delete_table to report random failures.

- Fixed core dumps for some tests when running with --debug.

- Added missing FN_LIBCHAR in mysql_rm_tmp_tables().
  (This has probably caused us to not properly remove temporary files
  after a crash.)

- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.

- If we get a duplicate row on insert, change the column maps to read and
  write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that
  fields that are only part of UPDATE are properly handled. This fixed a
  bug in NDB and REPLACE where REPLACE wrongly copied some column values
  from the replaced row.

- For table handlers that don't support NULL in keys, we would give an
  error when creating a primary key with NULL fields, even after the fields
  had been automatically converted to NOT NULL.

- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.

Cleanups:

- Removed the unused condition argument to setup_tables().

- Removed the not needed item function reset_query_id_processor().

- Field->add_index is removed. Now this is instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX).

- Field->fieldnr is removed (use field->field_index instead).

- New argument to filesort() to indicate that it should return a set of
  row pointers (not used columns). This allowed me to remove some
  references to sql_command in filesort() and should also enable us to
  return column results in some cases where we couldn't before.

- Changed column bitmap handling in opt_range.cc to be aligned with the
  TABLE bitmaps, which allowed me to use bitmap functions instead of
  looping over all fields to create some needed bitmaps. (Faster and
  smaller code.)

- Broke up too-long lines where found.

- Moved some variable declarations to the start of functions for better
  code readability.

- Removed some unused arguments from functions.
  (setup_fields(), mysql_prepare_insert_check_table())

- setup_fields() now takes an enum instead of an int for marking column
  usage.

- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.

- Changed some constants to enums and defines.

- Using separate column read and write sets allows for easier checking of
  whether a timestamp field was set by the statement.
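The last cleanup above can be sketched stand-alone: with separate read and
write sets, "did this statement set the timestamp column?" becomes a single
bitmap test on the write set. FakeTable and the field index below are
simplified, hypothetical stand-ins, not MySQL source.

```cpp
#include <bitset>
#include <cassert>

using ColumnMap = std::bitset<32>;

struct FakeTable {                       // stand-in for TABLE
  ColumnMap read_set, write_set;
  unsigned timestamp_field_index = 0;    // hypothetical TIMESTAMP column slot
};

bool statement_sets_timestamp(const FakeTable &t) {
  // With a single mixed usage marking (the old field->query_id scheme),
  // "used" could not be told apart from "written"; with a dedicated write
  // set the answer is one bit test.
  return t.write_set.test(t.timestamp_field_index);
}
```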
- Removed calls to free_io_cache(), as this is now done automatically in
  ha_reset().

- Don't build table->normalized_path, as this is now identical to
  table->path (after bar's fixes to convert filenames).

- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier
  to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row based logging (as shown by the test case
  binlog_row_mix_innodb_myisam.result). Mats has promised to look into
  this.

- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly.) Lars has promised to do this.
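The compare_records() restriction described under the handler changes boils
down to a bitmap subset test: an update may only be skipped by comparing
before/after row images when every column in the write set was also read.
The sketch below is a simplified stand-alone illustration; bitmap_is_subset()
here is a stand-in for the my_bitmap.c function of the same name, and
can_compare_records() is a hypothetical helper, not MySQL source.

```cpp
#include <bitset>
#include <cassert>

using ColumnMap = std::bitset<32>;

// Stand-in for my_bitmap.c's bitmap_is_subset(): no bit of 'sub' may lie
// outside 'super'.
bool bitmap_is_subset(const ColumnMap &sub, const ColumnMap &super) {
  return (sub & ~super).none();
}

// True when it is safe to skip a no-op update via row comparison. For
// engines without HA_PARTIAL_COLUMN_READ every column is read anyway, so
// the comparison is always safe.
bool can_compare_records(const ColumnMap &read_set, const ColumnMap &write_set,
                         bool partial_column_read) {
  return !partial_column_read || bitmap_is_subset(write_set, read_set);
}
```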
20 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right memory area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions to remember any hidden primary key so it is able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
  (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before, in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The COUNT(*) optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but not signal the handler (as the handler has already been signaled about these before). For the moment used only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys a set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
  The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values inserted in table handlers relying on the old column maps or field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the not needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up found too-long lines.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement.
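The temporary all-columns-used pattern described earlier (tmp_use_all_columns() / tmp_restore_column_map()) boils down to a swap-and-restore of the current map. A minimal model, with std::bitset standing in for MY_BITMAP and the signatures simplified for illustration (the real functions take a TABLE pointer):

```cpp
#include <bitset>
#include <cassert>

// std::bitset stands in for MY_BITMAP; this models the swap-and-restore
// behaviour only, not the real MySQL signatures.
using ColumnSet = std::bitset<8>;

// Model of tmp_use_all_columns(): install an all-columns-used map and
// hand back the previous map so the caller can restore it.
ColumnSet tmp_use_all_columns(ColumnSet &current) {
    ColumnSet saved = current;
    current.set();                 // every column now counts as used
    return saved;
}

// Model of tmp_restore_column_map(): put the saved map back in place.
void tmp_restore_column_map(ColumnSet &current, const ColumnSet &saved) {
    current = saved;
}
```

The dbug_ variants follow the same shape, except that in non-DBUG builds they compile down to no-ops since only the asserts consume the temporary map.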
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
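The compare_records() restriction described above (skipping unchanged-row updates is only safe when every possibly-written column was also read) reduces to a subset check between the two maps. Sketched with the same stand-in bitmaps; the function name is invented for illustration:

```cpp
#include <bitset>
#include <cassert>

// std::bitset stands in for MY_BITMAP (table->read_set / write_set).
using ColumnSet = std::bitset<8>;

// The compare_records()-style skip-update optimization is only safe
// when every column that may be written was also read, i.e. write_set
// is a subset of read_set. A column marked only for write gives the
// MySQL level no old value to compare against.
bool read_set_covers_write_set(const ColumnSet &read_set,
                               const ColumnSet &write_set) {
    return (write_set & ~read_set).none();
}
```

Before this changeset the check was trivially true, because every to-be-written column was also marked for read; the new minimal read sets are what made the explicit check necessary.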
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the
  table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is
  to be used with 'new handler_name()' to allocate the handler in the right
  area. The xxxx_create_handler() function is also responsible for any
  initialization of the object before returning. For example, one should
  change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key(). This is called
  in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't
  have a primary key. This allows the handler to take precautions in
  remembering any hidden primary key so that it is able to update/delete any
  found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS      Set if ::records() is supported.
  - HA_NO_TRANSACTIONS  Set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                        Set if we should mark all primary key columns for
                        read when reading rows as part of a DELETE statement.
                        If there is no primary key, all columns are marked
                        for read.
  - HA_PARTIAL_COLUMN_READ
                        Set if the engine will not read all columns in some
                        cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                        Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS         Renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                        Set this if we should mark ALL key columns for read
                        when reading rows as part of a DELETE statement. In
                        case of an update we will mark all keys for read for
                        which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT
                        Set this if stats.records is exact.
                        (This saves us some extra records() calls when
                        optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT  Now one should instead use HA_HAS_RECORDS if
                        handler::records() gives an exact count() and
                        HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME    Removed (no one supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and
  ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set in the
  column usage bitmaps. (This found a LOT of bugs in the current column
  marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns in read_set for
  which we need a value.
- Column bitmaps are created in open_binary_frm() and
  open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in all
  store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of
  being set to an integer value it now takes one of the values
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE.
  Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are
  needed for doing the sort and choosing the rows.
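The ha_table_flags() caching described above (virtual table_flags() resolved once, then served from a plain field on the hot path) can be sketched as follows. The TOY_* flag values and class names are illustrative assumptions; the real flags and dispatch live in handler.h:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative flag bits; the real values are defined in handler.h.
static const uint64_t TOY_HA_NO_TRANSACTIONS = 1ULL << 0;
static const uint64_t TOY_HA_HAS_RECORDS     = 1ULL << 1;

class ToyBaseHandler {
  uint64_t cached_table_flags = 0;
protected:
  // Engines override this; it is assumed constant for an open table.
  virtual uint64_t table_flags() const { return TOY_HA_NO_TRANSACTIONS; }
public:
  virtual ~ToyBaseHandler() {}
  // Called at creation time and again on open, as described above.
  void cache_table_flags() { cached_table_flags = table_flags(); }
  // Hot-path accessor: no virtual call.
  uint64_t ha_table_flags() const { return cached_table_flags; }
};

class ToyEngine : public ToyBaseHandler {
protected:
  uint64_t table_flags() const override {
    return TOY_HA_NO_TRANSACTIONS | TOY_HA_HAS_RECORDS;
  }
};
```

The design point is that callers test flags very often (e.g. the COUNT(*) optimization checking HA_HAS_RECORDS), so reading a cached field beats a virtual dispatch on every check.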
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one
  needs a column bitmap with all columns set. (This is used for
  table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read.
  - def_write_set  Default bitmap for columns to be written.
  - tmp_set        Can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used in which
  way.
- The count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also
  traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT, but keep them in the
  table cache. (Faster usage of newly created tables.)

New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables at the
  start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the
  handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to
  set up new column bitmaps but don't signal the handler (as the handler has
  already been signaled about these before). Used for the moment only in
  opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but without signaling the handler about this. This is
  mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in future
  table->position() calls. (This replaces the
  table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
  it quickly know that it only needs to read the columns that are part of
  the key. (The handler can also use the column map for detecting this, but
  simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other
  columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default
  column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns
  in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like
  Field->part_of_key except that a field in the clustered key is not
  assumed to be part of all indexes. (Used in opt_range.cc for faster
  loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to
  temporarily mark all columns as usable. The 'dbug_' versions are
  primarily intended for use inside a handler when it wants to just call
  Field::store() & Field::val() functions, but doesn't need the column maps
  set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a
  partial column set when the read_set doesn't cover all columns in the
  write set.
  The reason for this is that if we have a column marked only for write, we
  can't at the MySQL level know whether the value changed or not. The
  reason this worked before was that MySQL marked all to-be-written columns
  as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as
  a thread-specific variable for the handler. Instead we send the
  to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:

- Column marking was not done correctly in a lot of cases (ALTER TABLE,
  when using triggers, auto_increment fields etc.). (This could potentially
  result in wrong values being inserted in table handlers relying on the
  old column maps or on field->set_query_id being correct.) Especially when
  it comes to triggers, there may be cases where the old code would cause
  lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to
  remove some wrong warnings about "Some non-transactional changed tables
  couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly
  reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to
  lose some warnings about "Some non-transactional changed tables couldn't
  be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
  which could cause delete_table() to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably
  caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that
  fields that are only part of the UPDATE are properly handled. This fixed
  a bug in NDB and REPLACE where REPLACE wrongly copied some column values
  from the replaced row.
- For table handlers that don't support NULL in keys, we would give an
  error when creating a primary key with NULL fields, even after the fields
  had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.

Cleanups:

- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row
  pointers (not used columns). This allowed me to remove some references to
  sql_command in filesort() and should also enable us to return column
  results in some cases where we couldn't before.
- Changed the column bitmap handling in opt_range.cc to be aligned with the
  TABLE bitmaps, which allowed me to use bitmap functions instead of
  looping over all fields to create some needed bitmaps. (Faster and
  smaller code.)
- Broke up some too-long lines.
- Moved some variable declarations to the start of functions for better
  code readability.
- Removed some unused arguments from functions (setup_fields(),
  mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
  usage.
- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of
  whether a timestamp field was set by the statement.
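The column-marking modes mentioned above (mark_used_columns / the enum now passed to setup_fields() instead of an int) can be sketched with toy names. The TOY_* enum and helper below are illustrative assumptions about what resolving one field reference does under each mode, not the real name-resolution code:

```cpp
#include <cassert>
#include <cstdint>

// Toy version of the MARK_COLUMNS_* enum that replaced the old integer
// set_query_id. Names with a TOY_ prefix are stand-ins for this sketch.
enum ToyMarkColumns {
  TOY_MARK_COLUMNS_NONE,   // resolve names only, mark nothing
  TOY_MARK_COLUMNS_READ,   // field value is needed: mark in read_set
  TOY_MARK_COLUMNS_WRITE   // field will be assigned: mark in write_set
};

struct ToyMaps {
  uint64_t read_set  = 0;
  uint64_t write_set = 0;
};

// What resolving one field reference does under each marking mode.
void toy_mark_field(ToyMaps &maps, int field_index, ToyMarkColumns mode) {
  uint64_t bit = uint64_t(1) << field_index;
  switch (mode) {
  case TOY_MARK_COLUMNS_READ:  maps.read_set  |= bit; break;
  case TOY_MARK_COLUMNS_WRITE: maps.write_set |= bit; break;
  case TOY_MARK_COLUMNS_NONE:  break;
  }
}
```

An enum makes the three modes explicit at call sites, where the old int flag left readers guessing which values were legal.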
- Removed calls to free_io_cache(), as this is now done automatically in
  ha_reset().
- Don't build table->normalized_path, as this is now identical to
  table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row based logging (as shown by testcase
  binlog_row_mix_innodb_myisam.result). Mats has promised to look into
  this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added
  several test cases for this, but in this case it's better that someone
  else also tests this thoroughly.) Lars has promised to do this.
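The xxxx_create_handler() change described earlier (allocating the handler inside a MEM_ROOT via placement new, as in 'new (mem_root) ha_myisam(table)') can be sketched with a toy fixed-size arena. ToyArena and ToyHandler are illustrative stand-ins; the real MEM_ROOT grows dynamically and is freed in one shot:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy arena standing in for MEM_ROOT. Only the placement-new wiring is
// the point here; there is no growth or free-all as in the real thing.
struct ToyArena {
  alignas(std::max_align_t) unsigned char buf[4096];
  std::size_t used = 0;
  void *alloc(std::size_t n) {
    std::size_t a = alignof(std::max_align_t);
    n = (n + a - 1) / a * a;   // keep each allocation max-aligned
    void *p = buf + used;
    used += n;
    return p;
  }
};

// Placement operator new, analogous to 'new (mem_root) ha_myisam(table)'.
inline void *operator new(std::size_t n, ToyArena &arena) {
  return arena.alloc(n);
}

struct ToyHandler {
  int share_id;
  explicit ToyHandler(int id) : share_id(id) {}
};

// Factory in the style of the new xxxx_create_handler() signature:
// construct the fully initialized object inside the caller's arena.
ToyHandler *toy_create_handler(int share_id, ToyArena &arena) {
  return new (arena) ToyHandler(share_id);
}
```

Allocating the handler in the caller-supplied root ties its lifetime to that root, which is what lets the upper layer drop the thread-specific temporary MEM_ROOT trick mentioned in the changeset.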
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
  use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that
  are not used in the query, it should install a temporary all-columns-used
  map while doing so. For this, we provide the following functions:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);

  and similarly for the write map:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store(...);
    dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the
  Field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should be
  optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not
  just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods), one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
  data_file_length etc.) are now stored in a 'stats' struct. This makes it
  easier to know which status variables are provided by the base handler.
  This required some trivial variable name changes in the extra() functions.
- New virtual function handler::records(). This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value. It only has to be
  'reasonable enough' for the optimizer to be able to choose a good
  optimization path.)
- Non-virtual handler::init() function added for caching of virtual
  constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead
  return HA_NO_TRANSACTIONS in table_flags() if the table handler does NOT
  support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is
  to be used with 'new handler_name()' to allocate the handler in the right
  area. The xxxx_create_handler() function is also responsible for any
  initialization of the object before returning. For example, one should
  change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  to:

    static handler *myisam_create_handler(TABLE_SHARE *table,
                                          MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't
  have a primary key. This allows the handler to take precautions so that it
  can remember any hidden primary key and be able to update/delete any found
  row. The default implementation marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary
    key columns for read when reading rows as part of a DELETE statement.
    If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in
    some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to
    HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set if we should mark ALL key
    columns for read when reading rows as part of a DELETE statement. In
    case of an update we will mark for read all keys for which a key part
    changed value.
  - HA_STATS_RECORDS_IS_EXACT: set if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now use HA_HAS_RECORDS if
    handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT
    if stats.records is exact.
  - HA_READ_RND_SAME: removed (no engine supported it).
- Removed the no longer needed functions ha_retrieve_all_cols() and
  ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper-level handler changes:

- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(); the
  cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):

- DBUG_ASSERT()s added to check that column usage matches what is set in the
  column usage bitmaps. (This found a LOT of bugs in the current column
  marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns in read_set for
  which we need a value.
- Column bitmaps are created in open_binary_frm() and
  open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- To call field->val() you must have the corresponding bit set in
  table->read_set. To call field->store() you must have the corresponding
  bit set in table->write_set. (There are asserts in all store()/val()
  functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of
  being set to an integer value it now takes the values MARK_COLUMNS_NONE,
  MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named
  'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are
  needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one
  needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read
  - def_write_set  Default bitmap for columns to be written
  - tmp_set        Can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used in which
  way.
- The COUNT(*) optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also
  traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the
  table cache. (Faster usage of newly created tables.)

New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables at the
  start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the
  handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to
  set up new column bitmaps but not signal the handler (as the handler has
  already been signaled about them before). Used for the moment only in
  opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
  as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, without signaling the handler about this. This is mainly
  used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in future
  table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk() function.)
- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of an auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
  it quickly know that it only needs to read the columns that are part of
  the key. (The handler can also use the column map for detecting this, but
  simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to the other
  columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default
  column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in
  table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the
  query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like
  Field->part_of_key except that a field in the clustered key is not assumed
  to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
  tmp_use_all_columns() and tmp_restore_column_map() functions to
  temporarily mark all columns as usable. The 'dbug_' versions are primarily
  intended for use inside a handler when it wants to just call the
  Field::store() and Field::val() functions but doesn't need the column maps
  set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a
  partial column set when the read set doesn't cover all columns in the
  write set. The reason for this is that if a column is marked only for
  write, we can't at the MySQL level know whether its value changed or not.
  The reason this worked before was that MySQL marked all to-be-written
  columns as also to be read; the new 'optimal' bitmaps exposed this hidden
  bug.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a
  thread-specific variable for the handler. Instead we pass the to-be-used
  MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:

- Column marking was not done correctly in a lot of cases (ALTER TABLE, when
  using triggers, auto_increment fields etc.). This could potentially result
  in wrong values being inserted in table handlers relying on the old column
  maps or on field->set_query_id being correct. Especially when it comes to
  triggers, there may be cases where the old code would cause lost/wrong
  values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to
  remove some wrong warnings about "Some non-transactional changed tables
  couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly
  reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to
  lose some warnings about "Some non-transactional changed tables couldn't
  be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
  which could cause delete_table() to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably
  caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost
  entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields
  that are only part of UPDATE are properly handled. This fixed a bug in
  NDB's REPLACE handling where REPLACE wrongly copied some column values
  from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had
  been automatically converted to NOT NULL.
- Creating a primary key over a SPATIAL key would fail if the field was not
  declared as NOT NULL.

Cleanups:

- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row
  pointers (not used columns). This allowed me to remove some references to
  sql_command in filesort() and should also enable us to return column
  results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the
  TABLE bitmaps, which allowed me to use bitmap functions instead of looping
  over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code
  readability.
- Removed some unused arguments from functions (setup_fields(),
  mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column
  usage.
- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of
  whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in
  ha_reset().
- Don't build table->normalized_path, as this is now identical to
  table->path (after Bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row-based logging (as shown by the test case
  binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added
  several test cases for this, but in this case it's better that someone
  else also tests it thoroughly.) Lars has promised to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table,
                                          MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions, remembering any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed not-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed unused variable handler::sortkey.

Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we actually need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- To call field->val() you must have the corresponding bit set in table->read_set. To call field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
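The HA_HAS_RECORDS / HA_STATS_RECORDS_IS_EXACT split described above can be sketched as follows. The flag values, type names and fallback behavior here are illustrative only (the real constants live in handler.h, and a real server would fall back to scanning rather than returning a placeholder):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative flag bits; the real values in handler.h differ, but the
// ulonglong-style bitmask idea is the same.
constexpr std::uint64_t HA_HAS_RECORDS            = 1ULL << 0;
constexpr std::uint64_t HA_STATS_RECORDS_IS_EXACT = 1ULL << 1;

// Stand-in for a handler: records() is the engine-provided exact count,
// stats_records the cached statistic that may be approximate.
struct MockHandler {
  std::uint64_t flags;
  std::uint64_t exact_rows;
  std::uint64_t stats_records;
  std::uint64_t records() const { return exact_rows; }
};

// COUNT(*) shortcut in the spirit of the optimizer change: ask the engine
// via records() only when it advertises HA_HAS_RECORDS; otherwise trust
// stats.records only if the engine says it is exact.
std::uint64_t count_star(const MockHandler &h) {
  if (h.flags & HA_HAS_RECORDS)
    return h.records();
  if (h.flags & HA_STATS_RECORDS_IS_EXACT)
    return h.stats_records;
  return 0;  // placeholder for "must scan the table to count"
}
```

An engine that sets neither flag forces the slow path, which is why removing HA_NOT_EXACT_COUNT in favor of these two positive flags makes the optimizer's decision explicit.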
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse sub-queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but not signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it just wants to call the Field::store() / Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
  The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we pass the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:

- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values inserted in table handlers relying on the old column maps or field->query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:

- Removed the unused condition argument to setup_tables().
- Removed the not-needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed the column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up too-long lines where found.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in ha_reset().
- Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
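For reference, the column-map save/restore pattern behind tmp_use_all_columns() / tmp_restore_column_map() described earlier in this changelog can be sketched with stand-in types (the real functions operate on a TABLE and a MY_BITMAP and return a my_bitmap_map*; the names below are only a sketch of the idea):

```cpp
#include <bitset>
#include <cassert>

constexpr std::size_t kColumns = 4;
using ColumnMap = std::bitset<kColumns>;  // stand-in for MY_BITMAP

// Swap in an all-columns-set map and hand back the previous one so the
// caller can restore it afterwards; mirrors tmp_use_all_columns() in
// spirit only.
ColumnMap use_all_columns(ColumnMap &map) {
  ColumnMap old = map;
  map.set();  // every column temporarily marked as usable
  return old;
}

// Mirrors tmp_restore_column_map(): put the saved map back in place.
void restore_column_map(ColumnMap &map, const ColumnMap &old) {
  map = old;
}
```

Between the two calls, every Field store()/val() passes the bitmap check; after the restore, the query's original (possibly partial) column map is in effect again.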
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions to remember any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no engine supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now has the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, but without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to the other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call Field::store() & Field::val() functions, but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc). (This could potentially result in wrong values being inserted in table handlers relying on the old column maps or field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some DBUG_PRINT(.."%lx") that had missed using "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
  Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

      static handler *myisam_create_handler(TABLE_SHARE *table)
      {
        return new ha_myisam(table);
      }

  ->

      static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
      {
        return new (mem_root) ha_myisam(table);
      }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions in remembering any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS        Set if ::records() is supported.
  - HA_NO_TRANSACTIONS    Set if engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                          Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ
                          Set if engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                          Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS           Renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                          Set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which any key part changed value.
  - HA_STATS_RECORDS_IS_EXACT
                          Set this if stats.records is exact.
    (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT    Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count, and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME      Removed (no one supported this one).
- Removed the not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before, in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set    Default bitmap for columns to be read.
  - def_write_set   Default bitmap for columns to be written.
  - tmp_set         Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, but without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow it to quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it just wants to call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set and where the read_set doesn't cover all columns in the write set.
  The reason for this is that if we have a column marked only for write, we can't at the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc). (Could potentially result in wrong values inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the not needed item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up too long lines where found.
- Moved some variable declarations to the start of their functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to do comparison with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
21 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

      static handler *myisam_create_handler(TABLE_SHARE *table)
      {
        return new ha_myisam(table);
      }

  to:

      static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
      {
        return new (mem_root) ha_myisam(table);
      }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions to remember any hidden primary key so it is able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
(This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we actually need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- To call field->val() you must have the corresponding bit set in table->read_set. To call field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
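The MEM_ROOT allocation convention described above for xxxx_create_handler() boils down to C++ placement new into an arena. The following is a minimal sketch under invented names; the Arena and Handler types are stand-ins for illustration, not MySQL's MEM_ROOT or handler classes:

```cpp
#include <cstddef>
#include <new>

// Minimal bump-pointer arena standing in for MEM_ROOT.
struct Arena {
  alignas(std::max_align_t) unsigned char buf[1024];
  std::size_t used = 0;

  void *alloc(std::size_t n) {
    // Round the request up to max alignment so any object fits.
    const std::size_t a = alignof(std::max_align_t);
    const std::size_t aligned = (n + a - 1) & ~(a - 1);
    if (used + aligned > sizeof(buf))
      throw std::bad_alloc();
    void *p = buf + used;
    used += aligned;
    return p;
  }
};

// Class-specific placement operator new routes allocation into the
// arena; this is what 'new (mem_root) ha_myisam(table)' does in spirit.
struct Handler {
  int id;
  explicit Handler(int i) : id(i) {}
  static void *operator new(std::size_t n, Arena *a) { return a->alloc(n); }
  static void operator delete(void *, Arena *) {}  // matching placement delete
};

Handler *create_handler(Arena *arena, int id) {
  return new (arena) Handler(id);  // allocated in the arena, not on the heap
}
```

The point of the convention is lifetime: the handler's memory is freed when the arena is, so the factory never leaks even if later initialization fails.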
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and in other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to the other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether its value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values being inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB with REPLACE, where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up too-long lines where found.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
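The compare_records() limitation described above (a column marked only for write has no known old value at the MySQL level) can be shown with a small sketch. The can_skip_update() helper and its types are hypothetical, not MySQL's implementation:

```cpp
#include <array>
#include <bitset>
#include <cstddef>

constexpr std::size_t kCols = 3;
using ColSet = std::bitset<kCols>;
using RowVals = std::array<int, kCols>;

// An update can only be skipped as a no-op when every written column
// was also read (so its old value is known) and none of them changed.
bool can_skip_update(const RowVals &old_row, const RowVals &new_row,
                     const ColSet &read_set, const ColSet &write_set) {
  // A bit in write_set outside read_set means the old value is unknown:
  // we must conservatively assume the column changed and do the update.
  if ((write_set & ~read_set).any())
    return false;
  for (std::size_t i = 0; i < kCols; ++i)
    if (write_set.test(i) && old_row[i] != new_row[i])
      return false;  // a readable written column actually changed
  return true;
}
```

Before this changeset the question never arose, because every written column was implicitly readable; the tighter bitmaps make the first check necessary.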
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
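The tmp_use_all_columns() / tmp_restore_column_map() pair mentioned earlier is a save-and-restore pattern: swap the table's active map pointer to the all-columns bitmap, keep the old pointer, and put it back afterwards. A toy version under invented names (not MySQL's actual my_bitmap signatures):

```cpp
#include <bitset>
#include <cstddef>

constexpr std::size_t kCols = 4;
using ColSet = std::bitset<kCols>;

// Toy stand-ins for TABLE's bitmap pointers; not MySQL's types.
struct Table {
  ColSet all_set{(1u << kCols) - 1};  // like TABLE_SHARE::all_set
  ColSet def_read_set;                // the statement's real read map
  ColSet *read_set = &def_read_set;   // active map the handler consults
};

// Mirrors tmp_use_all_columns(): point the active map at all_set and
// hand back the previous map so the caller can restore it.
ColSet *use_all_columns(Table *t, ColSet **map) {
  ColSet *old = *map;
  *map = &t->all_set;
  return old;
}

// Mirrors tmp_restore_column_map(): put the saved map back.
void restore_column_map(ColSet **map, ColSet *old) { *map = old; }
```

Forgetting the restore call is the equivalent of leaking the temporary map, which is why the dbug_ variants exist to catch such mistakes in debug builds.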
20 years ago
20 years ago
21 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. 
(This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. - Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. 
- The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. 
(The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). - table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. 
The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. 
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
21 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. 
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. - New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. 
Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions in remembering any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact.
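The mem_root allocation pattern above relies on a placement operator new that draws from an arena. A minimal sketch of the idea with a toy bump allocator standing in for MEM_ROOT (illustrative only; the real MEM_ROOT lives in mysys, and 'example_create_handler' is a hypothetical name):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy bump allocator in the role of MEM_ROOT (no overflow handling).
struct MemRoot {
  alignas(std::max_align_t) char buf[4096];
  size_t used = 0;
  void *alloc(size_t n) {
    n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
    void *p = buf + used;
    used += n;
    return p;
  }
};

// Placement new so 'new (mem_root) SomeHandler(...)' allocates from the arena.
void *operator new(size_t size, MemRoot *root) { return root->alloc(size); }
void operator delete(void *, MemRoot *) {}  // matching form for ctor unwinding

struct ToyHandler {
  int table_no;
  explicit ToyHandler(int n) : table_no(n) {}
};

// Factory in the style of the new xxxx_create_handler() signature.
ToyHandler *example_create_handler(int table_no, MemRoot *mem_root) {
  return new (mem_root) ToyHandler(table_no);
}
```

The point of the change is that the handler object's lifetime is tied to the arena, so no separate delete is needed.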
  (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
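The COUNT(*) decision described above (ask the engine via records() only when HA_HAS_RECORDS is set, otherwise fall back to a scan) can be sketched as follows. The flag value and the scan stub are simplified assumptions; the real constants are bit flags in handler.h:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative flag value; the real flags live in handler.h.
const uint64_t HA_HAS_RECORDS = 1ULL << 0;

struct ToyHandler {
  uint64_t flags;
  uint64_t stored_rows;
  uint64_t table_flags() const { return flags; }
  uint64_t records() const { return stored_rows; }  // fast engine-side count
  // Fallback: count rows one by one (stubbed for the sketch).
  uint64_t count_by_scan() const { return stored_rows; }
};

uint64_t optimize_count_star(const ToyHandler &h) {
  if (h.table_flags() & HA_HAS_RECORDS)
    return h.records();      // engine can answer COUNT(*) directly
  return h.count_by_scan();  // otherwise read every row
}
```

HA_STATS_RECORDS_IS_EXACT is the complementary promise for the cached stats.records value, which saves even the records() call.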
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and in other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of an auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to the other marked columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set.
  The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (This could potentially result in wrong values being inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
  This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table)
post-review change - use pointer instead of copy on the stack.
WL#1034 (Internal CRON)
This patch adds an INFORMATION_SCHEMA.EVENTS table with the following format:
EVENT_CATALOG  - MYSQL_TYPE_STRING (always NULL)
EVENT_SCHEMA   - MYSQL_TYPE_STRING (the database)
EVENT_NAME     - MYSQL_TYPE_STRING (the name)
DEFINER        - MYSQL_TYPE_STRING (user@host)
EVENT_BODY     - MYSQL_TYPE_STRING (the body from mysql.event)
EVENT_TYPE     - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING")
EXECUTE_AT     - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME", otherwise NULL)
INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING, otherwise NULL)
INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING, otherwise NULL)
SQL_MODE       - MYSQL_TYPE_STRING (for now NULL)
STARTS         - MYSQL_TYPE_TIMESTAMP (starts from mysql.event)
ENDS           - MYSQL_TYPE_TIMESTAMP (ends from mysql.event)
STATUS         - MYSQL_TYPE_STRING (ENABLED | DISABLED)
ON_COMPLETION  - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE)
CREATED        - MYSQL_TYPE_TIMESTAMP
LAST_ALTERED   - MYSQL_TYPE_TIMESTAMP
LAST_EXECUTED  - MYSQL_TYPE_TIMESTAMP
EVENT_COMMENT  - MYSQL_TYPE_STRING
SQL_MODE is NULL for now, because the value is still not stored in mysql.event. Support will be added as a fix for another bug.
This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern]:
1. SHOW EVENTS always shows only the events of the current user; because the PK of mysql.event is (definer, db, name), several users may have an event with the same name -> no information disclosure.
2. SHOW FULL EVENTS shows the events (in the current db, as with SHOW EVENTS) of all users. The user has to have the PROCESS privilege; if not, SHOW FULL EVENTS behaves like SHOW EVENTS.
3. If [FROM db] is specified then this db is considered.
4. Event names can be filtered with a LIKE pattern.
SHOW EVENTS returns a table with the following columns, which are a subset of the data returned by SELECT * FROM I_S.EVENTS:
Db, Name, Definer, Type, Execute at, Interval value, Interval field, Starts, Ends, Status
20 years ago
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.
Changes that require code changes in other storage engines (note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite):
- New optional handler function introduced: reset(). This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns. The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() by a call to the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching in this call. As the signal may be sent several times, it's probably best to set a flag in the signal handler and redo the caching on the next read_row() / write_row() if the flag was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see if a field is used in the query.
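The read_set contract above can be sketched with a toy example. This is not the server's MY_BITMAP API; std::bitset, ToyRow and MAX_COLS are stand-ins invented for illustration, but the shape is that of an engine honoring the read map (as an engine setting HA_PARTIAL_COLUMN_READ would):

```cpp
#include <bitset>
#include <cassert>

// Toy stand-in for table->read_set. The real server uses MY_BITMAP and
// the my_bitmap.c functions; std::bitset keeps this sketch self-contained.
constexpr int MAX_COLS = 8;
using ColumnMap = std::bitset<MAX_COLS>;

struct ToyRow { int cols[MAX_COLS]; };

// A read_row() that fetches only the columns the upper layer asked for;
// columns outside the read map are left untouched (zeroed here).
ToyRow read_row(const ToyRow& stored, const ColumnMap& read_set) {
    ToyRow out = {};
    for (int i = 0; i < MAX_COLS; ++i)
        if (read_set.test(i))
            out.cols[i] = stored.cols[i];
    return out;
}
```

An engine that reads every column regardless would still be correct, just slower; the bitmap only tells it which columns it may skip.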
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:
    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);
  and similarly for the write map:
    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store(...);
    dbug_tmp_restore_column_map(table->write_set, old_map);
  If this is not done, you will sooner or later hit a DBUG_ASSERT in the Field store() / val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This requires some trivial variable-name changes in the extra() function.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:
    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }
  ->
    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }
- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions so it remembers any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the no-longer-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns for which we need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
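The create-handler change described above (allocating the handler via placement new into a caller-supplied MEM_ROOT) can be mimicked in a self-contained sketch. The Arena and ToyHandler types here are invented stand-ins, not MySQL's MEM_ROOT or handler classes; only the allocation shape matches the message's myisam_create_handler example:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy bump allocator standing in for MEM_ROOT (no freeing of individual
// objects; everything lives until the arena itself goes away).
struct Arena {
    alignas(std::max_align_t) unsigned char buf[1024];
    std::size_t used = 0;
    void* alloc(std::size_t n) {
        // Round up to keep every returned pointer suitably aligned.
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        assert(used + n <= sizeof(buf));  // sketch: no overflow handling
        void* p = buf + used;
        used += n;
        return p;
    }
};

struct ToyHandler {
    int id;
    explicit ToyHandler(int i) : id(i) {}
};

// Mirrors the new create-handler shape: construct the handler object
// inside the caller-supplied arena via placement new.
ToyHandler* create_handler(Arena* arena, int id) {
    return new (arena->alloc(sizeof(ToyHandler))) ToyHandler(id);
}
```

The point of the real change is the same as here: the caller, not the engine, decides which memory area the handler object lives in.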
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and in other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The TABLE object also has two bitmap pointers, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse sub queries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but not signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of every index. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether the value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)
Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values inserted in table handlers relying on the old column maps or on field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation.
This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. - For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. 
- Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
21 years ago
21 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite.)

- New optional handler function introduced: reset(). This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns. The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() by a call to the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching in this call. As the signal may be sent several times, it's probably best to set a variable in the signal handler and redo the caching on the next read_row() / write_row() if the variable was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);

  and similarly for the write map:

    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store();
    dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This required some trivial variable renames in the extra() function.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  ->

    static handler *myisam_create_handler(TABLE_SHARE *table,
                                          MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precautions to remember any hidden primary key so it is able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns for which we need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now has the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for a few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps() to install the normal read and write column bitmaps without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() and Field::val() functions, but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write, we can't know at the MySQL level if the value changed or not. The reason this worked before was that MySQL marked all columns to be written as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we send the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). (Could potentially result in wrong values being inserted in table handlers relying on the old column maps or field->set_query_id being correct.) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables.
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up some too-long lines.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache() as this is now done automatically in ha_reset().
- Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table)

Post-review change: use a pointer instead of a copy on the stack.

WL#1034 (Internal CRON)

This patch adds the INFORMATION_SCHEMA.EVENTS table with the following format:
  EVENT_CATALOG   - MYSQL_TYPE_STRING    (always NULL)
  EVENT_SCHEMA    - MYSQL_TYPE_STRING    (the database)
  EVENT_NAME      - MYSQL_TYPE_STRING    (the name)
  DEFINER         - MYSQL_TYPE_STRING    (user@host)
  EVENT_BODY      - MYSQL_TYPE_STRING    (the body from mysql.event)
  EVENT_TYPE      - MYSQL_TYPE_STRING    ("ONE TIME" | "RECURRING")
  EXECUTE_AT      - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME", otherwise NULL)
  INTERVAL_VALUE  - MYSQL_TYPE_LONG      (set for RECURRING, otherwise NULL)
  INTERVAL_FIELD  - MYSQL_TYPE_STRING    (set for RECURRING, otherwise NULL)
  SQL_MODE        - MYSQL_TYPE_STRING    (for now NULL)
  STARTS          - MYSQL_TYPE_TIMESTAMP (starts from mysql.event)
  ENDS            - MYSQL_TYPE_TIMESTAMP (ends from mysql.event)
  STATUS          - MYSQL_TYPE_STRING    (ENABLED | DISABLED)
  ON_COMPLETION   - MYSQL_TYPE_STRING    (NOT PRESERVE | PRESERVE)
  CREATED         - MYSQL_TYPE_TIMESTAMP
  LAST_ALTERED    - MYSQL_TYPE_TIMESTAMP
  LAST_EXECUTED   - MYSQL_TYPE_TIMESTAMP
  EVENT_COMMENT   - MYSQL_TYPE_STRING

SQL_MODE is NULL for now, because the value is still not stored in mysql.event. Support will be added as a fix for another bug.

This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern]:
1. SHOW EVENTS always shows only the events of the current user. Because the PK of mysql.event is (definer, db, name), several users may have an event with the same name -> no information disclosure.
2. SHOW FULL EVENTS shows the events (in the current db, as with SHOW EVENTS) of all users. The user has to have the PROCESS privilege; if not, SHOW FULL EVENTS behaves like SHOW EVENTS.
3. If [FROM db] is specified then this db is considered.
4. Event names can be filtered with a LIKE pattern.

SHOW EVENTS returns a table with the following columns, which are a subset of the data returned by SELECT * FROM I_S.EVENTS:
  Db, Name, Definer, Type, Execute at, Interval value, Interval field, Starts, Ends, Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
SHOW EVENTS returns table with the following columns, which are subset of the data which is returned by SELECT * FROM I_S.EVENTS Db Name Definer Type Execute at Interval value Interval field Starts Ends Status
20 years ago
fix for bug#16642 (Events: No INFORMATION_SCHEMA.EVENTS table) post-review change - use pointer instead of copy on the stack. WL#1034 (Internal CRON) This patch adds INFORMATION_SCHEMA.EVENTS table with the following format: EVENT_CATALOG - MYSQL_TYPE_STRING (Always NULL) EVENT_SCHEMA - MYSQL_TYPE_STRING (the database) EVENT_NAME - MYSQL_TYPE_STRING (the name) DEFINER - MYSQL_TYPE_STRING (user@host) EVENT_BODY - MYSQL_TYPE_STRING (the body from mysql.event) EVENT_TYPE - MYSQL_TYPE_STRING ("ONE TIME" | "RECURRING") EXECUTE_AT - MYSQL_TYPE_TIMESTAMP (set for "ONE TIME" otherwise NULL) INTERVAL_VALUE - MYSQL_TYPE_LONG (set for RECURRING otherwise NULL) INTERVAL_FIELD - MYSQL_TYPE_STRING (set for RECURRING otherwise NULL) SQL_MODE - MYSQL_TYPE_STRING (for now NULL) STARTS - MYSQL_TYPE_TIMESTAMP (starts from mysql.event) ENDS - MYSQL_TYPE_TIMESTAMP (ends from mysql.event) STATUS - MYSQL_TYPE_STRING (ENABLED | DISABLED) ON_COMPLETION - MYSQL_TYPE_STRING (NOT PRESERVE | PRESERVE) CREATED - MYSQL_TYPE_TIMESTAMP LAST_ALTERED - MYSQL_TYPE_TIMESTAMP LAST_EXECUTED - MYSQL_TYPE_TIMESTAMP EVENT_COMMENT - MYSQL_TYPE_STRING SQL_MODE is NULL for now, because the value is still not stored in mysql.event . Support will be added as a fix for another bug. This patch also adds SHOW [FULL] EVENTS [FROM db] [LIKE pattern] 1. SHOW EVENTS shows always only the events on the same user, because the PK of mysql.event is (definer, db, name) several users may have event with the same name -> no information disclosure. 2. SHOW FULL EVENTS - shows the events (in the current db as SHOW EVENTS) of all users. The user has to have PROCESS privilege, if not then SHOW FULL EVENTS behave like SHOW EVENTS. 3. If [FROM db] is specified then this db is considered. 4. Event names can be filtered with LIKE pattern. 
/* Copyright (C) 2000-2004 MySQL AB

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA */

/* Functions for listing databases, tables or fields */

#include "mysql_priv.h"
#include "sql_select.h"                         // For select_describe
#include "sql_show.h"
#include "repl_failsafe.h"
#include "sp.h"
#include "sp_head.h"
#include "sql_trigger.h"
#include "authors.h"
#include "contributors.h"
#include "events.h"
#include "event_timed.h"
#include <my_dir.h>

#ifdef WITH_PARTITION_STORAGE_ENGINE
#include "ha_partition.h"
#endif

enum enum_i_s_events_fields
{
  ISE_EVENT_CATALOG= 0,
  ISE_EVENT_SCHEMA,
  ISE_EVENT_NAME,
  ISE_DEFINER,
  ISE_EVENT_BODY,
  ISE_EVENT_DEFINITION,
  ISE_EVENT_TYPE,
  ISE_EXECUTE_AT,
  ISE_INTERVAL_VALUE,
  ISE_INTERVAL_FIELD,
  ISE_SQL_MODE,
  ISE_STARTS,
  ISE_ENDS,
  ISE_STATUS,
  ISE_ON_COMPLETION,
  ISE_CREATED,
  ISE_LAST_ALTERED,
  ISE_LAST_EXECUTED,
  ISE_EVENT_COMMENT
};

static const char *grant_names[]={
  "select","insert","update","delete","create","drop","reload","shutdown",
  "process","file","grant","references","index","alter"};

#ifndef NO_EMBEDDED_ACCESS_CHECKS
static TYPELIB grant_types = { sizeof(grant_names)/sizeof(char **),
                               "grant_types",
                               grant_names, NULL};
#endif

static void store_key_options(THD *thd, String *packet, TABLE *table,
                              KEY *key_info);

/***************************************************************************
** List all table types supported
***************************************************************************/

static my_bool show_handlerton(THD *thd, st_plugin_int *plugin,
                               void *arg)
{
  handlerton *default_type= (handlerton *) arg;
  Protocol *protocol= thd->protocol;
  handlerton *hton= (handlerton *) plugin->data;

  if (!(hton->flags & HTON_HIDDEN))
  {
    protocol->prepare_for_resend();
    protocol->store(plugin->name.str, plugin->name.length,
                    system_charset_info);
    const char *option_name= show_comp_option_name[(int) hton->state];

    if (hton->state == SHOW_OPTION_YES && default_type == hton)
      option_name= "DEFAULT";
    protocol->store(option_name, system_charset_info);
    protocol->store(plugin->plugin->descr, system_charset_info);
    protocol->store(hton->commit ? "YES" : "NO", system_charset_info);
    protocol->store(hton->prepare ? "YES" : "NO", system_charset_info);
    protocol->store(hton->savepoint_set ? "YES" : "NO", system_charset_info);

    return protocol->write() ? 1 : 0;
  }
  return 0;
}

bool mysqld_show_storage_engines(THD *thd)
{
  List<Item> field_list;
  Protocol *protocol= thd->protocol;
  DBUG_ENTER("mysqld_show_storage_engines");

  field_list.push_back(new Item_empty_string("Engine",10));
  field_list.push_back(new Item_empty_string("Support",10));
  field_list.push_back(new Item_empty_string("Comment",80));
  field_list.push_back(new Item_empty_string("Transactions",3));
  field_list.push_back(new Item_empty_string("XA",3));
  field_list.push_back(new Item_empty_string("Savepoints",3));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  if (plugin_foreach(thd, show_handlerton,
                     MYSQL_STORAGE_ENGINE_PLUGIN, thd->variables.table_type))
    DBUG_RETURN(TRUE);

  send_eof(thd);
  DBUG_RETURN(FALSE);
}

static int make_version_string(char *buf, int buf_length, uint version)
{
  return my_snprintf(buf, buf_length, "%d.%d", version>>8, version&0xff);
}
static my_bool show_plugins(THD *thd, st_plugin_int *plugin,
                            void *arg)
{
  TABLE *table= (TABLE*) arg;
  struct st_mysql_plugin *plug= plugin->plugin;
  Protocol *protocol= thd->protocol;
  CHARSET_INFO *cs= system_charset_info;
  char version_buf[20];

  restore_record(table, s->default_values);

  table->field[0]->store(plugin->name.str, plugin->name.length, cs);

  table->field[1]->store(version_buf,
        make_version_string(version_buf, sizeof(version_buf), plug->version),
        cs);

  switch (plugin->state)
  {
  /* case PLUGIN_IS_FREED: does not happen */
  case PLUGIN_IS_DELETED:
    table->field[2]->store(STRING_WITH_LEN("DELETED"), cs);
    break;
  case PLUGIN_IS_UNINITIALIZED:
    table->field[2]->store(STRING_WITH_LEN("INACTIVE"), cs);
    break;
  case PLUGIN_IS_READY:
    table->field[2]->store(STRING_WITH_LEN("ACTIVE"), cs);
    break;
  default:
    DBUG_ASSERT(0);
  }

  table->field[3]->store(plugin_type_names[plug->type].str,
                         plugin_type_names[plug->type].length,
                         cs);
  table->field[4]->store(version_buf,
        make_version_string(version_buf, sizeof(version_buf),
                            *(uint *)plug->info), cs);

  if (plugin->plugin_dl)
  {
    table->field[5]->store(plugin->plugin_dl->dl.str,
                           plugin->plugin_dl->dl.length, cs);
    table->field[5]->set_notnull();
    table->field[6]->store(version_buf,
          make_version_string(version_buf, sizeof(version_buf),
                              plugin->plugin_dl->version),
          cs);
    table->field[6]->set_notnull();
  }
  else
  {
    table->field[5]->set_null();
    table->field[6]->set_null();
  }

  if (plug->author)
  {
    table->field[7]->store(plug->author, strlen(plug->author), cs);
    table->field[7]->set_notnull();
  }
  else
    table->field[7]->set_null();

  if (plug->descr)
  {
    table->field[8]->store(plug->descr, strlen(plug->descr), cs);
    table->field[8]->set_notnull();
  }
  else
    table->field[8]->set_null();

  return schema_table_store_record(thd, table);
}

int fill_plugins(THD *thd, TABLE_LIST *tables, COND *cond)
{
  DBUG_ENTER("fill_plugins");
  TABLE *table= tables->table;

  if (plugin_foreach(thd, show_plugins, MYSQL_ANY_PLUGIN, table))
    DBUG_RETURN(1);

  DBUG_RETURN(0);
}

/***************************************************************************
** List all Authors.
** If you can update it, you get to be in it :)
***************************************************************************/

bool mysqld_show_authors(THD *thd)
{
  List<Item> field_list;
  Protocol *protocol= thd->protocol;
  DBUG_ENTER("mysqld_show_authors");

  field_list.push_back(new Item_empty_string("Name",40));
  field_list.push_back(new Item_empty_string("Location",40));
  field_list.push_back(new Item_empty_string("Comment",80));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  show_table_authors_st *authors;
  for (authors= show_table_authors; authors->name; authors++)
  {
    protocol->prepare_for_resend();
    protocol->store(authors->name, system_charset_info);
    protocol->store(authors->location, system_charset_info);
    protocol->store(authors->comment, system_charset_info);
    if (protocol->write())
      DBUG_RETURN(TRUE);
  }
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

/***************************************************************************
** List all Contributors.
** Please get permission before updating
***************************************************************************/

bool mysqld_show_contributors(THD *thd)
{
  List<Item> field_list;
  Protocol *protocol= thd->protocol;
  DBUG_ENTER("mysqld_show_contributors");

  field_list.push_back(new Item_empty_string("Name",40));
  field_list.push_back(new Item_empty_string("Location",40));
  field_list.push_back(new Item_empty_string("Comment",80));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  show_table_contributors_st *contributors;
  for (contributors= show_table_contributors; contributors->name;
       contributors++)
  {
    protocol->prepare_for_resend();
    protocol->store(contributors->name, system_charset_info);
    protocol->store(contributors->location, system_charset_info);
    protocol->store(contributors->comment, system_charset_info);
    if (protocol->write())
      DBUG_RETURN(TRUE);
  }
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

/***************************************************************************
  List all privileges supported
***************************************************************************/

struct show_privileges_st {
  const char *privilege;
  const char *context;
  const char *comment;
};

static struct show_privileges_st sys_privileges[]=
{
  {"Alter", "Tables",  "To alter the table"},
  {"Alter routine", "Functions,Procedures",  "To alter or drop stored functions/procedures"},
  {"Create", "Databases,Tables,Indexes",  "To create new databases and tables"},
  {"Create routine","Functions,Procedures","To use CREATE FUNCTION/PROCEDURE"},
  {"Create temporary tables","Databases","To use CREATE TEMPORARY TABLE"},
  {"Create view", "Tables",  "To create new views"},
  {"Create user", "Server Admin",  "To create new users"},
  {"Delete", "Tables",  "To delete existing rows"},
  {"Drop", "Databases,Tables", "To drop databases, tables, and views"},
  {"Event","Server Admin","To create, alter, drop and execute events"},
  {"Execute", "Functions,Procedures", "To execute stored routines"},
  {"File", "File access on server", "To read and write files on the server"},
  {"Grant option",  "Databases,Tables,Functions,Procedures", "To give to other users those privileges you possess"},
  {"Index", "Tables",  "To create or drop indexes"},
  {"Insert", "Tables",  "To insert data into tables"},
  {"Lock tables","Databases","To use LOCK TABLES (together with SELECT privilege)"},
  {"Process", "Server Admin", "To view the plain text of currently executing queries"},
  {"References", "Databases,Tables", "To have references on tables"},
  {"Reload", "Server Admin", "To reload or refresh tables, logs and privileges"},
  {"Replication client","Server Admin","To ask where the slave or master servers are"},
  {"Replication slave","Server Admin","To read binary log events from the master"},
  {"Select", "Tables",  "To retrieve rows from table"},
  {"Show databases","Server Admin","To see all databases with SHOW DATABASES"},
  {"Show view","Tables","To see views with SHOW CREATE VIEW"},
  {"Shutdown","Server Admin", "To shut down the server"},
  {"Super","Server Admin","To use KILL thread, SET GLOBAL, CHANGE MASTER, etc."},
  {"Trigger","Tables", "To use triggers"},
  {"Update", "Tables",  "To update existing rows"},
  {"Usage","Server Admin","No privileges - allow connect only"},
  {NullS, NullS, NullS}
};

bool mysqld_show_privileges(THD *thd)
{
  List<Item> field_list;
  Protocol *protocol= thd->protocol;
  DBUG_ENTER("mysqld_show_privileges");

  field_list.push_back(new Item_empty_string("Privilege",10));
  field_list.push_back(new Item_empty_string("Context",15));
  field_list.push_back(new Item_empty_string("Comment",NAME_LEN));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  show_privileges_st *privilege= sys_privileges;
  for (privilege= sys_privileges; privilege->privilege ; privilege++)
  {
    protocol->prepare_for_resend();
    protocol->store(privilege->privilege, system_charset_info);
    protocol->store(privilege->context, system_charset_info);
    protocol->store(privilege->comment, system_charset_info);
    if (protocol->write())
      DBUG_RETURN(TRUE);
  }
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

/***************************************************************************
  List all column types
***************************************************************************/

struct show_column_type_st
{
  const char *type;
  uint size;
  const char *min_value;
  const char *max_value;
  uint precision;
  uint scale;
  const char *nullable;
  const char *auto_increment;
  const char *unsigned_attr;
  const char *zerofill;
  const char *searchable;
  const char *case_sensitivity;
  const char *default_value;
  const char *comment;
};

/* TODO: Add remaining types */

static struct show_column_type_st sys_column_types[]=
{
  {"tinyint",
    1, "-128", "127", 0, 0, "YES", "YES",
    "NO", "YES", "YES", "NO", "NULL,0",
    "A very small integer"},
  {"tinyint unsigned",
    1, "0", "255", 0, 0, "YES", "YES",
    "YES", "YES", "YES", "NO", "NULL,0",
    "A very small integer"},
};

bool mysqld_show_column_types(THD *thd)
{
  List<Item> field_list;
  Protocol *protocol= thd->protocol;
  DBUG_ENTER("mysqld_show_column_types");

  field_list.push_back(new Item_empty_string("Type",30));
  field_list.push_back(new Item_int("Size",(longlong) 1,21));
  field_list.push_back(new Item_empty_string("Min_Value",20));
  field_list.push_back(new Item_empty_string("Max_Value",20));
  field_list.push_back(new Item_return_int("Prec", 4, MYSQL_TYPE_SHORT));
  field_list.push_back(new Item_return_int("Scale", 4, MYSQL_TYPE_SHORT));
  field_list.push_back(new Item_empty_string("Nullable",4));
  field_list.push_back(new Item_empty_string("Auto_Increment",4));
  field_list.push_back(new Item_empty_string("Unsigned",4));
  field_list.push_back(new Item_empty_string("Zerofill",4));
  field_list.push_back(new Item_empty_string("Searchable",4));
  field_list.push_back(new Item_empty_string("Case_Sensitive",4));
  field_list.push_back(new Item_empty_string("Default",NAME_LEN));
  field_list.push_back(new Item_empty_string("Comment",NAME_LEN));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  /* TODO: Change the loop to not use 'i' */
  for (uint i=0; i < sizeof(sys_column_types)/sizeof(sys_column_types[0]); i++)
  {
    protocol->prepare_for_resend();
    protocol->store(sys_column_types[i].type, system_charset_info);
    protocol->store((ulonglong) sys_column_types[i].size);
    protocol->store(sys_column_types[i].min_value, system_charset_info);
    protocol->store(sys_column_types[i].max_value, system_charset_info);
    protocol->store_short((longlong) sys_column_types[i].precision);
    protocol->store_short((longlong) sys_column_types[i].scale);
    protocol->store(sys_column_types[i].nullable, system_charset_info);
    protocol->store(sys_column_types[i].auto_increment, system_charset_info);
    protocol->store(sys_column_types[i].unsigned_attr, system_charset_info);
    protocol->store(sys_column_types[i].zerofill, system_charset_info);
    protocol->store(sys_column_types[i].searchable, system_charset_info);
    protocol->store(sys_column_types[i].case_sensitivity, system_charset_info);
    protocol->store(sys_column_types[i].default_value, system_charset_info);
    protocol->store(sys_column_types[i].comment, system_charset_info);
    if (protocol->write())
      DBUG_RETURN(TRUE);
  }
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

int
mysql_find_files(THD *thd, List<char> *files, const char *db, const char *path,
                 const char *wild, bool dir)
{
  uint i;
  char *ext;
  MY_DIR *dirp;
  FILEINFO *file;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
  uint col_access=thd->col_access;
#endif
  TABLE_LIST table_list;
  char tbbuff[FN_REFLEN];
  DBUG_ENTER("mysql_find_files");

  if (wild && !wild[0])
    wild=0;

  bzero((char*) &table_list,sizeof(table_list));

  if (!(dirp = my_dir(path,MYF(dir ? MY_WANT_STAT : 0))))
  {
    if (my_errno == ENOENT)
      my_error(ER_BAD_DB_ERROR, MYF(ME_BELL+ME_WAITTANG), db);
    else
      my_error(ER_CANT_READ_DIR, MYF(ME_BELL+ME_WAITTANG), path, my_errno);
    DBUG_RETURN(-1);
  }

  VOID(tablename_to_filename(tmp_file_prefix, tbbuff, sizeof(tbbuff)));
  for (i=0 ; i < (uint) dirp->number_off_files ; i++)
  {
    char uname[NAME_LEN*3+1];                   /* Unencoded name */
    file=dirp->dir_entry+i;
    if (dir)
    {                                           /* Return databases */
      if ((file->name[0] == '.' &&
           ((file->name[1] == '.' && file->name[2] == '\0') ||
            file->name[1] == '\0')))
        continue;                               /* . or .. */
#ifdef USE_SYMDIR
      char *ext;
      char buff[FN_REFLEN];
      if (my_use_symdir && !strcmp(ext=fn_ext(file->name), ".sym"))
      {
        /* Only show the sym file if it points to a directory */
        char *end;
        *ext=0;                                 /* Remove extension */
        unpack_dirname(buff, file->name);
        end= strend(buff);
        if (end != buff && end[-1] == FN_LIBCHAR)
          end[-1]= 0;                           // Remove end FN_LIBCHAR
        if (!my_stat(buff, file->mystat, MYF(0)))
          continue;
      }
#endif
      if (!MY_S_ISDIR(file->mystat->st_mode))
        continue;
      VOID(filename_to_tablename(file->name, uname, sizeof(uname)));
      if (wild && wild_compare(uname, wild, 0))
        continue;
      file->name= uname;
    }
    else
    {
      // Return only .frm files which aren't temp files.
      if (my_strcasecmp(system_charset_info, ext=fn_rext(file->name),reg_ext) ||
          is_prefix(file->name,tbbuff))
        continue;
      *ext=0;
      VOID(filename_to_tablename(file->name, uname, sizeof(uname)));
      file->name= uname;
      if (wild)
      {
        if (lower_case_table_names)
        {
          if (wild_case_compare(files_charset_info, file->name, wild))
            continue;
        }
        else if (wild_compare(file->name,wild,0))
          continue;
      }
    }
#ifndef NO_EMBEDDED_ACCESS_CHECKS
    /* Don't show tables where we don't have any privileges */
    if (db && !(col_access & TABLE_ACLS))
    {
      table_list.db= (char*) db;
      table_list.db_length= strlen(db);
      table_list.table_name= file->name;
      table_list.table_name_length= strlen(file->name);
      table_list.grant.privilege=col_access;
      if (check_grant(thd, TABLE_ACLS, &table_list, 1, 1, 1))
        continue;
    }
#endif
    if (files->push_back(thd->strdup(file->name)))
    {
      my_dirend(dirp);
      DBUG_RETURN(-1);
    }
  }
  DBUG_PRINT("info",("found: %d files", files->elements));
  my_dirend(dirp);

  VOID(ha_find_files(thd,db,path,wild,dir,files));

  DBUG_RETURN(0);
}
bool
mysqld_show_create(THD *thd, TABLE_LIST *table_list)
{
  Protocol *protocol= thd->protocol;
  char buff[2048];
  String buffer(buff, sizeof(buff), system_charset_info);
  DBUG_ENTER("mysqld_show_create");
  DBUG_PRINT("enter",("db: %s  table: %s",table_list->db,
                      table_list->table_name));

  /* We want to preserve the tree for views. */
  thd->lex->view_prepare_mode= TRUE;

  /* Only one table for now, but VIEW can involve several tables */
  if (open_normal_and_derived_tables(thd, table_list, 0))
  {
    if (!table_list->view || thd->net.last_errno != ER_VIEW_INVALID)
      DBUG_RETURN(TRUE);

    /*
      Clear all messages with 'error' level status and
      issue a warning with 'warning' level status in
      case of invalid view and last error is ER_VIEW_INVALID
    */
    mysql_reset_errors(thd, true);
    thd->clear_error();

    push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                        ER_VIEW_INVALID,
                        ER(ER_VIEW_INVALID),
                        table_list->view_db.str,
                        table_list->view_name.str);
  }

  /* TODO: add environment variables show when it becomes possible */
  if (thd->lex->only_view && !table_list->view)
  {
    my_error(ER_WRONG_OBJECT, MYF(0),
             table_list->db, table_list->table_name, "VIEW");
    DBUG_RETURN(TRUE);
  }

  buffer.length(0);
  if ((table_list->view ?
       view_store_create_info(thd, table_list, &buffer) :
       store_create_info(thd, table_list, &buffer, NULL)))
    DBUG_RETURN(TRUE);

  List<Item> field_list;
  if (table_list->view)
  {
    field_list.push_back(new Item_empty_string("View",NAME_LEN));
    field_list.push_back(new Item_empty_string("Create View",
                                               max(buffer.length(),1024)));
  }
  else
  {
    field_list.push_back(new Item_empty_string("Table",NAME_LEN));
    // 1024 is to avoid confusing old clients
    field_list.push_back(new Item_empty_string("Create Table",
                                               max(buffer.length(),1024)));
  }

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);
  protocol->prepare_for_resend();
  if (table_list->view)
    protocol->store(table_list->view_name.str, system_charset_info);
  else
  {
    if (table_list->schema_table)
      protocol->store(table_list->schema_table->table_name,
                      system_charset_info);
    else
      protocol->store(table_list->table->alias, system_charset_info);
  }
  protocol->store(buffer.ptr(), buffer.length(), buffer.charset());
  if (protocol->write())
    DBUG_RETURN(TRUE);
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

bool mysqld_show_create_db(THD *thd, char *dbname,
                           HA_CREATE_INFO *create_info)
{
  Security_context *sctx= thd->security_ctx;
  int length;
  char path[FN_REFLEN];
  char buff[2048];
  String buffer(buff, sizeof(buff), system_charset_info);
#ifndef NO_EMBEDDED_ACCESS_CHECKS
  uint db_access;
#endif
  bool found_libchar;
  HA_CREATE_INFO create;
  uint create_options = create_info ? create_info->options : 0;
  Protocol *protocol=thd->protocol;
  DBUG_ENTER("mysql_show_create_db");

#ifndef NO_EMBEDDED_ACCESS_CHECKS
  if (test_all_bits(sctx->master_access, DB_ACLS))
    db_access=DB_ACLS;
  else
    db_access= (acl_get(sctx->host, sctx->ip, sctx->priv_user, dbname, 0) |
                sctx->master_access);
  if (!(db_access & DB_ACLS) && (!grant_option || check_grant_db(thd,dbname)))
  {
    my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
             sctx->priv_user, sctx->host_or_ip, dbname);
    general_log_print(thd,COM_INIT_DB,ER(ER_DBACCESS_DENIED_ERROR),
                      sctx->priv_user, sctx->host_or_ip, dbname);
    DBUG_RETURN(TRUE);
  }
#endif
  if (!my_strcasecmp(system_charset_info, dbname,
                     information_schema_name.str))
  {
    dbname= information_schema_name.str;
    create.default_table_charset= system_charset_info;
  }
  else
  {
    length= build_table_filename(path, sizeof(path), dbname, "", "");
    found_libchar= 0;
    if (length && path[length-1] == FN_LIBCHAR)
    {
      found_libchar= 1;
      path[length-1]=0;                         // remove ending '\'
    }
    if (access(path,F_OK))
    {
      my_error(ER_BAD_DB_ERROR, MYF(0), dbname);
      DBUG_RETURN(TRUE);
    }
    if (found_libchar)
      path[length-1]= FN_LIBCHAR;
    strmov(path+length, MY_DB_OPT_FILE);
    load_db_opt(thd, path, &create);
  }

  List<Item> field_list;
  field_list.push_back(new Item_empty_string("Database",NAME_LEN));
  field_list.push_back(new Item_empty_string("Create Database",1024));

  if (protocol->send_fields(&field_list,
                            Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
    DBUG_RETURN(TRUE);

  protocol->prepare_for_resend();
  protocol->store(dbname, strlen(dbname), system_charset_info);
  buffer.length(0);
  buffer.append(STRING_WITH_LEN("CREATE DATABASE "));
  if (create_options & HA_LEX_CREATE_IF_NOT_EXISTS)
    buffer.append(STRING_WITH_LEN("/*!32312 IF NOT EXISTS*/ "));
  append_identifier(thd, &buffer, dbname, strlen(dbname));

  if (create.default_table_charset)
  {
    buffer.append(STRING_WITH_LEN(" /*!40100"));
    buffer.append(STRING_WITH_LEN(" DEFAULT CHARACTER SET "));
    buffer.append(create.default_table_charset->csname);
    if (!(create.default_table_charset->state & MY_CS_PRIMARY))
    {
      buffer.append(STRING_WITH_LEN(" COLLATE "));
      buffer.append(create.default_table_charset->name);
    }
    buffer.append(STRING_WITH_LEN(" */"));
  }
  protocol->store(buffer.ptr(), buffer.length(), buffer.charset());

  if (protocol->write())
    DBUG_RETURN(TRUE);
  send_eof(thd);
  DBUG_RETURN(FALSE);
}

/****************************************************************************
  Return only fields for API mysql_list_fields
  Use "show table wildcard" in mysql instead of this
****************************************************************************/

void
mysqld_list_fields(THD *thd, TABLE_LIST *table_list, const char *wild)
{
  TABLE *table;
  DBUG_ENTER("mysqld_list_fields");
  DBUG_PRINT("enter",("table: %s",table_list->table_name));

  if (open_normal_and_derived_tables(thd, table_list, 0))
    DBUG_VOID_RETURN;
  table= table_list->table;

  List<Item> field_list;

  Field **ptr,*field;
  for (ptr=table->field ; (field= *ptr); ptr++)
  {
    if (!wild || !wild[0] ||
        !wild_case_compare(system_charset_info, field->field_name,wild))
      field_list.push_back(new Item_field(field));
  }
  restore_record(table, s->default_values);     // Get empty record
  table->use_all_columns();
  if (thd->protocol->send_fields(&field_list, Protocol::SEND_DEFAULTS |
                                              Protocol::SEND_EOF))
    DBUG_VOID_RETURN;
  thd->protocol->flush();
  DBUG_VOID_RETURN;
}

int
mysqld_dump_create_info(THD *thd, TABLE_LIST *table_list, int fd)
{
  Protocol *protocol= thd->protocol;
  String *packet= protocol->storage_packet();
  DBUG_ENTER("mysqld_dump_create_info");
  DBUG_PRINT("enter",("table: %s",table_list->table->s->table_name.str));

  protocol->prepare_for_resend();
  if (store_create_info(thd, table_list, packet, NULL))
    DBUG_RETURN(-1);

  if (fd < 0)
  {
    if (protocol->write())
      DBUG_RETURN(-1);
    protocol->flush();
  }
  else
  {
    if (my_write(fd, (const byte*) packet->ptr(), packet->length(),
                 MYF(MY_WME)))
      DBUG_RETURN(-1);
  }
  DBUG_RETURN(0);
}

/*
  Go through all character combinations and ensure that sql_lex.cc can
  parse it as an identifier.

  SYNOPSIS
    require_quotes()
    name                  attribute name
    name_length           length of name

  RETURN
    #   Pointer to conflicting character
    0   No conflicting character
*/

static const char *require_quotes(const char *name, uint name_length)
{
  uint length;
  bool pure_digit= TRUE;
  const char *end= name + name_length;

  for (; name < end ; name++)
  {
    uchar chr= (uchar) *name;
    length= my_mbcharlen(system_charset_info, chr);
    if (length == 1 && !system_charset_info->ident_map[chr])
      return name;
    if (length == 1 && (chr < '0' || chr > '9'))
      pure_digit= FALSE;
  }
  if (pure_digit)
    return name;
  return 0;
}

/*
  Quote the given identifier if needed and append it to the target string.
  If the given identifier is empty, it will be quoted.

  SYNOPSIS
    append_identifier()
    thd                   thread handler
    packet                target string
    name                  the identifier to be appended
    name_length           length of the appending identifier
*/

void
append_identifier(THD *thd, String *packet, const char *name, uint length)
{
  const char *name_end;
  char quote_char;
  int q= get_quote_char_for_identifier(thd, name, length);

  if (q == EOF)
  {
    packet->append(name, length, system_charset_info);
    return;
  }

  /*
    The identifier must be quoted as it includes a quote character or
    it's a keyword
  */

  VOID(packet->reserve(length*2 + 2));
  quote_char= (char) q;
  packet->append(&quote_char, 1, system_charset_info);

  for (name_end= name+length ; name < name_end ; name+= length)
  {
    uchar chr= (uchar) *name;
    length= my_mbcharlen(system_charset_info, chr);
    /*
      my_mbcharlen can return 0 on a wrong multibyte
      sequence. It is possible when upgrading from 4.0,
      and an identifier contains some accented characters.
      The manual says it does not work. So we'll just
      change length to 1 so as not to hang in an endless loop.
    */
    if (!length)
      length= 1;
    if (length == 1 && chr == (uchar) quote_char)
      packet->append(&quote_char, 1, system_charset_info);
    packet->append(name, length, packet->charset());
  }
  packet->append(&quote_char, 1, system_charset_info);
}

/*
  Get the quote character for displaying an identifier.

  SYNOPSIS
    get_quote_char_for_identifier()
    thd         Thread handler
    name        name to quote
    length      length of name

  IMPLEMENTATION
    Force quoting in the following cases:
      - name is empty (for one, it is possible when we use this function for
        quoting user and host names for DEFINER clause);
      - name is a keyword;
      - name includes a special character;
    Otherwise the identifier is quoted only if the option
    OPTION_QUOTE_SHOW_CREATE is set.

  RETURN
    EOF   No quote character is needed
    #     Quote character
*/

int get_quote_char_for_identifier(THD *thd, const char *name, uint length)
{
  if (length &&
      !is_keyword(name,length) &&
      !require_quotes(name, length) &&
      !(thd->options & OPTION_QUOTE_SHOW_CREATE))
    return EOF;
  if (thd->variables.sql_mode & MODE_ANSI_QUOTES)
    return '"';
  return '`';
}
  808. /* Append directory name (if exists) to CREATE INFO */
  809. static void append_directory(THD *thd, String *packet, const char *dir_type,
  810. const char *filename)
  811. {
  812. if (filename && !(thd->variables.sql_mode & MODE_NO_DIR_IN_CREATE))
  813. {
  814. uint length= dirname_length(filename);
  815. packet->append(' ');
  816. packet->append(dir_type);
  817. packet->append(STRING_WITH_LEN(" DIRECTORY='"));
  818. #ifdef __WIN__
  819. /* Convert \ to / to be able to create table on unix */
  820. char *winfilename= (char*) thd->memdup(filename, length);
  821. char *pos, *end;
  822. for (pos= winfilename, end= pos+length ; pos < end ; pos++)
  823. {
  824. if (*pos == '\\')
  825. *pos = '/';
  826. }
  827. filename= winfilename;
  828. #endif
  829. packet->append(filename, length);
  830. packet->append('\'');
  831. }
  832. }
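The Windows-only branch of append_directory() rewrites path separators before emitting the DIRECTORY clause. A standalone sketch of that rewrite, using a hypothetical helper name and std::string in place of the in-place edit on a memdup'ed buffer:

```cpp
#include <cassert>
#include <string>

// Convert Windows directory separators to '/' so that the emitted
// DATA DIRECTORY / INDEX DIRECTORY clause can also be replayed on Unix.
static std::string to_unix_separators(std::string path)
{
    for (char &c : path)
        if (c == '\\')
            c = '/';
    return path;
}
```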
  833. #define LIST_PROCESS_HOST_LEN 64
  834. /*
  835. Build a CREATE TABLE statement for a table.
  836. SYNOPSIS
  837. store_create_info()
  838. thd The thread
  839. table_list A list containing one table to write statement
  840. for.
  841. packet Pointer to a string where statement will be
  842. written.
  843. create_info_arg Pointer to create information that can be used
  844. to tailor the format of the statement. Can be
  845. NULL, in which case only SQL_MODE is considered
  846. when building the statement.
  847. NOTE
  848. Currently always returns 0, but might return an error code in the
  849. future.
  850. RETURN
  851. 0 OK
  852. */
  853. int store_create_info(THD *thd, TABLE_LIST *table_list, String *packet,
  854. HA_CREATE_INFO *create_info_arg)
  855. {
  856. List<Item> field_list;
  857. char tmp[MAX_FIELD_WIDTH], *for_str, buff[128], *end, uname[NAME_LEN*3+1];
  858. const char *alias;
  859. String type(tmp, sizeof(tmp), system_charset_info);
  860. Field **ptr,*field;
  861. uint primary_key;
  862. KEY *key_info;
  863. TABLE *table= table_list->table;
  864. handler *file= table->file;
  865. TABLE_SHARE *share= table->s;
  866. HA_CREATE_INFO create_info;
  867. bool show_table_options= FALSE;
  868. bool foreign_db_mode= (thd->variables.sql_mode & (MODE_POSTGRESQL |
  869. MODE_ORACLE |
  870. MODE_MSSQL |
  871. MODE_DB2 |
  872. MODE_MAXDB |
  873. MODE_ANSI)) != 0;
  874. bool limited_mysql_mode= (thd->variables.sql_mode & (MODE_NO_FIELD_OPTIONS |
  875. MODE_MYSQL323 |
  876. MODE_MYSQL40)) != 0;
  877. my_bitmap_map *old_map;
  878. DBUG_ENTER("store_create_info");
  879. DBUG_PRINT("enter",("table: %s", table->s->table_name.str));
  880. restore_record(table, s->default_values); // Get empty record
  881. if (share->tmp_table)
  882. packet->append(STRING_WITH_LEN("CREATE TEMPORARY TABLE "));
  883. else
  884. packet->append(STRING_WITH_LEN("CREATE TABLE "));
  885. if (table_list->schema_table)
  886. alias= table_list->schema_table->table_name;
  887. else
  888. {
  889. if (lower_case_table_names == 2)
  890. alias= table->alias;
  891. else
  892. {
  893. alias= share->table_name.str;
  894. }
  895. }
  896. append_identifier(thd, packet, alias, strlen(alias));
  897. packet->append(STRING_WITH_LEN(" (\n"));
  898. /*
  899. We need this to get default values from the table
  900. We have to restore the read_set if we are called from insert in case
  901. of row based replication.
  902. */
  903. old_map= tmp_use_all_columns(table, table->read_set);
  904. for (ptr=table->field ; (field= *ptr); ptr++)
  905. {
  906. bool has_default;
  907. bool has_now_default;
  908. uint flags = field->flags;
  909. if (ptr != table->field)
  910. packet->append(STRING_WITH_LEN(",\n"));
  911. packet->append(STRING_WITH_LEN(" "));
  912. append_identifier(thd,packet,field->field_name, strlen(field->field_name));
  913. packet->append(' ');
  914. // check for surprises from the previous call to Field::sql_type()
  915. if (type.ptr() != tmp)
  916. type.set(tmp, sizeof(tmp), system_charset_info);
  917. else
  918. type.set_charset(system_charset_info);
  919. field->sql_type(type);
  920. packet->append(type.ptr(), type.length(), system_charset_info);
  921. if (field->has_charset() &&
  922. !(thd->variables.sql_mode & (MODE_MYSQL323 | MODE_MYSQL40)))
  923. {
  924. if (field->charset() != share->table_charset)
  925. {
  926. packet->append(STRING_WITH_LEN(" CHARACTER SET "));
  927. packet->append(field->charset()->csname);
  928. }
  929. /*
  930. For string types dump collation name only if
  931. collation is not primary for the given charset
  932. */
  933. if (!(field->charset()->state & MY_CS_PRIMARY))
  934. {
  935. packet->append(STRING_WITH_LEN(" COLLATE "));
  936. packet->append(field->charset()->name);
  937. }
  938. }
  939. if (flags & NOT_NULL_FLAG)
  940. packet->append(STRING_WITH_LEN(" NOT NULL"));
  941. else if (field->type() == FIELD_TYPE_TIMESTAMP)
  942. {
  943. /*
  944. TIMESTAMP fields require an explicit NULL flag, because unlike
  945. all other fields they are treated as NOT NULL by default.
  946. */
  947. packet->append(STRING_WITH_LEN(" NULL"));
  948. }
  949. /*
  950. Again we use CURRENT_TIMESTAMP rather than NOW() because it is
  951. standard SQL
  952. */
  953. has_now_default= table->timestamp_field == field &&
  954. field->unireg_check != Field::TIMESTAMP_UN_FIELD;
  955. has_default= (field->type() != FIELD_TYPE_BLOB &&
  956. !(field->flags & NO_DEFAULT_VALUE_FLAG) &&
  957. field->unireg_check != Field::NEXT_NUMBER &&
  958. !((thd->variables.sql_mode & (MODE_MYSQL323 | MODE_MYSQL40))
  959. && has_now_default));
  960. if (has_default)
  961. {
  962. packet->append(STRING_WITH_LEN(" DEFAULT "));
  963. if (has_now_default)
  964. packet->append(STRING_WITH_LEN("CURRENT_TIMESTAMP"));
  965. else if (!field->is_null())
  966. { // Not null by default
  967. type.set(tmp, sizeof(tmp), field->charset());
  968. field->val_str(&type);
  969. if (type.length())
  970. {
  971. String def_val;
  972. uint dummy_errors;
  973. /* convert to system_charset_info == utf8 */
  974. def_val.copy(type.ptr(), type.length(), field->charset(),
  975. system_charset_info, &dummy_errors);
  976. append_unescaped(packet, def_val.ptr(), def_val.length());
  977. }
  978. else
  979. packet->append(STRING_WITH_LEN("''"));
  980. }
  981. else if (field->maybe_null())
  982. packet->append(STRING_WITH_LEN("NULL")); // Null as default
  983. else
  984. packet->append(tmp);
  985. }
  986. if (!limited_mysql_mode && table->timestamp_field == field &&
  987. field->unireg_check != Field::TIMESTAMP_DN_FIELD)
  988. packet->append(STRING_WITH_LEN(" ON UPDATE CURRENT_TIMESTAMP"));
  989. if (field->unireg_check == Field::NEXT_NUMBER &&
  990. !(thd->variables.sql_mode & MODE_NO_FIELD_OPTIONS))
  991. packet->append(STRING_WITH_LEN(" AUTO_INCREMENT"));
  992. if (field->comment.length)
  993. {
  994. packet->append(STRING_WITH_LEN(" COMMENT "));
  995. append_unescaped(packet, field->comment.str, field->comment.length);
  996. }
  997. }
  998. key_info= table->key_info;
  999. bzero((char*) &create_info, sizeof(create_info));
  1000. file->update_create_info(&create_info);
  1001. primary_key= share->primary_key;
  1002. for (uint i=0 ; i < share->keys ; i++,key_info++)
  1003. {
  1004. KEY_PART_INFO *key_part= key_info->key_part;
  1005. bool found_primary=0;
  1006. packet->append(STRING_WITH_LEN(",\n "));
  1007. if (i == primary_key && !strcmp(key_info->name, primary_key_name))
  1008. {
  1009. found_primary=1;
  1010. /*
  1011. No trailing space here: the space normally follows the key
  1012. name, and no name is printed for the primary key.
  1013. */
  1014. packet->append(STRING_WITH_LEN("PRIMARY KEY"));
  1015. }
  1016. else if (key_info->flags & HA_NOSAME)
  1017. packet->append(STRING_WITH_LEN("UNIQUE KEY "));
  1018. else if (key_info->flags & HA_FULLTEXT)
  1019. packet->append(STRING_WITH_LEN("FULLTEXT KEY "));
  1020. else if (key_info->flags & HA_SPATIAL)
  1021. packet->append(STRING_WITH_LEN("SPATIAL KEY "));
  1022. else
  1023. packet->append(STRING_WITH_LEN("KEY "));
  1024. if (!found_primary)
  1025. append_identifier(thd, packet, key_info->name, strlen(key_info->name));
  1026. packet->append(STRING_WITH_LEN(" ("));
  1027. for (uint j=0 ; j < key_info->key_parts ; j++,key_part++)
  1028. {
  1029. if (j)
  1030. packet->append(',');
  1031. if (key_part->field)
  1032. append_identifier(thd,packet,key_part->field->field_name,
  1033. strlen(key_part->field->field_name));
  1034. if (key_part->field &&
  1035. (key_part->length !=
  1036. table->field[key_part->fieldnr-1]->key_length() &&
  1037. !(key_info->flags & HA_FULLTEXT)))
  1038. {
  1039. char *end;
  1040. buff[0] = '(';
  1041. end= int10_to_str((long) key_part->length /
  1042. key_part->field->charset()->mbmaxlen,
  1043. buff + 1,10);
  1044. *end++ = ')';
  1045. packet->append(buff,(uint) (end-buff));
  1046. }
  1047. }
  1048. packet->append(')');
  1049. store_key_options(thd, packet, table, key_info);
  1050. if (key_info->parser)
  1051. {
  1052. packet->append(" WITH PARSER ", 13);
  1053. append_identifier(thd, packet, key_info->parser->name.str,
  1054. key_info->parser->name.length);
  1055. }
  1056. }
  1057. /*
  1058. Get possible foreign key definitions stored in InnoDB and append them
  1059. to the CREATE TABLE statement
  1060. */
  1061. if ((for_str= file->get_foreign_key_create_info()))
  1062. {
  1063. packet->append(for_str, strlen(for_str));
  1064. file->free_foreign_key_create_info(for_str);
  1065. }
  1066. packet->append(STRING_WITH_LEN("\n)"));
  1067. if (!(thd->variables.sql_mode & MODE_NO_TABLE_OPTIONS) && !foreign_db_mode)
  1068. {
  1069. show_table_options= TRUE;
  1070. /*
  1071. Get possible table space definitions and append them
  1072. to the CREATE TABLE statement
  1073. */
  1074. if ((for_str= file->get_tablespace_name(thd)))
  1075. {
  1076. packet->append(" TABLESPACE ");
  1077. packet->append(for_str, strlen(for_str));
  1078. packet->append(" STORAGE DISK");
  1079. my_free(for_str, MYF(0));
  1080. }
  1081. /*
  1082. IF check_create_info
  1083. THEN add ENGINE only if it was used when creating the table
  1084. */
  1085. if (!create_info_arg ||
  1086. (create_info_arg->used_fields & HA_CREATE_USED_ENGINE))
  1087. {
  1088. if (thd->variables.sql_mode & (MODE_MYSQL323 | MODE_MYSQL40))
  1089. packet->append(STRING_WITH_LEN(" TYPE="));
  1090. else
  1091. packet->append(STRING_WITH_LEN(" ENGINE="));
  1092. #ifdef WITH_PARTITION_STORAGE_ENGINE
  1093. if (table->part_info)
  1094. packet->append(ha_resolve_storage_engine_name(
  1095. table->part_info->default_engine_type));
  1096. else
  1097. packet->append(file->table_type());
  1098. #else
  1099. packet->append(file->table_type());
  1100. #endif
  1101. }
  1102. /*
  1103. Add AUTO_INCREMENT=... if there is an AUTO_INCREMENT column,
  1104. and NEXT_ID > 1 (the default). We must not print the clause
  1105. for engines that do not support this, as it would break the
  1106. import of dumps. But as of this writing, the test for whether
  1107. AUTO_INCREMENT columns are allowed and whether AUTO_INCREMENT=...
  1108. is supported is identical: !(file->table_flags() & HA_NO_AUTO_INCREMENT).
  1109. Because of that, we do not explicitly test for the feature,
  1110. but may extrapolate its existence from that of an AUTO_INCREMENT column.
  1111. */
  1112. if (create_info.auto_increment_value > 1)
  1113. {
  1114. packet->append(" AUTO_INCREMENT=", 16);
  1115. end= longlong10_to_str(create_info.auto_increment_value, buff,10);
  1116. packet->append(buff, (uint) (end - buff));
  1117. }
  1118. if (share->table_charset &&
  1119. !(thd->variables.sql_mode & MODE_MYSQL323) &&
  1120. !(thd->variables.sql_mode & MODE_MYSQL40))
  1121. {
  1122. /*
  1123. IF check_create_info
  1124. THEN add DEFAULT CHARSET only if it was used when creating the table
  1125. */
  1126. if (!create_info_arg ||
  1127. (create_info_arg->used_fields & HA_CREATE_USED_DEFAULT_CHARSET))
  1128. {
  1129. packet->append(STRING_WITH_LEN(" DEFAULT CHARSET="));
  1130. packet->append(share->table_charset->csname);
  1131. if (!(share->table_charset->state & MY_CS_PRIMARY))
  1132. {
  1133. packet->append(STRING_WITH_LEN(" COLLATE="));
  1134. packet->append(table->s->table_charset->name);
  1135. }
  1136. }
  1137. }
  1138. if (share->min_rows)
  1139. {
  1140. packet->append(STRING_WITH_LEN(" MIN_ROWS="));
  1141. end= longlong10_to_str(share->min_rows, buff, 10);
  1142. packet->append(buff, (uint) (end- buff));
  1143. }
  1144. if (share->max_rows && !table_list->schema_table)
  1145. {
  1146. packet->append(STRING_WITH_LEN(" MAX_ROWS="));
  1147. end= longlong10_to_str(share->max_rows, buff, 10);
  1148. packet->append(buff, (uint) (end - buff));
  1149. }
  1150. if (share->avg_row_length)
  1151. {
  1152. packet->append(STRING_WITH_LEN(" AVG_ROW_LENGTH="));
  1153. end= longlong10_to_str(share->avg_row_length, buff,10);
  1154. packet->append(buff, (uint) (end - buff));
  1155. }
  1156. if (share->db_create_options & HA_OPTION_PACK_KEYS)
  1157. packet->append(STRING_WITH_LEN(" PACK_KEYS=1"));
  1158. if (share->db_create_options & HA_OPTION_NO_PACK_KEYS)
  1159. packet->append(STRING_WITH_LEN(" PACK_KEYS=0"));
  1160. if (share->db_create_options & HA_OPTION_CHECKSUM)
  1161. packet->append(STRING_WITH_LEN(" CHECKSUM=1"));
  1162. if (share->db_create_options & HA_OPTION_DELAY_KEY_WRITE)
  1163. packet->append(STRING_WITH_LEN(" DELAY_KEY_WRITE=1"));
  1164. if (share->row_type != ROW_TYPE_DEFAULT)
  1165. {
  1166. packet->append(STRING_WITH_LEN(" ROW_FORMAT="));
  1167. packet->append(ha_row_type[(uint) share->row_type]);
  1168. }
  1169. if (table->s->key_block_size)
  1170. {
  1171. packet->append(STRING_WITH_LEN(" KEY_BLOCK_SIZE="));
  1172. end= longlong10_to_str(table->s->key_block_size, buff, 10);
  1173. packet->append(buff, (uint) (end - buff));
  1174. }
  1175. table->file->append_create_info(packet);
  1176. if (share->comment && share->comment[0])
  1177. {
  1178. packet->append(STRING_WITH_LEN(" COMMENT="));
  1179. append_unescaped(packet, share->comment, strlen(share->comment));
  1180. }
  1181. if (share->connect_string.length)
  1182. {
  1183. packet->append(STRING_WITH_LEN(" CONNECTION="));
  1184. append_unescaped(packet, share->connect_string.str, share->connect_string.length);
  1185. }
  1186. append_directory(thd, packet, "DATA", create_info.data_file_name);
  1187. append_directory(thd, packet, "INDEX", create_info.index_file_name);
  1188. }
  1189. #ifdef WITH_PARTITION_STORAGE_ENGINE
  1190. {
  1191. /*
  1192. Partition syntax for CREATE TABLE is at the end of the syntax.
  1193. */
  1194. uint part_syntax_len;
  1195. char *part_syntax;
  1196. if (table->part_info &&
  1197. (!table->part_info->is_auto_partitioned) &&
  1198. ((part_syntax= generate_partition_syntax(table->part_info,
  1199. &part_syntax_len,
  1200. FALSE,
  1201. show_table_options))))
  1202. {
  1203. packet->append(STRING_WITH_LEN(" /*!50100"));
  1204. packet->append(part_syntax, part_syntax_len);
  1205. packet->append(STRING_WITH_LEN(" */"));
  1206. my_free(part_syntax, MYF(0));
  1207. }
  1208. }
  1209. #endif
  1210. tmp_restore_column_map(table->read_set, old_map);
  1211. DBUG_RETURN(0);
  1212. }
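Inside the key loop above, the prefix length shown in parentheses (e.g. KEY (col(10))) is the stored byte length of the key part divided by the charset's maximum bytes per character, as computed with int10_to_str. A sketch of that arithmetic with a hypothetical helper name:

```cpp
#include <cassert>

// Display length of an index prefix in characters: the key part stores a
// byte length, so divide by the charset's mbmaxlen (3 for utf8 in 5.1).
static long prefix_chars(long key_part_bytes, long mbmaxlen)
{
    return key_part_bytes / mbmaxlen;
}
```

So a 30-byte prefix over a utf8 column is printed as (10), matching what the user wrote in the original CREATE TABLE.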
  1213. static void store_key_options(THD *thd, String *packet, TABLE *table,
  1214. KEY *key_info)
  1215. {
  1216. bool limited_mysql_mode= (thd->variables.sql_mode &
  1217. (MODE_NO_FIELD_OPTIONS | MODE_MYSQL323 |
  1218. MODE_MYSQL40)) != 0;
  1219. bool foreign_db_mode= (thd->variables.sql_mode & (MODE_POSTGRESQL |
  1220. MODE_ORACLE |
  1221. MODE_MSSQL |
  1222. MODE_DB2 |
  1223. MODE_MAXDB |
  1224. MODE_ANSI)) != 0;
  1225. char *end, buff[32];
  1226. if (!(thd->variables.sql_mode & MODE_NO_KEY_OPTIONS) &&
  1227. !limited_mysql_mode && !foreign_db_mode)
  1228. {
  1229. if (key_info->algorithm == HA_KEY_ALG_BTREE)
  1230. packet->append(STRING_WITH_LEN(" USING BTREE"));
  1231. if (key_info->algorithm == HA_KEY_ALG_HASH)
  1232. packet->append(STRING_WITH_LEN(" USING HASH"));
  1233. /* send USING only in non-default case: non-spatial rtree */
  1234. if ((key_info->algorithm == HA_KEY_ALG_RTREE) &&
  1235. !(key_info->flags & HA_SPATIAL))
  1236. packet->append(STRING_WITH_LEN(" USING RTREE"));
  1237. if ((key_info->flags & HA_USES_BLOCK_SIZE) &&
  1238. table->s->key_block_size != key_info->block_size)
  1239. {
  1240. packet->append(STRING_WITH_LEN(" KEY_BLOCK_SIZE="));
  1241. end= longlong10_to_str(key_info->block_size, buff, 10);
  1242. packet->append(buff, (uint) (end - buff));
  1243. }
  1244. }
  1245. }
  1246. void
  1247. view_store_options(THD *thd, TABLE_LIST *table, String *buff)
  1248. {
  1249. buff->append(STRING_WITH_LEN("ALGORITHM="));
  1250. switch ((int8)table->algorithm) {
  1251. case VIEW_ALGORITHM_UNDEFINED:
  1252. buff->append(STRING_WITH_LEN("UNDEFINED "));
  1253. break;
  1254. case VIEW_ALGORITHM_TMPTABLE:
  1255. buff->append(STRING_WITH_LEN("TEMPTABLE "));
  1256. break;
  1257. case VIEW_ALGORITHM_MERGE:
  1258. buff->append(STRING_WITH_LEN("MERGE "));
  1259. break;
  1260. default:
  1261. DBUG_ASSERT(0); // never should happen
  1262. }
  1263. append_definer(thd, buff, &table->definer.user, &table->definer.host);
  1264. if (table->view_suid)
  1265. buff->append(STRING_WITH_LEN("SQL SECURITY DEFINER "));
  1266. else
  1267. buff->append(STRING_WITH_LEN("SQL SECURITY INVOKER "));
  1268. }
  1269. /*
  1270. Append DEFINER clause to the given buffer.
  1271. SYNOPSIS
  1272. append_definer()
  1273. thd [in] thread handle
  1274. buffer [inout] buffer to hold DEFINER clause
  1275. definer_user [in] user name part of definer
  1276. definer_host [in] host name part of definer
  1277. */
  1278. void append_definer(THD *thd, String *buffer, const LEX_STRING *definer_user,
  1279. const LEX_STRING *definer_host)
  1280. {
  1281. buffer->append(STRING_WITH_LEN("DEFINER="));
  1282. append_identifier(thd, buffer, definer_user->str, definer_user->length);
  1283. buffer->append('@');
  1284. append_identifier(thd, buffer, definer_host->str, definer_host->length);
  1285. buffer->append(' ');
  1286. }
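The clause layout that append_definer() produces can be illustrated with plain std::string. This is a sketch only: quoting is reduced to fixed backticks, whereas the real code goes through append_identifier() and so honours ANSI_QUOTES and forced quoting of empty names:

```cpp
#include <cassert>
#include <string>

// Sketch of the DEFINER clause shape: DEFINER=`user`@`host` followed by
// a single trailing space, as append_definer() emits into the buffer.
static std::string definer_clause(const std::string &user,
                                  const std::string &host)
{
    return "DEFINER=`" + user + "`@`" + host + "` ";
}
```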
  1287. int
  1288. view_store_create_info(THD *thd, TABLE_LIST *table, String *buff)
  1289. {
  1290. my_bool foreign_db_mode= (thd->variables.sql_mode & (MODE_POSTGRESQL |
  1291. MODE_ORACLE |
  1292. MODE_MSSQL |
  1293. MODE_DB2 |
  1294. MODE_MAXDB |
  1295. MODE_ANSI)) != 0;
  1296. /*
  1297. Compact output format for a view can be used
  1298. - if the user has the db of this view as the current db
  1299. - if this view only references tables inside its own db
  1300. */
  1301. if (!thd->db || strcmp(thd->db, table->view_db.str))
  1302. table->compact_view_format= FALSE;
  1303. else
  1304. {
  1305. TABLE_LIST *tbl;
  1306. table->compact_view_format= TRUE;
  1307. for (tbl= thd->lex->query_tables;
  1308. tbl;
  1309. tbl= tbl->next_global)
  1310. {
  1311. if (strcmp(table->view_db.str, tbl->view ? tbl->view_db.str :tbl->db)!= 0)
  1312. {
  1313. table->compact_view_format= FALSE;
  1314. break;
  1315. }
  1316. }
  1317. }
  1318. buff->append(STRING_WITH_LEN("CREATE "));
  1319. if (!foreign_db_mode)
  1320. {
  1321. view_store_options(thd, table, buff);
  1322. }
  1323. buff->append(STRING_WITH_LEN("VIEW "));
  1324. if (!table->compact_view_format)
  1325. {
  1326. append_identifier(thd, buff, table->view_db.str, table->view_db.length);
  1327. buff->append('.');
  1328. }
  1329. append_identifier(thd, buff, table->view_name.str, table->view_name.length);
  1330. buff->append(STRING_WITH_LEN(" AS "));
  1331. /*
  1332. We can't just use table->query, because our SQL_MODE may trigger
  1333. a different syntax, like when ANSI_QUOTES is defined.
  1334. */
  1335. table->view->unit.print(buff);
  1336. if (table->with_check != VIEW_CHECK_NONE)
  1337. {
  1338. if (table->with_check == VIEW_CHECK_LOCAL)
  1339. buff->append(STRING_WITH_LEN(" WITH LOCAL CHECK OPTION"));
  1340. else
  1341. buff->append(STRING_WITH_LEN(" WITH CASCADED CHECK OPTION"));
  1342. }
  1343. return 0;
  1344. }
  1345. /****************************************************************************
  1346. Return info about all processes
  1347. returns for each thread: thread id, user, host, db, command, info
  1348. ****************************************************************************/
  1349. class thread_info :public ilink {
  1350. public:
  1351. static void *operator new(size_t size)
  1352. {
  1353. return (void*) sql_alloc((uint) size);
  1354. }
  1355. static void operator delete(void *ptr __attribute__((unused)),
  1356. size_t size __attribute__((unused)))
  1357. { TRASH(ptr, size); }
  1358. ulong thread_id;
  1359. time_t start_time;
  1360. uint command;
  1361. const char *user,*host,*db,*proc_info,*state_info;
  1362. char *query;
  1363. };
  1364. #ifdef HAVE_EXPLICIT_TEMPLATE_INSTANTIATION
  1365. template class I_List<thread_info>;
  1366. #endif
  1367. void mysqld_list_processes(THD *thd,const char *user, bool verbose)
  1368. {
  1369. Item *field;
  1370. List<Item> field_list;
  1371. I_List<thread_info> thread_infos;
  1372. ulong max_query_length= (verbose ? thd->variables.max_allowed_packet :
  1373. PROCESS_LIST_WIDTH);
  1374. Protocol *protocol= thd->protocol;
  1375. DBUG_ENTER("mysqld_list_processes");
  1376. field_list.push_back(new Item_int("Id",0,11));
  1377. field_list.push_back(new Item_empty_string("User",16));
  1378. field_list.push_back(new Item_empty_string("Host",LIST_PROCESS_HOST_LEN));
  1379. field_list.push_back(field=new Item_empty_string("db",NAME_LEN));
  1380. field->maybe_null=1;
  1381. field_list.push_back(new Item_empty_string("Command",16));
  1382. field_list.push_back(new Item_return_int("Time",7, FIELD_TYPE_LONG));
  1383. field_list.push_back(field=new Item_empty_string("State",30));
  1384. field->maybe_null=1;
  1385. field_list.push_back(field=new Item_empty_string("Info",max_query_length));
  1386. field->maybe_null=1;
  1387. if (protocol->send_fields(&field_list,
  1388. Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
  1389. DBUG_VOID_RETURN;
  1390. VOID(pthread_mutex_lock(&LOCK_thread_count)); // For unlink from list
  1391. if (!thd->killed)
  1392. {
  1393. I_List_iterator<THD> it(threads);
  1394. THD *tmp;
  1395. while ((tmp=it++))
  1396. {
  1397. Security_context *tmp_sctx= tmp->security_ctx;
  1398. struct st_my_thread_var *mysys_var;
  1399. if ((tmp->vio_ok() || tmp->system_thread) &&
  1400. (!user || (tmp_sctx->user && !strcmp(tmp_sctx->user, user))))
  1401. {
  1402. thread_info *thd_info= new thread_info;
  1403. thd_info->thread_id=tmp->thread_id;
  1404. thd_info->user= thd->strdup(tmp_sctx->user ? tmp_sctx->user :
  1405. (tmp->system_thread ?
  1406. "system user" : "unauthenticated user"));
  1407. if (tmp->peer_port && (tmp_sctx->host || tmp_sctx->ip) &&
  1408. thd->security_ctx->host_or_ip[0])
  1409. {
  1410. if ((thd_info->host= thd->alloc(LIST_PROCESS_HOST_LEN+1)))
  1411. my_snprintf((char *) thd_info->host, LIST_PROCESS_HOST_LEN,
  1412. "%s:%u", tmp_sctx->host_or_ip, tmp->peer_port);
  1413. }
  1414. else
  1415. thd_info->host= thd->strdup(tmp_sctx->host_or_ip);
  1416. if ((thd_info->db=tmp->db)) // Safe test
  1417. thd_info->db=thd->strdup(thd_info->db);
  1418. thd_info->command=(int) tmp->command;
  1419. if ((mysys_var= tmp->mysys_var))
  1420. pthread_mutex_lock(&mysys_var->mutex);
  1421. thd_info->proc_info= (char*) (tmp->killed == THD::KILL_CONNECTION? "Killed" : 0);
  1422. #ifndef EMBEDDED_LIBRARY
  1423. thd_info->state_info= (char*) (tmp->locked ? "Locked" :
  1424. tmp->net.reading_or_writing ?
  1425. (tmp->net.reading_or_writing == 2 ?
  1426. "Writing to net" :
  1427. thd_info->command == COM_SLEEP ? "" :
  1428. "Reading from net") :
  1429. tmp->proc_info ? tmp->proc_info :
  1430. tmp->mysys_var &&
  1431. tmp->mysys_var->current_cond ?
  1432. "Waiting on cond" : NullS);
  1433. #else
  1434. thd_info->state_info= (char*)"Writing to net";
  1435. #endif
  1436. if (mysys_var)
  1437. pthread_mutex_unlock(&mysys_var->mutex);
  1438. #if !defined(DONT_USE_THR_ALARM) && ! defined(SCO)
  1439. if (pthread_kill(tmp->real_id,0))
  1440. tmp->proc_info="*** DEAD ***"; // This shouldn't happen
  1441. #endif
  1442. #ifdef EXTRA_DEBUG
  1443. thd_info->start_time= tmp->time_after_lock;
  1444. #else
  1445. thd_info->start_time= tmp->start_time;
  1446. #endif
  1447. thd_info->query=0;
  1448. if (tmp->query)
  1449. {
  1450. /*
  1451. query_length is always set to 0 when we set query = NULL; see
  1452. the comment in sql_class.h for why this prevents crashes in
  1453. possible races with query_length
  1454. */
  1455. uint length= min(max_query_length, tmp->query_length);
  1456. thd_info->query=(char*) thd->strmake(tmp->query,length);
  1457. }
  1458. thread_infos.append(thd_info);
  1459. }
  1460. }
  1461. }
  1462. VOID(pthread_mutex_unlock(&LOCK_thread_count));
  1463. thread_info *thd_info;
  1464. time_t now= time(0);
  1465. while ((thd_info=thread_infos.get()))
  1466. {
  1467. protocol->prepare_for_resend();
  1468. protocol->store((ulonglong) thd_info->thread_id);
  1469. protocol->store(thd_info->user, system_charset_info);
  1470. protocol->store(thd_info->host, system_charset_info);
  1471. protocol->store(thd_info->db, system_charset_info);
  1472. if (thd_info->proc_info)
  1473. protocol->store(thd_info->proc_info, system_charset_info);
  1474. else
  1475. protocol->store(command_name[thd_info->command].str, system_charset_info);
  1476. if (thd_info->start_time)
  1477. protocol->store((uint32) (now - thd_info->start_time));
  1478. else
  1479. protocol->store_null();
  1480. protocol->store(thd_info->state_info, system_charset_info);
  1481. protocol->store(thd_info->query, system_charset_info);
  1482. if (protocol->write())
  1483. break; /* purecov: inspected */
  1484. }
  1485. send_eof(thd);
  1486. DBUG_VOID_RETURN;
  1487. }
  1488. int fill_schema_processlist(THD* thd, TABLE_LIST* tables, COND* cond)
  1489. {
  1490. TABLE *table= tables->table;
  1491. CHARSET_INFO *cs= system_charset_info;
  1492. char *user;
  1493. time_t now= time(0);
  1494. DBUG_ENTER("fill_schema_processlist");
  1495. user= thd->security_ctx->master_access & PROCESS_ACL ?
  1496. NullS : thd->security_ctx->priv_user;
  1497. VOID(pthread_mutex_lock(&LOCK_thread_count));
  1498. if (!thd->killed)
  1499. {
  1500. I_List_iterator<THD> it(threads);
  1501. THD* tmp;
  1502. while ((tmp= it++))
  1503. {
  1504. Security_context *tmp_sctx= tmp->security_ctx;
  1505. struct st_my_thread_var *mysys_var;
  1506. const char *val;
  1507. if ((!tmp->vio_ok() && !tmp->system_thread) ||
  1508. (user && (!tmp_sctx->user || strcmp(tmp_sctx->user, user))))
  1509. continue;
  1510. restore_record(table, s->default_values);
  1511. /* ID */
  1512. table->field[0]->store((longlong) tmp->thread_id, TRUE);
  1513. /* USER */
  1514. val= tmp_sctx->user ? tmp_sctx->user :
  1515. (tmp->system_thread ? "system user" : "unauthenticated user");
  1516. table->field[1]->store(val, strlen(val), cs);
  1517. /* HOST */
  1518. if (tmp->peer_port && (tmp_sctx->host || tmp_sctx->ip) &&
  1519. thd->security_ctx->host_or_ip[0])
  1520. {
  1521. char host[LIST_PROCESS_HOST_LEN + 1];
  1522. my_snprintf(host, LIST_PROCESS_HOST_LEN, "%s:%u",
  1523. tmp_sctx->host_or_ip, tmp->peer_port);
  1524. table->field[2]->store(host, strlen(host), cs);
  1525. }
  1526. else
  1527. table->field[2]->store(tmp_sctx->host_or_ip,
  1528. strlen(tmp_sctx->host_or_ip), cs);
  1529. /* DB */
  1530. if (tmp->db)
  1531. {
  1532. table->field[3]->store(tmp->db, strlen(tmp->db), cs);
  1533. table->field[3]->set_notnull();
  1534. }
  1535. if ((mysys_var= tmp->mysys_var))
  1536. pthread_mutex_lock(&mysys_var->mutex);
  1537. /* COMMAND */
  1538. if ((val= (char *) (tmp->killed == THD::KILL_CONNECTION? "Killed" : 0)))
  1539. table->field[4]->store(val, strlen(val), cs);
  1540. else
  1541. table->field[4]->store(command_name[tmp->command].str,
  1542. command_name[tmp->command].length, cs);
  1543. /* TIME */
  1544. table->field[5]->store((uint32)(tmp->start_time ?
  1545. now - tmp->start_time : 0), TRUE);
  1546. /* STATE */
  1547. #ifndef EMBEDDED_LIBRARY
  1548. val= (char*) (tmp->locked ? "Locked" :
  1549. tmp->net.reading_or_writing ?
  1550. (tmp->net.reading_or_writing == 2 ?
  1551. "Writing to net" :
  1552. tmp->command == COM_SLEEP ? "" :
  1553. "Reading from net") :
  1554. tmp->proc_info ? tmp->proc_info :
  1555. tmp->mysys_var &&
  1556. tmp->mysys_var->current_cond ?
  1557. "Waiting on cond" : NullS);
  1558. #else
  1559. val= (char *) "Writing to net";
  1560. #endif
  1561. if (val)
  1562. {
  1563. table->field[6]->store(val, strlen(val), cs);
  1564. table->field[6]->set_notnull();
  1565. }
  1566. if (mysys_var)
  1567. pthread_mutex_unlock(&mysys_var->mutex);
  1568. /* INFO */
  1569. if (tmp->query)
  1570. {
  1571. table->field[7]->store(tmp->query,
  1572. min(PROCESS_LIST_INFO_WIDTH,
  1573. tmp->query_length), cs);
  1574. table->field[7]->set_notnull();
  1575. }
  1576. if (schema_table_store_record(thd, table))
  1577. {
  1578. VOID(pthread_mutex_unlock(&LOCK_thread_count));
  1579. DBUG_RETURN(1);
  1580. }
  1581. }
  1582. }
  1583. VOID(pthread_mutex_unlock(&LOCK_thread_count));
  1584. DBUG_RETURN(0);
  1585. }
  1586. /*****************************************************************************
  1587. Status functions
  1588. *****************************************************************************/
  1589. static DYNAMIC_ARRAY all_status_vars;
  1590. static bool status_vars_inited= 0;
  1591. static int show_var_cmp(const void *var1, const void *var2)
  1592. {
  1593. return strcmp(((SHOW_VAR*)var1)->name, ((SHOW_VAR*)var2)->name);
  1594. }
  1595. /*
  1596. deletes all the SHOW_UNDEF elements from the array and calls
  1597. delete_dynamic() if it's completely empty.
  1598. */
  1599. static void shrink_var_array(DYNAMIC_ARRAY *array)
  1600. {
  1601. uint a,b;
  1602. SHOW_VAR *all= dynamic_element(array, 0, SHOW_VAR *);
  1603. for (a= b= 0; b < array->elements; b++)
  1604. if (all[b].type != SHOW_UNDEF)
  1605. all[a++]= all[b];
  1606. if (a)
  1607. {
  1608. bzero(all+a, sizeof(SHOW_VAR)); // writing NULL-element to the end
  1609. array->elements= a;
  1610. }
  1611. else // array is completely empty - delete it
  1612. delete_dynamic(array);
  1613. }
  1614. /*
  1615. Adds an array of SHOW_VAR entries to the output of SHOW STATUS
  1616. SYNOPSIS
  1617. add_status_vars(SHOW_VAR *list)
  1618. list - an array of SHOW_VAR entries to add to all_status_vars
  1619. the last entry must be {0,0,SHOW_UNDEF}
  1620. NOTE
  1621. The handling of all_status_vars[] is completely internal, it's allocated
  1622. automatically when something is added to it, and deleted completely when
  1623. the last entry is removed.
  1624. As a special optimization, if add_status_vars() is called before
  1625. init_status_vars(), it assumes "startup mode" - neither concurrent access
  1626. to the array nor SHOW STATUS is possible (thus it skips locks and qsort).
  1627. The last entry of all_status_vars[] should always be {0,0,SHOW_UNDEF}
  1628. */
  1629. int add_status_vars(SHOW_VAR *list)
  1630. {
  1631. int res= 0;
  1632. if (status_vars_inited)
  1633. pthread_mutex_lock(&LOCK_status);
  1634. if (!all_status_vars.buffer && // array is not allocated yet - do it now
  1635. my_init_dynamic_array(&all_status_vars, sizeof(SHOW_VAR), 200, 20))
  1636. {
  1637. res= 1;
  1638. goto err;
  1639. }
  1640. while (list->name)
  1641. res|= insert_dynamic(&all_status_vars, (gptr)list++);
  1642. res|= insert_dynamic(&all_status_vars, (gptr)list); // appending NULL-element
  1643. all_status_vars.elements--; // but the next insert_dynamic should overwrite it
  1644. if (status_vars_inited)
  1645. sort_dynamic(&all_status_vars, show_var_cmp);
  1646. err:
  1647. if (status_vars_inited)
  1648. pthread_mutex_unlock(&LOCK_status);
  1649. return res;
  1650. }

/*
  Make all_status_vars[] usable for SHOW STATUS

  NOTE
    See add_status_vars(). Before the init_status_vars() call, add_status_vars()
    works in a special fast "startup" mode. Thus init_status_vars()
    should be called as late as possible but before enabling multi-threading.
*/
void init_status_vars()
{
  status_vars_inited=1;
  sort_dynamic(&all_status_vars, show_var_cmp);
}

/*
  Catch-all cleanup function, cleans up everything no matter what

  DESCRIPTION
    This function is not strictly required if all add_to_status/
    remove_status_vars are properly paired, but it's a safety measure that
    deletes everything from all_status_vars[] even if some
    remove_status_vars were forgotten
*/
void free_status_vars()
{
  delete_dynamic(&all_status_vars);
}

/*
  Removes an array of SHOW_VAR entries from the output of SHOW STATUS

  SYNOPSIS
    remove_status_vars(SHOW_VAR *list)
    list - an array of SHOW_VAR entries to remove from all_status_vars
           the last entry must be {0,0,SHOW_UNDEF}

  NOTE
    there's lots of room for optimizing this, especially in non-sorted mode,
    but nobody cares - it may be called only in case of failed plugin
    initialization in the mysqld startup.
*/
void remove_status_vars(SHOW_VAR *list)
{
  if (status_vars_inited)
  {
    pthread_mutex_lock(&LOCK_status);
    SHOW_VAR *all= dynamic_element(&all_status_vars, 0, SHOW_VAR *);
    int a= 0, b= all_status_vars.elements, c= (a+b)/2;

    for (; list->name; list++)
    {
      int res= 1;
      /* binary search over the name-sorted array */
      for (a= 0, b= all_status_vars.elements; a < b; )
      {
        c= (a+b)/2;
        res= show_var_cmp(list, all+c);
        if (res < 0)
          b= c;
        else if (res > 0)
          a= c+1;
        else
          break;
      }
      if (res == 0)
        all[c].type= SHOW_UNDEF;
    }
    shrink_var_array(&all_status_vars);
    pthread_mutex_unlock(&LOCK_status);
  }
  else
  {
    SHOW_VAR *all= dynamic_element(&all_status_vars, 0, SHOW_VAR *);
    uint i;
    for (; list->name; list++)
    {
      for (i= 0; i < all_status_vars.elements; i++)
      {
        if (show_var_cmp(list, all+i))
          continue;
        all[i].type= SHOW_UNDEF;
        break;
      }
    }
    shrink_var_array(&all_status_vars);
  }
}
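In sorted mode the removal path is a plain binary search over the name-sorted array; matches are only marked, and shrink_var_array() compacts the array afterwards. A self-contained sketch of that lookup, over hypothetical status-variable names:

```cpp
#include <cassert>
#include <cstring>

// Binary search over a name-sorted array, mirroring the sorted path of
// remove_status_vars(): narrow [a, b) until the candidate is found or the
// range is exhausted. The names used in the test are hypothetical.
static int find(const char **all, int elements, const char *name)
{
  int a= 0, b= elements, c= 0, res= 1;
  while (a < b)
  {
    c= (a + b) / 2;
    res= strcmp(name, all[c]);
    if (res < 0)
      b= c;                 // target sorts before all[c]
    else if (res > 0)
      a= c + 1;             // target sorts after all[c]
    else
      break;                // exact match at index c
  }
  return res == 0 ? c : -1; // index of the match, or -1 if absent
}
```

Note the half-open-interval form (`a < b`, `a= c + 1`): it guarantees every element, including index 0, can be compared before the range collapses.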

static bool show_status_array(THD *thd, const char *wild,
                              SHOW_VAR *variables,
                              enum enum_var_type value_type,
                              struct system_status_var *status_var,
                              const char *prefix, TABLE *table)
{
  char buff[SHOW_VAR_FUNC_BUFF_SIZE], *prefix_end;
  /* the variable name should not be longer than 80 characters */
  char name_buffer[80];
  int len;
  LEX_STRING null_lex_str;
  SHOW_VAR tmp, *var;
  DBUG_ENTER("show_status_array");

  null_lex_str.str= 0;                          // For sys_var->value_ptr()
  null_lex_str.length= 0;

  prefix_end=strnmov(name_buffer, prefix, sizeof(name_buffer)-1);
  if (*prefix)
    *prefix_end++= '_';
  len=name_buffer + sizeof(name_buffer) - prefix_end;

  for (; variables->name; variables++)
  {
    strnmov(prefix_end, variables->name, len);
    name_buffer[sizeof(name_buffer)-1]=0;       /* Safety */
    /*
      if var->type is SHOW_FUNC, call the function.
      Repeat as necessary, if new var is again SHOW_FUNC
    */
    for (var=variables; var->type == SHOW_FUNC; var= &tmp)
      ((mysql_show_var_func)(var->value))(thd, &tmp, buff);

    SHOW_TYPE show_type=var->type;
    if (show_type == SHOW_ARRAY)
    {
      show_status_array(thd, wild, (SHOW_VAR *) var->value,
                        value_type, status_var, name_buffer, table);
    }
    else
    {
      if (!(wild && wild[0] && wild_case_compare(system_charset_info,
                                                 name_buffer, wild)))
      {
        char *value=var->value;
        const char *pos, *end;                  // We assign a lot of const's
        long nr;
        if (show_type == SHOW_SYS)
        {
          show_type= ((sys_var*) value)->type();
          value= (char*) ((sys_var*) value)->value_ptr(thd, value_type,
                                                       &null_lex_str);
        }

        pos= end= buff;
        /*
          note that value may be == buff. All SHOW_xxx code below
          should still work in this case
        */
        switch (show_type) {
        case SHOW_DOUBLE_STATUS:
        {
          value= ((char *) status_var + (ulong) value);
          end= buff + sprintf(buff, "%f", *(double*) value);
          break;
        }
        case SHOW_LONG_STATUS:
          value= ((char *) status_var + (ulong) value);
          /* fall through */
        case SHOW_LONG:
        case SHOW_LONG_NOFLUSH: // the difference lies in refresh_status()
          end= int10_to_str(*(long*) value, buff, 10);
          break;
        case SHOW_LONGLONG:
          end= longlong10_to_str(*(longlong*) value, buff, 10);
          break;
        case SHOW_HA_ROWS:
          end= longlong10_to_str((longlong) *(ha_rows*) value, buff, 10);
          break;
        case SHOW_BOOL:
          end= strmov(buff, *(bool*) value ? "ON" : "OFF");
          break;
        case SHOW_MY_BOOL:
          end= strmov(buff, *(my_bool*) value ? "ON" : "OFF");
          break;
        case SHOW_INT:
          end= int10_to_str((long) *(uint32*) value, buff, 10);
          break;
        case SHOW_HAVE:
        {
          SHOW_COMP_OPTION tmp= *(SHOW_COMP_OPTION*) value;
          pos= show_comp_option_name[(int) tmp];
          end= strend(pos);
          break;
        }
        case SHOW_CHAR:
        {
          if (!(pos= value))
            pos= "";
          end= strend(pos);
          break;
        }
        case SHOW_CHAR_PTR:
        {
          if (!(pos= *(char**) value))
            pos= "";
          end= strend(pos);
          break;
        }
        case SHOW_KEY_CACHE_LONG:
          value= (char*) dflt_key_cache + (ulong)value;
          end= int10_to_str(*(long*) value, buff, 10);
          break;
        case SHOW_KEY_CACHE_LONGLONG:
          value= (char*) dflt_key_cache + (ulong)value;
          end= longlong10_to_str(*(longlong*) value, buff, 10);
          break;
        case SHOW_UNDEF:
          break;                                // Return empty string
        case SHOW_SYS:                          // Cannot happen
        default:
          DBUG_ASSERT(0);
          break;
        }
        restore_record(table, s->default_values);
        table->field[0]->store(name_buffer, strlen(name_buffer),
                               system_charset_info);
        table->field[1]->store(pos, (uint32) (end - pos), system_charset_info);
        if (schema_table_store_record(thd, table))
          DBUG_RETURN(TRUE);
      }
    }
  }
  DBUG_RETURN(FALSE);
}

/* collect status for all running threads */

void calc_sum_of_all_status(STATUS_VAR *to)
{
  DBUG_ENTER("calc_sum_of_all_status");

  /* Ensure that the threads are not killed during the loop */
  VOID(pthread_mutex_lock(&LOCK_thread_count));  // For unlink from list

  I_List_iterator<THD> it(threads);
  THD *tmp;

  /* Get global values as base */
  *to= global_status_var;

  /* Add to this status from existing threads */
  while ((tmp= it++))
    add_to_status(to, &tmp->status_var);

  VOID(pthread_mutex_unlock(&LOCK_thread_count));
  DBUG_VOID_RETURN;
}

LEX_STRING *make_lex_string(THD *thd, LEX_STRING *lex_str,
                            const char* str, uint length,
                            bool allocate_lex_string)
{
  MEM_ROOT *mem= thd->mem_root;

  if (allocate_lex_string)
    if (!(lex_str= (LEX_STRING *)thd->alloc(sizeof(LEX_STRING))))
      return 0;
  lex_str->str= strmake_root(mem, str, length);
  lex_str->length= length;
  return lex_str;
}

/* INFORMATION_SCHEMA name */
LEX_STRING information_schema_name= {(char*)"information_schema", 18};

/* This is only used internally, but we need it here as a forward reference */
extern ST_SCHEMA_TABLE schema_tables[];

typedef struct st_index_field_values
{
  const char *db_value, *table_value;
} INDEX_FIELD_VALUES;

/*
  Store record to I_S table, convert HEAP table
  to MyISAM if necessary

  SYNOPSIS
    schema_table_store_record()
    thd                   thread handler
    table                 Information schema table to be updated

  RETURN
    0                     success
    1                     error
*/
bool schema_table_store_record(THD *thd, TABLE *table)
{
  int error;
  if ((error= table->file->ha_write_row(table->record[0])))
  {
    if (create_myisam_from_heap(thd, table,
                                table->pos_in_table_list->schema_table_param,
                                error, 0))
      return 1;
  }
  return 0;
}

void get_index_field_values(LEX *lex, INDEX_FIELD_VALUES *index_field_values)
{
  const char *wild= lex->wild ? lex->wild->ptr() : NullS;
  switch (lex->sql_command) {
  case SQLCOM_SHOW_DATABASES:
    index_field_values->db_value= wild;
    break;
  case SQLCOM_SHOW_TABLES:
  case SQLCOM_SHOW_TABLE_STATUS:
  case SQLCOM_SHOW_TRIGGERS:
  case SQLCOM_SHOW_EVENTS:
    index_field_values->db_value= lex->select_lex.db;
    index_field_values->table_value= wild;
    break;
  default:
    index_field_values->db_value= NullS;
    index_field_values->table_value= NullS;
    break;
  }
}

int make_table_list(THD *thd, SELECT_LEX *sel,
                    char *db, char *table)
{
  Table_ident *table_ident;
  LEX_STRING ident_db, ident_table;

  ident_db.str= db;
  ident_db.length= strlen(db);
  ident_table.str= table;
  ident_table.length= strlen(table);
  table_ident= new Table_ident(thd, ident_db, ident_table, 1);
  sel->init_query();
  if (!sel->add_table_to_list(thd, table_ident, 0, 0, TL_READ,
                              (List<String> *) 0, (List<String> *) 0))
    return 1;
  return 0;
}

bool uses_only_table_name_fields(Item *item, TABLE_LIST *table)
{
  if (item->type() == Item::FUNC_ITEM)
  {
    Item_func *item_func= (Item_func*)item;
    Item **child;
    Item **item_end= (item_func->arguments()) + item_func->argument_count();
    for (child= item_func->arguments(); child != item_end; child++)
    {
      if (!uses_only_table_name_fields(*child, table))
        return 0;
    }
  }
  else if (item->type() == Item::FIELD_ITEM)
  {
    Item_field *item_field= (Item_field*)item;
    CHARSET_INFO *cs= system_charset_info;
    ST_SCHEMA_TABLE *schema_table= table->schema_table;
    ST_FIELD_INFO *field_info= schema_table->fields_info;
    const char *field_name1= schema_table->idx_field1 >= 0 ?
      field_info[schema_table->idx_field1].field_name : "";
    const char *field_name2= schema_table->idx_field2 >= 0 ?
      field_info[schema_table->idx_field2].field_name : "";
    if (table->table != item_field->field->table ||
        (cs->coll->strnncollsp(cs, (uchar *) field_name1, strlen(field_name1),
                               (uchar *) item_field->field_name,
                               strlen(item_field->field_name), 0) &&
         cs->coll->strnncollsp(cs, (uchar *) field_name2, strlen(field_name2),
                               (uchar *) item_field->field_name,
                               strlen(item_field->field_name), 0)))
      return 0;
  }
  else if (item->type() == Item::REF_ITEM)
    return uses_only_table_name_fields(item->real_item(), table);

  if (item->type() == Item::SUBSELECT_ITEM &&
      !item->const_item())
    return 0;

  return 1;
}

static COND * make_cond_for_info_schema(COND *cond, TABLE_LIST *table)
{
  if (!cond)
    return (COND*) 0;
  if (cond->type() == Item::COND_ITEM)
  {
    if (((Item_cond*) cond)->functype() == Item_func::COND_AND_FUNC)
    {
      /* Create new top level AND item */
      Item_cond_and *new_cond=new Item_cond_and;
      if (!new_cond)
        return (COND*) 0;
      List_iterator<Item> li(*((Item_cond*) cond)->argument_list());
      Item *item;
      while ((item=li++))
      {
        Item *fix= make_cond_for_info_schema(item, table);
        if (fix)
          new_cond->argument_list()->push_back(fix);
      }
      switch (new_cond->argument_list()->elements) {
      case 0:
        return (COND*) 0;
      case 1:
        return new_cond->argument_list()->head();
      default:
        new_cond->quick_fix_field();
        return new_cond;
      }
    }
    else
    {                                           // Or list
      Item_cond_or *new_cond=new Item_cond_or;
      if (!new_cond)
        return (COND*) 0;
      List_iterator<Item> li(*((Item_cond*) cond)->argument_list());
      Item *item;
      while ((item=li++))
      {
        Item *fix=make_cond_for_info_schema(item, table);
        if (!fix)
          return (COND*) 0;
        new_cond->argument_list()->push_back(fix);
      }
      new_cond->quick_fix_field();
      new_cond->top_level_item();
      return new_cond;
    }
  }
  if (!uses_only_table_name_fields(cond, table))
    return (COND*) 0;
  return cond;
}

enum enum_schema_tables get_schema_table_idx(ST_SCHEMA_TABLE *schema_table)
{
  return (enum enum_schema_tables) (schema_table - &schema_tables[0]);
}
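get_schema_table_idx() maps a descriptor pointer back to its enum value purely by pointer subtraction against the start of the static schema_tables[] array: an element's index doubles as its enum constant. A minimal illustration of the same trick, with hypothetical table contents:

```cpp
#include <cassert>

// An element's position in a static descriptor array doubles as its enum
// value, so a pointer into the array can be mapped back by subtraction.
// The names and enum here are hypothetical illustrations.
enum SchemaTableIdx { SCH_FIRST= 0, SCH_SECOND, SCH_THIRD };
struct SchemaTable { const char *name; };

static SchemaTable tables[]= { {"CHARACTER_SETS"}, {"COLLATIONS"}, {"COLUMNS"} };

static SchemaTableIdx idx_of(const SchemaTable *t)
{
  // Well-defined only while t points at an element of tables[].
  return (SchemaTableIdx) (t - &tables[0]);
}
```

This works because the enum values are declared in the same order as the array entries; reordering either one silently breaks the mapping.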

/*
  Create db names list. Information schema name is always first in list

  SYNOPSIS
    make_db_list()
    thd                   thread handler
    files                 list of db names
    wild                  wild string
    idx_field_vals        idx_field_vals->db_name contains db name or
                          wild string
    with_i_schema         returns 1 if we added 'IS' name to list
                          otherwise returns 0
    is_wild_value         if value is 1 then idx_field_vals->db_name is
                          a wild string, otherwise it's a db name

  RETURN
    1                     error
    0                     success
*/
int make_db_list(THD *thd, List<char> *files,
                 INDEX_FIELD_VALUES *idx_field_vals,
                 bool *with_i_schema, bool is_wild_value)
{
  LEX *lex= thd->lex;
  *with_i_schema= 0;
  get_index_field_values(lex, idx_field_vals);
  if (is_wild_value)
  {
    /*
      This part of the code is only for the SHOW DATABASES command.
      idx_field_vals->db_value can be 0 when we don't use
      a LIKE clause (see also the get_index_field_values() function)
    */
    if (!idx_field_vals->db_value ||
        !wild_case_compare(system_charset_info,
                           information_schema_name.str,
                           idx_field_vals->db_value))
    {
      *with_i_schema= 1;
      if (files->push_back(thd->strdup(information_schema_name.str)))
        return 1;
    }
    return mysql_find_files(thd, files, NullS, mysql_data_home,
                            idx_field_vals->db_value, 1);
  }

  /*
    This part of the code is for the SHOW TABLES and SHOW TABLE STATUS
    commands. idx_field_vals->db_value can't be 0 (see the
    get_index_field_values() function).
  */
  if (sql_command_flags[lex->sql_command] & CF_STATUS_COMMAND)
  {
    if (!my_strcasecmp(system_charset_info, information_schema_name.str,
                       idx_field_vals->db_value))
    {
      *with_i_schema= 1;
      return files->push_back(thd->strdup(information_schema_name.str));
    }
    return files->push_back(thd->strdup(idx_field_vals->db_value));
  }

  /*
    Create a list of existing databases. It is used in case
    of a select from an information schema table
  */
  if (files->push_back(thd->strdup(information_schema_name.str)))
    return 1;
  *with_i_schema= 1;
  return mysql_find_files(thd, files, NullS, mysql_data_home, NullS, 1);
}

int schema_tables_add(THD *thd, List<char> *files, const char *wild)
{
  ST_SCHEMA_TABLE *tmp_schema_table= schema_tables;

  for (; tmp_schema_table->table_name; tmp_schema_table++)
  {
    if (tmp_schema_table->hidden)
      continue;
    if (wild)
    {
      if (lower_case_table_names)
      {
        if (wild_case_compare(files_charset_info,
                              tmp_schema_table->table_name,
                              wild))
          continue;
      }
      else if (wild_compare(tmp_schema_table->table_name, wild, 0))
        continue;
    }
    if (files->push_back(thd->strdup(tmp_schema_table->table_name)))
      return 1;
  }
  return 0;
}

int get_all_tables(THD *thd, TABLE_LIST *tables, COND *cond)
{
  LEX *lex= thd->lex;
  TABLE *table= tables->table;
  SELECT_LEX *select_lex= &lex->select_lex;
  SELECT_LEX *old_all_select_lex= lex->all_selects_list;
  enum_sql_command save_sql_command= lex->sql_command;
  SELECT_LEX *lsel= tables->schema_select_lex;
  ST_SCHEMA_TABLE *schema_table= tables->schema_table;
  SELECT_LEX sel;
  INDEX_FIELD_VALUES idx_field_vals;
  char path[FN_REFLEN], *end, *base_name, *orig_base_name, *file_name;
  uint len;
  bool with_i_schema;
  enum enum_schema_tables schema_table_idx;
  List<char> bases;
  List_iterator_fast<char> it(bases);
  COND *partial_cond;
  Security_context *sctx= thd->security_ctx;
  uint derived_tables= lex->derived_tables;
  int error= 1;
  enum legacy_db_type not_used;
  Open_tables_state open_tables_state_backup;
  bool save_view_prepare_mode= lex->view_prepare_mode;
  Query_tables_list query_tables_list_backup;
  lex->view_prepare_mode= TRUE;
  DBUG_ENTER("get_all_tables");
  LINT_INIT(end);
  LINT_INIT(len);

  lex->reset_n_backup_query_tables_list(&query_tables_list_backup);

  /*
    We should not introduce deadlocks even if we already have some
    tables open and locked, since we won't lock tables which we will
    open and will ignore possible name-locks for these tables.
  */
  thd->reset_n_backup_open_tables_state(&open_tables_state_backup);

  if (lsel)
  {
    TABLE_LIST *show_table_list= (TABLE_LIST*) lsel->table_list.first;
    bool res;

    lex->all_selects_list= lsel;
    /*
      Restore thd->temporary_tables to be able to process
      temporary tables (only for 'show index' & 'show columns').
      This should be changed when processing of temporary tables for
      I_S tables will be done.
    */
    thd->temporary_tables= open_tables_state_backup.temporary_tables;
    /*
      Let us set a fake sql_command so views won't try to merge
      themselves into the main statement. If we don't do this,
      SELECT * from information_schema.xxxx will cause problems.
      SQLCOM_SHOW_FIELDS is used because it satisfies 'only_view_structure()'
    */
    lex->sql_command= SQLCOM_SHOW_FIELDS;
    res= open_normal_and_derived_tables(thd, show_table_list,
                                        MYSQL_LOCK_IGNORE_FLUSH);
    lex->sql_command= save_sql_command;
    /*
      get_all_tables() returns 1 on failure and 0 on success, thus
      return only these and not the result code of ::process_table()

      We should use show_table_list->alias instead of
      show_table_list->table_name because table_name
      could be changed during opening of I_S tables. It's safe
      to use alias because alias contains the original table name
      in this case (this part of the code is used only for the
      'show columns' & 'show statistics' commands).
    */
    error= test(schema_table->process_table(thd, show_table_list,
                                            table, res,
                                            (show_table_list->view ?
                                             show_table_list->view_db.str :
                                             show_table_list->db),
                                            show_table_list->alias));
    thd->temporary_tables= 0;
    close_tables_for_reopen(thd, &show_table_list);
    goto err;
  }

  schema_table_idx= get_schema_table_idx(schema_table);

  if (make_db_list(thd, &bases, &idx_field_vals,
                   &with_i_schema, 0))
    goto err;

  partial_cond= make_cond_for_info_schema(cond, tables);
  it.rewind(); /* To get access to new elements in bases list */

  /*
    Below we generate an error for a non-existing database
    (to save the old behaviour for SHOW TABLES FROM db)
  */
  while ((orig_base_name= base_name= it++) ||
         ((sql_command_flags[save_sql_command] & CF_SHOW_TABLE_COMMAND) &&
          (base_name= select_lex->db) && !bases.elements))
  {
#ifndef NO_EMBEDDED_ACCESS_CHECKS
    if (!check_access(thd,SELECT_ACL, base_name,
                      &thd->col_access, 0, 1, with_i_schema) ||
        sctx->master_access & (DB_ACLS | SHOW_DB_ACL) ||
        acl_get(sctx->host, sctx->ip, sctx->priv_user, base_name,0) ||
        (grant_option && !check_grant_db(thd, base_name)))
#endif
    {
      List<char> files;
      if (with_i_schema)                      // information schema table names
      {
        if (schema_tables_add(thd, &files, idx_field_vals.table_value))
          goto err;
      }
      else
      {
        len= build_table_filename(path, sizeof(path), base_name, "", "");
        end= path + len;
        len= FN_LEN - len;
        if (mysql_find_files(thd, &files, base_name,
                             path, idx_field_vals.table_value, 0))
          goto err;
        if (lower_case_table_names)
          orig_base_name= thd->strdup(base_name);
      }

      List_iterator_fast<char> it_files(files);
      while ((file_name= it_files++))
      {
        restore_record(table, s->default_values);
        table->field[schema_table->idx_field1]->
          store(base_name, strlen(base_name), system_charset_info);
        table->field[schema_table->idx_field2]->
          store(file_name, strlen(file_name), system_charset_info);
        if (!partial_cond || partial_cond->val_int())
        {
          if (schema_table_idx == SCH_TABLE_NAMES)
          {
            if (lex->verbose ||
                (sql_command_flags[save_sql_command] & CF_STATUS_COMMAND) == 0)
            {
              if (with_i_schema)
              {
                table->field[3]->store(STRING_WITH_LEN("SYSTEM VIEW"),
                                       system_charset_info);
              }
              else
              {
                my_snprintf(end, len, "/%s%s", file_name, reg_ext);
                switch (mysql_frm_type(thd, path, &not_used)) {
                case FRMTYPE_ERROR:
                  table->field[3]->store(STRING_WITH_LEN("ERROR"),
                                         system_charset_info);
                  break;
                case FRMTYPE_TABLE:
                  table->field[3]->store(STRING_WITH_LEN("BASE TABLE"),
                                         system_charset_info);
                  break;
                case FRMTYPE_VIEW:
                  table->field[3]->store(STRING_WITH_LEN("VIEW"),
                                         system_charset_info);
                  break;
                default:
                  DBUG_ASSERT(0);
                }
              }
            }
            if (schema_table_store_record(thd, table))
              goto err;
          }
          else
          {
            int res;
            /*
              Set the parent lex of 'sel' because it is needed by
              sel.init_query() which is called inside make_table_list.
            */
            sel.parent_lex= lex;
            if (make_table_list(thd, &sel, base_name, file_name))
              goto err;
            TABLE_LIST *show_table_list= (TABLE_LIST*) sel.table_list.first;
            lex->all_selects_list= &sel;
            lex->derived_tables= 0;
            lex->sql_command= SQLCOM_SHOW_FIELDS;
            res= open_normal_and_derived_tables(thd, show_table_list,
                                                MYSQL_LOCK_IGNORE_FLUSH);
            lex->sql_command= save_sql_command;
            /*
              We should use show_table_list->alias instead of
              show_table_list->table_name because table_name
              could be changed during opening of I_S tables. It's safe
              to use alias because alias contains the original table name
              in this case.
            */
            res= schema_table->process_table(thd, show_table_list, table,
                                             res, orig_base_name,
                                             show_table_list->alias);
            close_tables_for_reopen(thd, &show_table_list);
            DBUG_ASSERT(!lex->query_tables_own_last);
            if (res)
              goto err;
          }
        }
      }
      /*
        If we have the information schema, it's always the first table and
        only the first table. Reset for other tables.
      */
      with_i_schema= 0;
    }
  }

  error= 0;
err:
  thd->restore_backup_open_tables_state(&open_tables_state_backup);
  lex->restore_backup_query_tables_list(&query_tables_list_backup);
  lex->derived_tables= derived_tables;
  lex->all_selects_list= old_all_select_lex;
  lex->view_prepare_mode= save_view_prepare_mode;
  lex->sql_command= save_sql_command;
  DBUG_RETURN(error);
}

bool store_schema_shemata(THD* thd, TABLE *table, const char *db_name,
                          CHARSET_INFO *cs)
{
  restore_record(table, s->default_values);
  table->field[1]->store(db_name, strlen(db_name), system_charset_info);
  table->field[2]->store(cs->csname, strlen(cs->csname), system_charset_info);
  table->field[3]->store(cs->name, strlen(cs->name), system_charset_info);
  return schema_table_store_record(thd, table);
}

int fill_schema_shemata(THD *thd, TABLE_LIST *tables, COND *cond)
{
  char path[FN_REFLEN];
  bool found_libchar;
  INDEX_FIELD_VALUES idx_field_vals;
  List<char> files;
  char *file_name;
  uint length;
  bool with_i_schema;
  HA_CREATE_INFO create;
  TABLE *table= tables->table;
  Security_context *sctx= thd->security_ctx;
  DBUG_ENTER("fill_schema_shemata");

  if (make_db_list(thd, &files, &idx_field_vals,
                   &with_i_schema, 1))
    DBUG_RETURN(1);

  List_iterator_fast<char> it(files);
  while ((file_name=it++))
  {
    if (with_i_schema)       // information schema name is always first in list
    {
      if (store_schema_shemata(thd, table, file_name,
                               system_charset_info))
        DBUG_RETURN(1);
      with_i_schema= 0;
      continue;
    }
#ifndef NO_EMBEDDED_ACCESS_CHECKS
    if (sctx->master_access & (DB_ACLS | SHOW_DB_ACL) ||
        acl_get(sctx->host, sctx->ip, sctx->priv_user, file_name,0) ||
        (grant_option && !check_grant_db(thd, file_name)))
#endif
    {
      length= build_table_filename(path, sizeof(path), file_name, "", "");
      found_libchar= 0;
      if (length && path[length-1] == FN_LIBCHAR)
      {
        found_libchar= 1;
        path[length-1]=0;                       // remove ending '\'
      }
      if (found_libchar)
        path[length-1]= FN_LIBCHAR;
      strmov(path+length, MY_DB_OPT_FILE);
      load_db_opt(thd, path, &create);
      if (store_schema_shemata(thd, table, file_name,
                               create.default_table_charset))
        DBUG_RETURN(1);
    }
  }
  DBUG_RETURN(0);
}

static int get_schema_tables_record(THD *thd, struct st_table_list *tables,
                                    TABLE *table, bool res,
                                    const char *base_name,
                                    const char *file_name)
{
  const char *tmp_buff;
  TIME time;
  CHARSET_INFO *cs= system_charset_info;
  DBUG_ENTER("get_schema_tables_record");

  restore_record(table, s->default_values);
  table->field[1]->store(base_name, strlen(base_name), cs);
  table->field[2]->store(file_name, strlen(file_name), cs);
  if (res)
  {
    /*
      there were errors during opening of tables
    */
    const char *error= thd->net.last_error;
    if (tables->view)
      table->field[3]->store(STRING_WITH_LEN("VIEW"), cs);
    else if (tables->schema_table)
      table->field[3]->store(STRING_WITH_LEN("SYSTEM VIEW"), cs);
    else
      table->field[3]->store(STRING_WITH_LEN("BASE TABLE"), cs);
    table->field[20]->store(error, strlen(error), cs);
    thd->clear_error();
  }
  else if (tables->view)
  {
    table->field[3]->store(STRING_WITH_LEN("VIEW"), cs);
    table->field[20]->store(STRING_WITH_LEN("VIEW"), cs);
  }
  else
  {
    TABLE *show_table= tables->table;
    TABLE_SHARE *share= show_table->s;
    handler *file= show_table->file;

    file->info(HA_STATUS_VARIABLE | HA_STATUS_TIME | HA_STATUS_AUTO |
               HA_STATUS_NO_LOCK);
    if (share->tmp_table == SYSTEM_TMP_TABLE)
      table->field[3]->store(STRING_WITH_LEN("SYSTEM VIEW"), cs);
    else if (share->tmp_table)
      table->field[3]->store(STRING_WITH_LEN("LOCAL TEMPORARY"), cs);
    else
      table->field[3]->store(STRING_WITH_LEN("BASE TABLE"), cs);

    for (int i= 4; i < 20; i++)
    {
      if (i == 7 || (i > 12 && i < 17) || i == 18)
        continue;
      table->field[i]->set_notnull();
    }
    tmp_buff= file->table_type();
    table->field[4]->store(tmp_buff, strlen(tmp_buff), cs);
    table->field[5]->store((longlong) share->frm_version, TRUE);
    enum row_type row_type = file->get_row_type();
    switch (row_type) {
    case ROW_TYPE_NOT_USED:
    case ROW_TYPE_DEFAULT:
      tmp_buff= ((share->db_options_in_use &
                  HA_OPTION_COMPRESS_RECORD) ? "Compressed" :
                 (share->db_options_in_use & HA_OPTION_PACK_RECORD) ?
                 "Dynamic" : "Fixed");
      break;
    case ROW_TYPE_FIXED:
      tmp_buff= "Fixed";
      break;
    case ROW_TYPE_DYNAMIC:
      tmp_buff= "Dynamic";
      break;
    case ROW_TYPE_COMPRESSED:
      tmp_buff= "Compressed";
      break;
    case ROW_TYPE_REDUNDANT:
      tmp_buff= "Redundant";
      break;
    case ROW_TYPE_COMPACT:
      tmp_buff= "Compact";
      break;
    case ROW_TYPE_PAGES:
      tmp_buff= "Paged";
      break;
    }
    table->field[6]->store(tmp_buff, strlen(tmp_buff), cs);
    if (!tables->schema_table)
    {
      table->field[7]->store((longlong) file->stats.records, TRUE);
      table->field[7]->set_notnull();
    }
    table->field[8]->store((longlong) file->stats.mean_rec_length, TRUE);
    table->field[9]->store((longlong) file->stats.data_file_length, TRUE);
    if (file->stats.max_data_file_length)
    {
      table->field[10]->store((longlong) file->stats.max_data_file_length,
                              TRUE);
    }
    table->field[11]->store((longlong) file->stats.index_file_length, TRUE);
    table->field[12]->store((longlong) file->stats.delete_length, TRUE);
    if (show_table->found_next_number_field)
    {
      table->field[13]->store((longlong) file->stats.auto_increment_value,
                              TRUE);
      table->field[13]->set_notnull();
    }
    if (file->stats.create_time)
    {
      thd->variables.time_zone->gmt_sec_to_TIME(&time,
                                                file->stats.create_time);
      table->field[14]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
      table->field[14]->set_notnull();
    }
    if (file->stats.update_time)
    {
      thd->variables.time_zone->gmt_sec_to_TIME(&time,
                                                file->stats.update_time);
      table->field[15]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
      table->field[15]->set_notnull();
    }
    if (file->stats.check_time)
    {
      thd->variables.time_zone->gmt_sec_to_TIME(&time, file->stats.check_time);
      table->field[16]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
      table->field[16]->set_notnull();
    }
    tmp_buff= (share->table_charset ?
               share->table_charset->name : "default");
    table->field[17]->store(tmp_buff, strlen(tmp_buff), cs);
    if (file->ha_table_flags() & (ulong) HA_HAS_CHECKSUM)
    {
      table->field[18]->store((longlong) file->checksum(), TRUE);
      table->field[18]->set_notnull();
    }

    char option_buff[350],*ptr;
    ptr=option_buff;
    if (share->min_rows)
    {
      ptr=strmov(ptr," min_rows=");
      ptr=longlong10_to_str(share->min_rows,ptr,10);
    }
    if (share->max_rows)
    {
      ptr=strmov(ptr," max_rows=");
      ptr=longlong10_to_str(share->max_rows,ptr,10);
    }
    if (share->avg_row_length)
    {
      ptr=strmov(ptr," avg_row_length=");
      ptr=longlong10_to_str(share->avg_row_length,ptr,10);
    }
    if (share->db_create_options & HA_OPTION_PACK_KEYS)
      ptr=strmov(ptr," pack_keys=1");
    if (share->db_create_options & HA_OPTION_NO_PACK_KEYS)
      ptr=strmov(ptr," pack_keys=0");
    if (share->db_create_options & HA_OPTION_CHECKSUM)
      ptr=strmov(ptr," checksum=1");
    if (share->db_create_options & HA_OPTION_DELAY_KEY_WRITE)
      ptr=strmov(ptr," delay_key_write=1");
    if (share->row_type != ROW_TYPE_DEFAULT)
      ptr=strxmov(ptr, " row_format=",
                  ha_row_type[(uint) share->row_type],
                  NullS);
#ifdef WITH_PARTITION_STORAGE_ENGINE
    if (show_table->s->db_type == &partition_hton &&
        show_table->part_info != NULL &&
        show_table->part_info->no_parts > 0)
      ptr= strmov(ptr, " partitioned");
#endif
    table->field[19]->store(option_buff+1,
                            (ptr == option_buff ? 0 :
                             (uint) (ptr-option_buff)-1), cs);
    {
      char *comment;

      comment= show_table->file->update_table_comment(share->comment);
      if (comment)
      {
        table->field[20]->store(comment, strlen(comment), cs);
        if (comment != share->comment)
          my_free(comment, MYF(0));
      }
    }
  }
  DBUG_RETURN(schema_table_store_record(thd, table));
  2592. }

static int get_schema_column_record(THD *thd, struct st_table_list *tables,
                                    TABLE *table, bool res,
                                    const char *base_name,
                                    const char *file_name)
{
  LEX *lex= thd->lex;
  const char *wild= lex->wild ? lex->wild->ptr() : NullS;
  CHARSET_INFO *cs= system_charset_info;
  TABLE *show_table;
  handler *file;
  Field **ptr,*field;
  int count;
  uint base_name_length, file_name_length;
  DBUG_ENTER("get_schema_column_record");

  if (res)
  {
    if (lex->sql_command != SQLCOM_SHOW_FIELDS)
    {
      /*
        I.e. we are in SELECT FROM INFORMATION_SCHEMA.COLUMNS
        rather than in SHOW COLUMNS
      */
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
      thd->clear_error();
      res= 0;
    }
    DBUG_RETURN(res);
  }

  show_table= tables->table;
  file= show_table->file;
  count= 0;
  file->info(HA_STATUS_VARIABLE | HA_STATUS_NO_LOCK);
  restore_record(show_table, s->default_values);
  base_name_length= strlen(base_name);
  file_name_length= strlen(file_name);
  show_table->use_all_columns();                // Required for default
  for (ptr=show_table->field; (field= *ptr) ; ptr++)
  {
    const char *tmp_buff;
    byte *pos;
    bool is_blob;
    uint flags=field->flags;
    char tmp[MAX_FIELD_WIDTH];
    char tmp1[MAX_FIELD_WIDTH];
    String type(tmp,sizeof(tmp), system_charset_info);
    char *end;
    int decimals, field_length;
    if (wild && wild[0] &&
        wild_case_compare(system_charset_info, field->field_name,wild))
      continue;

    flags= field->flags;
    count++;
    /* Get default row, with all NULL fields set to NULL */
    restore_record(table, s->default_values);

#ifndef NO_EMBEDDED_ACCESS_CHECKS
    uint col_access;
    check_access(thd,SELECT_ACL | EXTRA_ACL, base_name,
                 &tables->grant.privilege, 0, 0, test(tables->schema_table));
    col_access= get_column_grant(thd, &tables->grant,
                                 base_name, file_name,
                                 field->field_name) & COL_ACLS;
    if (lex->sql_command != SQLCOM_SHOW_FIELDS &&
        !tables->schema_table && !col_access)
      continue;
    end= tmp;
    for (uint bitnr=0; col_access ; col_access>>=1,bitnr++)
    {
      if (col_access & 1)
      {
        *end++=',';
        end=strmov(end,grant_types.type_names[bitnr]);
      }
    }
    table->field[17]->store(tmp+1,end == tmp ? 0 : (uint) (end-tmp-1), cs);
#endif
    table->field[1]->store(base_name, base_name_length, cs);
    table->field[2]->store(file_name, file_name_length, cs);
    table->field[3]->store(field->field_name, strlen(field->field_name),
                           cs);
    table->field[4]->store((longlong) count, TRUE);
    field->sql_type(type);
    table->field[14]->store(type.ptr(), type.length(), cs);
    tmp_buff= strchr(type.ptr(), '(');
    table->field[7]->store(type.ptr(),
                           (tmp_buff ? tmp_buff - type.ptr() :
                            type.length()), cs);

    if (show_table->timestamp_field == field &&
        field->unireg_check != Field::TIMESTAMP_UN_FIELD)
    {
      table->field[5]->store(STRING_WITH_LEN("CURRENT_TIMESTAMP"), cs);
      table->field[5]->set_notnull();
    }
    else if (field->unireg_check != Field::NEXT_NUMBER &&
             !field->is_null() &&
             !(field->flags & NO_DEFAULT_VALUE_FLAG))
    {
      String def(tmp1,sizeof(tmp1), cs);
      type.set(tmp, sizeof(tmp), field->charset());
      field->val_str(&type);
      uint dummy_errors;
      def.copy(type.ptr(), type.length(), type.charset(), cs, &dummy_errors);
      table->field[5]->store(def.ptr(), def.length(), def.charset());
      table->field[5]->set_notnull();
    }
    else if (field->unireg_check == Field::NEXT_NUMBER ||
             lex->sql_command != SQLCOM_SHOW_FIELDS ||
             field->maybe_null())
      table->field[5]->set_null();              // Null as default
    else
    {
      table->field[5]->store("",0, cs);
      table->field[5]->set_notnull();
    }
    pos=(byte*) ((flags & NOT_NULL_FLAG) &&
                 field->type() != FIELD_TYPE_TIMESTAMP ?
                 "NO" : "YES");
    table->field[6]->store((const char*) pos,
                           strlen((const char*) pos), cs);
    is_blob= (field->type() == FIELD_TYPE_BLOB);
    if (field->has_charset() || is_blob ||
        field->real_type() == MYSQL_TYPE_VARCHAR ||  // For varbinary type
        field->real_type() == MYSQL_TYPE_STRING)     // For binary type
    {
      uint32 octet_max_length= field->max_length();
      if (is_blob && octet_max_length != (uint32) 4294967295U)
        octet_max_length /= field->charset()->mbmaxlen;
      longlong char_max_len= is_blob ?
        (longlong) octet_max_length / field->charset()->mbminlen :
        (longlong) octet_max_length / field->charset()->mbmaxlen;
      table->field[8]->store(char_max_len, TRUE);
      table->field[8]->set_notnull();
      table->field[9]->store((longlong) octet_max_length, TRUE);
      table->field[9]->set_notnull();
    }

    /*
      Calculate field_length and decimals.
      They are set to -1 if they should not be set (we should return NULL)
    */
    decimals= field->decimals();
    switch (field->type()) {
    case FIELD_TYPE_NEWDECIMAL:
      field_length= ((Field_new_decimal*) field)->precision;
      break;
    case FIELD_TYPE_DECIMAL:
      field_length= field->field_length - (decimals ? 2 : 1);
      break;
    case FIELD_TYPE_TINY:
    case FIELD_TYPE_SHORT:
    case FIELD_TYPE_LONG:
    case FIELD_TYPE_LONGLONG:
    case FIELD_TYPE_INT24:
      field_length= field->max_length() - 1;
      break;
    case FIELD_TYPE_BIT:
      field_length= field->max_length();
      decimals= -1;                             // return NULL
      break;
    case FIELD_TYPE_FLOAT:
    case FIELD_TYPE_DOUBLE:
      field_length= field->field_length;
      if (decimals == NOT_FIXED_DEC)
        decimals= -1;                           // return NULL
      break;
    default:
      field_length= decimals= -1;
      break;
    }

    if (field_length >= 0)
    {
      table->field[10]->store((longlong) field_length, TRUE);
      table->field[10]->set_notnull();
    }
    if (decimals >= 0)
    {
      table->field[11]->store((longlong) decimals, TRUE);
      table->field[11]->set_notnull();
    }

    if (field->has_charset())
    {
      pos=(byte*) field->charset()->csname;
      table->field[12]->store((const char*) pos,
                              strlen((const char*) pos), cs);
      table->field[12]->set_notnull();
      pos=(byte*) field->charset()->name;
      table->field[13]->store((const char*) pos,
                              strlen((const char*) pos), cs);
      table->field[13]->set_notnull();
    }
    pos=(byte*) ((field->flags & PRI_KEY_FLAG) ? "PRI" :
                 (field->flags & UNIQUE_KEY_FLAG) ? "UNI" :
                 (field->flags & MULTIPLE_KEY_FLAG) ? "MUL":"");
    table->field[15]->store((const char*) pos,
                            strlen((const char*) pos), cs);

    end= tmp;
    if (field->unireg_check == Field::NEXT_NUMBER)
      end=strmov(tmp,"auto_increment");
    table->field[16]->store(tmp, (uint) (end-tmp), cs);

    table->field[18]->store(field->comment.str, field->comment.length, cs);
    if (schema_table_store_record(thd, table))
      DBUG_RETURN(1);
  }
  DBUG_RETURN(0);
}

int fill_schema_charsets(THD *thd, TABLE_LIST *tables, COND *cond)
{
  CHARSET_INFO **cs;
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  TABLE *table= tables->table;
  CHARSET_INFO *scs= system_charset_info;

  for (cs= all_charsets ; cs < all_charsets+255 ; cs++)
  {
    CHARSET_INFO *tmp_cs= cs[0];
    if (tmp_cs && (tmp_cs->state & MY_CS_PRIMARY) &&
        (tmp_cs->state & MY_CS_AVAILABLE) &&
        !(tmp_cs->state & MY_CS_HIDDEN) &&
        !(wild && wild[0] &&
          wild_case_compare(scs, tmp_cs->csname,wild)))
    {
      const char *comment;
      restore_record(table, s->default_values);
      table->field[0]->store(tmp_cs->csname, strlen(tmp_cs->csname), scs);
      table->field[1]->store(tmp_cs->name, strlen(tmp_cs->name), scs);
      comment= tmp_cs->comment ? tmp_cs->comment : "";
      table->field[2]->store(comment, strlen(comment), scs);
      table->field[3]->store((longlong) tmp_cs->mbmaxlen, TRUE);
      if (schema_table_store_record(thd, table))
        return 1;
    }
  }
  return 0;
}

static my_bool iter_schema_engines(THD *thd, st_plugin_int *plugin,
                                   void *ptable)
{
  TABLE *table= (TABLE *) ptable;
  handlerton *hton= (handlerton *)plugin->data;
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  CHARSET_INFO *scs= system_charset_info;
  DBUG_ENTER("iter_schema_engines");

  if (!(hton->flags & HTON_HIDDEN))
  {
    if (!(wild && wild[0] &&
          wild_case_compare(scs, plugin->name.str,wild)))
    {
      LEX_STRING state[2]= {{(char*) STRING_WITH_LEN("ENABLED")},
                            {(char*) STRING_WITH_LEN("DISABLED")}};
      LEX_STRING yesno[2]= {{(char*) STRING_WITH_LEN("NO")},
                            {(char*) STRING_WITH_LEN("YES")}};
      LEX_STRING *tmp;
      restore_record(table, s->default_values);

      table->field[0]->store(plugin->name.str, plugin->name.length, scs);
      tmp= &state[test(hton->state)];
      table->field[1]->store(tmp->str, tmp->length, scs);
      table->field[2]->store(plugin->plugin->descr,
                             strlen(plugin->plugin->descr), scs);
      tmp= &yesno[test(hton->commit)];
      table->field[3]->store(tmp->str, tmp->length, scs);
      tmp= &yesno[test(hton->prepare)];
      table->field[4]->store(tmp->str, tmp->length, scs);
      tmp= &yesno[test(hton->savepoint_set)];
      table->field[5]->store(tmp->str, tmp->length, scs);
      if (schema_table_store_record(thd, table))
        DBUG_RETURN(1);
    }
  }
  DBUG_RETURN(0);
}

int fill_schema_engines(THD *thd, TABLE_LIST *tables, COND *cond)
{
  return plugin_foreach(thd, iter_schema_engines,
                        MYSQL_STORAGE_ENGINE_PLUGIN, tables->table);
}

int fill_schema_collation(THD *thd, TABLE_LIST *tables, COND *cond)
{
  CHARSET_INFO **cs;
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  TABLE *table= tables->table;
  CHARSET_INFO *scs= system_charset_info;
  for (cs= all_charsets ; cs < all_charsets+255 ; cs++ )
  {
    CHARSET_INFO **cl;
    CHARSET_INFO *tmp_cs= cs[0];
    if (!tmp_cs || !(tmp_cs->state & MY_CS_AVAILABLE) ||
        (tmp_cs->state & MY_CS_HIDDEN) ||
        !(tmp_cs->state & MY_CS_PRIMARY))
      continue;
    for (cl= all_charsets; cl < all_charsets+255 ;cl ++)
    {
      CHARSET_INFO *tmp_cl= cl[0];
      if (!tmp_cl || !(tmp_cl->state & MY_CS_AVAILABLE) ||
          !my_charset_same(tmp_cs, tmp_cl))
        continue;
      if (!(wild && wild[0] &&
            wild_case_compare(scs, tmp_cl->name,wild)))
      {
        const char *tmp_buff;
        restore_record(table, s->default_values);
        table->field[0]->store(tmp_cl->name, strlen(tmp_cl->name), scs);
        table->field[1]->store(tmp_cl->csname , strlen(tmp_cl->csname), scs);
        table->field[2]->store((longlong) tmp_cl->number, TRUE);
        tmp_buff= (tmp_cl->state & MY_CS_PRIMARY) ? "Yes" : "";
        table->field[3]->store(tmp_buff, strlen(tmp_buff), scs);
        tmp_buff= (tmp_cl->state & MY_CS_COMPILED)? "Yes" : "";
        table->field[4]->store(tmp_buff, strlen(tmp_buff), scs);
        table->field[5]->store((longlong) tmp_cl->strxfrm_multiply, TRUE);
        if (schema_table_store_record(thd, table))
          return 1;
      }
    }
  }
  return 0;
}

int fill_schema_coll_charset_app(THD *thd, TABLE_LIST *tables, COND *cond)
{
  CHARSET_INFO **cs;
  TABLE *table= tables->table;
  CHARSET_INFO *scs= system_charset_info;
  for (cs= all_charsets ; cs < all_charsets+255 ; cs++ )
  {
    CHARSET_INFO **cl;
    CHARSET_INFO *tmp_cs= cs[0];
    if (!tmp_cs || !(tmp_cs->state & MY_CS_AVAILABLE) ||
        !(tmp_cs->state & MY_CS_PRIMARY))
      continue;
    for (cl= all_charsets; cl < all_charsets+255 ;cl ++)
    {
      CHARSET_INFO *tmp_cl= cl[0];
      if (!tmp_cl || !(tmp_cl->state & MY_CS_AVAILABLE) ||
          !my_charset_same(tmp_cs,tmp_cl))
        continue;
      restore_record(table, s->default_values);
      table->field[0]->store(tmp_cl->name, strlen(tmp_cl->name), scs);
      table->field[1]->store(tmp_cl->csname , strlen(tmp_cl->csname), scs);
      if (schema_table_store_record(thd, table))
        return 1;
    }
  }
  return 0;
}

bool store_schema_proc(THD *thd, TABLE *table, TABLE *proc_table,
                       const char *wild, bool full_access, const char *sp_user)
{
  String tmp_string;
  String sp_db, sp_name, definer;
  TIME time;
  LEX *lex= thd->lex;
  CHARSET_INFO *cs= system_charset_info;
  get_field(thd->mem_root, proc_table->field[0], &sp_db);
  get_field(thd->mem_root, proc_table->field[1], &sp_name);
  get_field(thd->mem_root, proc_table->field[11], &definer);
  if (!full_access)
    full_access= !strcmp(sp_user, definer.ptr());
  if (!full_access && check_some_routine_access(thd, sp_db.ptr(),
                                                sp_name.ptr(),
                                                proc_table->field[2]->
                                                val_int() ==
                                                TYPE_ENUM_PROCEDURE))
    return 0;

  if ((lex->sql_command == SQLCOM_SHOW_STATUS_PROC &&
       proc_table->field[2]->val_int() == TYPE_ENUM_PROCEDURE) ||
      (lex->sql_command == SQLCOM_SHOW_STATUS_FUNC &&
       proc_table->field[2]->val_int() == TYPE_ENUM_FUNCTION) ||
      (sql_command_flags[lex->sql_command] & CF_STATUS_COMMAND) == 0)
  {
    restore_record(table, s->default_values);
    if (!wild || !wild[0] || !wild_compare(sp_name.ptr(), wild, 0))
    {
      int enum_idx= proc_table->field[5]->val_int();
      table->field[3]->store(sp_name.ptr(), sp_name.length(), cs);
      get_field(thd->mem_root, proc_table->field[3], &tmp_string);
      table->field[0]->store(tmp_string.ptr(), tmp_string.length(), cs);
      table->field[2]->store(sp_db.ptr(), sp_db.length(), cs);
      get_field(thd->mem_root, proc_table->field[2], &tmp_string);
      table->field[4]->store(tmp_string.ptr(), tmp_string.length(), cs);
      if (proc_table->field[2]->val_int() == TYPE_ENUM_FUNCTION)
      {
        get_field(thd->mem_root, proc_table->field[9], &tmp_string);
        table->field[5]->store(tmp_string.ptr(), tmp_string.length(), cs);
        table->field[5]->set_notnull();
      }
      if (full_access)
      {
        get_field(thd->mem_root, proc_table->field[10], &tmp_string);
        table->field[7]->store(tmp_string.ptr(), tmp_string.length(), cs);
      }
      table->field[6]->store(STRING_WITH_LEN("SQL"), cs);
      table->field[10]->store(STRING_WITH_LEN("SQL"), cs);
      get_field(thd->mem_root, proc_table->field[6], &tmp_string);
      table->field[11]->store(tmp_string.ptr(), tmp_string.length(), cs);
      table->field[12]->store(sp_data_access_name[enum_idx].str,
                              sp_data_access_name[enum_idx].length , cs);
      get_field(thd->mem_root, proc_table->field[7], &tmp_string);
      table->field[14]->store(tmp_string.ptr(), tmp_string.length(), cs);
      bzero((char *)&time, sizeof(time));
      ((Field_timestamp *) proc_table->field[12])->get_time(&time);
      table->field[15]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
      bzero((char *)&time, sizeof(time));
      ((Field_timestamp *) proc_table->field[13])->get_time(&time);
      table->field[16]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
      get_field(thd->mem_root, proc_table->field[14], &tmp_string);
      table->field[17]->store(tmp_string.ptr(), tmp_string.length(), cs);
      get_field(thd->mem_root, proc_table->field[15], &tmp_string);
      table->field[18]->store(tmp_string.ptr(), tmp_string.length(), cs);
      table->field[19]->store(definer.ptr(), definer.length(), cs);
      return schema_table_store_record(thd, table);
    }
  }
  return 0;
}

int fill_schema_proc(THD *thd, TABLE_LIST *tables, COND *cond)
{
  TABLE *proc_table;
  TABLE_LIST proc_tables;
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  int res= 0;
  TABLE *table= tables->table;
  bool full_access;
  char definer[USER_HOST_BUFF_SIZE];
  Open_tables_state open_tables_state_backup;
  DBUG_ENTER("fill_schema_proc");

  strxmov(definer, thd->security_ctx->priv_user, "@",
          thd->security_ctx->priv_host, NullS);
  /* We use this TABLE_LIST instance only for checking of privileges. */
  bzero((char*) &proc_tables,sizeof(proc_tables));
  proc_tables.db= (char*) "mysql";
  proc_tables.db_length= 5;
  proc_tables.table_name= proc_tables.alias= (char*) "proc";
  proc_tables.table_name_length= 4;
  proc_tables.lock_type= TL_READ;
  full_access= !check_table_access(thd, SELECT_ACL, &proc_tables, 1);
  if (!(proc_table= open_proc_table_for_read(thd, &open_tables_state_backup)))
  {
    DBUG_RETURN(1);
  }
  proc_table->file->ha_index_init(0, 1);
  if ((res= proc_table->file->index_first(proc_table->record[0])))
  {
    res= (res == HA_ERR_END_OF_FILE) ? 0 : 1;
    goto err;
  }
  if (store_schema_proc(thd, table, proc_table, wild, full_access, definer))
  {
    res= 1;
    goto err;
  }
  while (!proc_table->file->index_next(proc_table->record[0]))
  {
    if (store_schema_proc(thd, table, proc_table, wild, full_access, definer))
    {
      res= 1;
      goto err;
    }
  }

err:
  proc_table->file->ha_index_end();
  close_proc_table(thd, &open_tables_state_backup);
  DBUG_RETURN(res);
}

static int get_schema_stat_record(THD *thd, struct st_table_list *tables,
                                  TABLE *table, bool res,
                                  const char *base_name,
                                  const char *file_name)
{
  CHARSET_INFO *cs= system_charset_info;
  DBUG_ENTER("get_schema_stat_record");
  if (res)
  {
    if (thd->lex->sql_command != SQLCOM_SHOW_KEYS)
    {
      /*
        I.e. we are in SELECT FROM INFORMATION_SCHEMA.STATISTICS
        rather than in SHOW KEYS
      */
      if (!tables->view)
        push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                     thd->net.last_errno, thd->net.last_error);
      thd->clear_error();
      res= 0;
    }
    DBUG_RETURN(res);
  }
  else if (!tables->view)
  {
    TABLE *show_table= tables->table;
    KEY *key_info=show_table->key_info;
    show_table->file->info(HA_STATUS_VARIABLE |
                           HA_STATUS_NO_LOCK |
                           HA_STATUS_TIME);
    for (uint i=0 ; i < show_table->s->keys ; i++,key_info++)
    {
      KEY_PART_INFO *key_part= key_info->key_part;
      const char *str;
      for (uint j=0 ; j < key_info->key_parts ; j++,key_part++)
      {
        restore_record(table, s->default_values);
        table->field[1]->store(base_name, strlen(base_name), cs);
        table->field[2]->store(file_name, strlen(file_name), cs);
        table->field[3]->store((longlong) ((key_info->flags &
                                            HA_NOSAME) ? 0 : 1), TRUE);
        table->field[4]->store(base_name, strlen(base_name), cs);
        table->field[5]->store(key_info->name, strlen(key_info->name), cs);
        table->field[6]->store((longlong) (j+1), TRUE);
        str=(key_part->field ? key_part->field->field_name :
             "?unknown field?");
        table->field[7]->store(str, strlen(str), cs);
        if (show_table->file->index_flags(i, j, 0) & HA_READ_ORDER)
        {
          table->field[8]->store(((key_part->key_part_flag &
                                   HA_REVERSE_SORT) ?
                                  "D" : "A"), 1, cs);
          table->field[8]->set_notnull();
        }
        KEY *key=show_table->key_info+i;
        if (key->rec_per_key[j])
        {
          ha_rows records=(show_table->file->stats.records /
                           key->rec_per_key[j]);
          table->field[9]->store((longlong) records, TRUE);
          table->field[9]->set_notnull();
        }
        if (!(key_info->flags & HA_FULLTEXT) &&
            (key_part->field &&
             key_part->length !=
             show_table->field[key_part->fieldnr-1]->key_length()))
        {
          table->field[10]->store((longlong) key_part->length /
                                  key_part->field->charset()->mbmaxlen, TRUE);
          table->field[10]->set_notnull();
        }
        uint flags= key_part->field ? key_part->field->flags : 0;
        const char *pos=(char*) ((flags & NOT_NULL_FLAG) ? "" : "YES");
        table->field[12]->store(pos, strlen(pos), cs);
        pos= show_table->file->index_type(i);
        table->field[13]->store(pos, strlen(pos), cs);
        if (!show_table->s->keys_in_use.is_set(i))
          table->field[14]->store(STRING_WITH_LEN("disabled"), cs);
        else
          table->field[14]->store("", 0, cs);
        table->field[14]->set_notnull();
        if (schema_table_store_record(thd, table))
          DBUG_RETURN(1);
      }
    }
  }
  DBUG_RETURN(res);
}

static int get_schema_views_record(THD *thd, struct st_table_list *tables,
                                   TABLE *table, bool res,
                                   const char *base_name,
                                   const char *file_name)
{
  CHARSET_INFO *cs= system_charset_info;
  DBUG_ENTER("get_schema_views_record");
  char definer[USER_HOST_BUFF_SIZE];
  uint definer_len;

  if (tables->view)
  {
    Security_context *sctx= thd->security_ctx;
    ulong grant= SHOW_VIEW_ACL;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
    char *save_table_name= tables->table_name;
    if (!my_strcasecmp(system_charset_info, tables->definer.user.str,
                       sctx->priv_user) &&
        !my_strcasecmp(system_charset_info, tables->definer.host.str,
                       sctx->priv_host))
      grant= SHOW_VIEW_ACL;
    else
    {
      tables->table_name= tables->view_name.str;
      if (check_access(thd, SHOW_VIEW_ACL , base_name,
                       &tables->grant.privilege, 0, 1,
                       test(tables->schema_table)))
        grant= get_table_grant(thd, tables);
      else
        grant= tables->grant.privilege;
    }
    tables->table_name= save_table_name;
#endif

    restore_record(table, s->default_values);
    table->field[1]->store(tables->view_db.str, tables->view_db.length, cs);
    table->field[2]->store(tables->view_name.str, tables->view_name.length, cs);
    if (grant & SHOW_VIEW_ACL)
      table->field[3]->store(tables->query.str, tables->query.length, cs);

    if (tables->with_check != VIEW_CHECK_NONE)
    {
      if (tables->with_check == VIEW_CHECK_LOCAL)
        table->field[4]->store(STRING_WITH_LEN("LOCAL"), cs);
      else
        table->field[4]->store(STRING_WITH_LEN("CASCADED"), cs);
    }
    else
      table->field[4]->store(STRING_WITH_LEN("NONE"), cs);

    if (tables->updatable_view)
      table->field[5]->store(STRING_WITH_LEN("YES"), cs);
    else
      table->field[5]->store(STRING_WITH_LEN("NO"), cs);
    definer_len= (strxmov(definer, tables->definer.user.str, "@",
                          tables->definer.host.str, NullS) - definer);
    table->field[6]->store(definer, definer_len, cs);
    if (tables->view_suid)
      table->field[7]->store(STRING_WITH_LEN("DEFINER"), cs);
    else
      table->field[7]->store(STRING_WITH_LEN("INVOKER"), cs);
    if (schema_table_store_record(thd, table))
      DBUG_RETURN(1);
    if (res)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
  }
  if (res)
    thd->clear_error();
  DBUG_RETURN(0);
}

bool store_constraints(THD *thd, TABLE *table, const char *db,
                       const char *tname, const char *key_name,
                       uint key_len, const char *con_type, uint con_len)
{
  CHARSET_INFO *cs= system_charset_info;
  restore_record(table, s->default_values);
  table->field[1]->store(db, strlen(db), cs);
  table->field[2]->store(key_name, key_len, cs);
  table->field[3]->store(db, strlen(db), cs);
  table->field[4]->store(tname, strlen(tname), cs);
  table->field[5]->store(con_type, con_len, cs);
  return schema_table_store_record(thd, table);
}

static int get_schema_constraints_record(THD *thd, struct st_table_list *tables,
                                         TABLE *table, bool res,
                                         const char *base_name,
                                         const char *file_name)
{
  DBUG_ENTER("get_schema_constraints_record");
  if (res)
  {
    if (!tables->view)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
    thd->clear_error();
    DBUG_RETURN(0);
  }
  else if (!tables->view)
  {
    List<FOREIGN_KEY_INFO> f_key_list;
    TABLE *show_table= tables->table;
    KEY *key_info=show_table->key_info;
    uint primary_key= show_table->s->primary_key;
    show_table->file->info(HA_STATUS_VARIABLE |
                           HA_STATUS_NO_LOCK |
                           HA_STATUS_TIME);
    for (uint i=0 ; i < show_table->s->keys ; i++, key_info++)
    {
      if (i != primary_key && !(key_info->flags & HA_NOSAME))
        continue;

      if (i == primary_key && !strcmp(key_info->name, primary_key_name))
      {
        if (store_constraints(thd, table, base_name, file_name, key_info->name,
                              strlen(key_info->name),
                              STRING_WITH_LEN("PRIMARY KEY")))
          DBUG_RETURN(1);
      }
      else if (key_info->flags & HA_NOSAME)
      {
        if (store_constraints(thd, table, base_name, file_name, key_info->name,
                              strlen(key_info->name),
                              STRING_WITH_LEN("UNIQUE")))
          DBUG_RETURN(1);
      }
    }

    show_table->file->get_foreign_key_list(thd, &f_key_list);
    FOREIGN_KEY_INFO *f_key_info;
    List_iterator_fast<FOREIGN_KEY_INFO> it(f_key_list);
    while ((f_key_info=it++))
    {
      if (store_constraints(thd, table, base_name, file_name,
                            f_key_info->forein_id->str,
                            strlen(f_key_info->forein_id->str),
                            "FOREIGN KEY", 11))
        DBUG_RETURN(1);
    }
  }
  DBUG_RETURN(res);
}

static bool store_trigger(THD *thd, TABLE *table, const char *db,
                          const char *tname, LEX_STRING *trigger_name,
                          enum trg_event_type event,
                          enum trg_action_time_type timing,
                          LEX_STRING *trigger_stmt,
                          ulong sql_mode,
                          LEX_STRING *definer_buffer)
{
  CHARSET_INFO *cs= system_charset_info;
  byte *sql_mode_str;
  ulong sql_mode_len;

  restore_record(table, s->default_values);
  table->field[1]->store(db, strlen(db), cs);
  table->field[2]->store(trigger_name->str, trigger_name->length, cs);
  table->field[3]->store(trg_event_type_names[event].str,
                         trg_event_type_names[event].length, cs);
  table->field[5]->store(db, strlen(db), cs);
  table->field[6]->store(tname, strlen(tname), cs);
  table->field[9]->store(trigger_stmt->str, trigger_stmt->length, cs);
  table->field[10]->store(STRING_WITH_LEN("ROW"), cs);
  table->field[11]->store(trg_action_time_type_names[timing].str,
                          trg_action_time_type_names[timing].length, cs);
  table->field[14]->store(STRING_WITH_LEN("OLD"), cs);
  table->field[15]->store(STRING_WITH_LEN("NEW"), cs);

  sql_mode_str=
    sys_var_thd_sql_mode::symbolic_mode_representation(thd,
                                                       sql_mode,
                                                       &sql_mode_len);
  table->field[17]->store((const char*) sql_mode_str, sql_mode_len, cs);
  table->field[18]->store((const char*) definer_buffer->str,
                          definer_buffer->length, cs);
  return schema_table_store_record(thd, table);
}

static int get_schema_triggers_record(THD *thd, struct st_table_list *tables,
                                      TABLE *table, bool res,
                                      const char *base_name,
                                      const char *file_name)
{
  DBUG_ENTER("get_schema_triggers_record");
  /*
    res can be non-zero when the processed table is a view, or when an
    error happened while opening the processed table.
  */
  if (res)
  {
    if (!tables->view)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
    thd->clear_error();
    DBUG_RETURN(0);
  }
  if (!tables->view && tables->table->triggers)
  {
    Table_triggers_list *triggers= tables->table->triggers;
    int event, timing;
    for (event= 0; event < (int)TRG_EVENT_MAX; event++)
    {
      for (timing= 0; timing < (int)TRG_ACTION_MAX; timing++)
      {
        LEX_STRING trigger_name;
        LEX_STRING trigger_stmt;
        ulong sql_mode;
        char definer_holder[USER_HOST_BUFF_SIZE];
        LEX_STRING definer_buffer;
        definer_buffer.str= definer_holder;
        if (triggers->get_trigger_info(thd, (enum trg_event_type) event,
                                       (enum trg_action_time_type)timing,
                                       &trigger_name, &trigger_stmt,
                                       &sql_mode,
                                       &definer_buffer))
          continue;

        if (store_trigger(thd, table, base_name, file_name, &trigger_name,
                          (enum trg_event_type) event,
                          (enum trg_action_time_type) timing, &trigger_stmt,
                          sql_mode,
                          &definer_buffer))
          DBUG_RETURN(1);
      }
    }
  }
  DBUG_RETURN(0);
}

void store_key_column_usage(TABLE *table, const char *db, const char *tname,
                            const char *key_name, uint key_len,
                            const char *con_type, uint con_len, longlong idx)
{
  CHARSET_INFO *cs= system_charset_info;
  table->field[1]->store(db, strlen(db), cs);
  table->field[2]->store(key_name, key_len, cs);
  table->field[4]->store(db, strlen(db), cs);
  table->field[5]->store(tname, strlen(tname), cs);
  table->field[6]->store(con_type, con_len, cs);
  table->field[7]->store((longlong) idx, TRUE);
}

static int get_schema_key_column_usage_record(THD *thd,
                                              struct st_table_list *tables,
                                              TABLE *table, bool res,
                                              const char *base_name,
                                              const char *file_name)
{
  DBUG_ENTER("get_schema_key_column_usage_record");
  if (res)
  {
    if (!tables->view)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
    thd->clear_error();
    DBUG_RETURN(0);
  }
  else if (!tables->view)
  {
    List<FOREIGN_KEY_INFO> f_key_list;
    TABLE *show_table= tables->table;
    KEY *key_info= show_table->key_info;
    uint primary_key= show_table->s->primary_key;
    show_table->file->info(HA_STATUS_VARIABLE |
                           HA_STATUS_NO_LOCK |
                           HA_STATUS_TIME);
    for (uint i= 0; i < show_table->s->keys; i++, key_info++)
    {
      if (i != primary_key && !(key_info->flags & HA_NOSAME))
        continue;
      uint f_idx= 0;
      KEY_PART_INFO *key_part= key_info->key_part;
      for (uint j= 0; j < key_info->key_parts; j++, key_part++)
      {
        if (key_part->field)
        {
          f_idx++;
          restore_record(table, s->default_values);
          store_key_column_usage(table, base_name, file_name,
                                 key_info->name,
                                 strlen(key_info->name),
                                 key_part->field->field_name,
                                 strlen(key_part->field->field_name),
                                 (longlong) f_idx);
          if (schema_table_store_record(thd, table))
            DBUG_RETURN(1);
        }
      }
    }

    show_table->file->get_foreign_key_list(thd, &f_key_list);
    FOREIGN_KEY_INFO *f_key_info;
    List_iterator_fast<FOREIGN_KEY_INFO> it(f_key_list);
    while ((f_key_info= it++))
    {
      LEX_STRING *f_info;
      LEX_STRING *r_info;
      List_iterator_fast<LEX_STRING> it(f_key_info->foreign_fields),
        it1(f_key_info->referenced_fields);
      uint f_idx= 0;
      while ((f_info= it++))
      {
        r_info= it1++;
        f_idx++;
        restore_record(table, s->default_values);
        store_key_column_usage(table, base_name, file_name,
                               f_key_info->forein_id->str,
                               f_key_info->forein_id->length,
                               f_info->str, f_info->length,
                               (longlong) f_idx);
        table->field[8]->store((longlong) f_idx, TRUE);
        table->field[8]->set_notnull();
        table->field[9]->store(f_key_info->referenced_db->str,
                               f_key_info->referenced_db->length,
                               system_charset_info);
        table->field[9]->set_notnull();
        table->field[10]->store(f_key_info->referenced_table->str,
                                f_key_info->referenced_table->length,
                                system_charset_info);
        table->field[10]->set_notnull();
        table->field[11]->store(r_info->str, r_info->length,
                                system_charset_info);
        table->field[11]->set_notnull();
        if (schema_table_store_record(thd, table))
          DBUG_RETURN(1);
      }
    }
  }
  DBUG_RETURN(res);
}

static void collect_partition_expr(List<char> &field_list, String *str)
{
  List_iterator<char> part_it(field_list);
  ulong no_fields= field_list.elements;
  const char *field_str;
  str->length(0);
  while ((field_str= part_it++))
  {
    str->append(field_str);
    if (--no_fields != 0)
      str->append(",");
  }
  return;
}

static void store_schema_partitions_record(THD *thd, TABLE *table,
                                           partition_element *part_elem,
                                           handler *file, uint part_id)
{
  CHARSET_INFO *cs= system_charset_info;
  PARTITION_INFO stat_info;
  TIME time;
  file->get_dynamic_partition_info(&stat_info, part_id);
  table->field[12]->store((longlong) stat_info.records, TRUE);
  table->field[13]->store((longlong) stat_info.mean_rec_length, TRUE);
  table->field[14]->store((longlong) stat_info.data_file_length, TRUE);
  if (stat_info.max_data_file_length)
  {
    table->field[15]->store((longlong) stat_info.max_data_file_length, TRUE);
    table->field[15]->set_notnull();
  }
  table->field[16]->store((longlong) stat_info.index_file_length, TRUE);
  table->field[17]->store((longlong) stat_info.delete_length, TRUE);
  if (stat_info.create_time)
  {
    thd->variables.time_zone->gmt_sec_to_TIME(&time,
                                              stat_info.create_time);
    table->field[18]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
    table->field[18]->set_notnull();
  }
  if (stat_info.update_time)
  {
    thd->variables.time_zone->gmt_sec_to_TIME(&time,
                                              stat_info.update_time);
    table->field[19]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
    table->field[19]->set_notnull();
  }
  if (stat_info.check_time)
  {
    thd->variables.time_zone->gmt_sec_to_TIME(&time, stat_info.check_time);
    table->field[20]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);
    table->field[20]->set_notnull();
  }
  if (file->ha_table_flags() & (ulong) HA_HAS_CHECKSUM)
  {
    table->field[21]->store((longlong) stat_info.check_sum, TRUE);
    table->field[21]->set_notnull();
  }
  if (part_elem)
  {
    if (part_elem->part_comment)
      table->field[22]->store(part_elem->part_comment,
                              strlen(part_elem->part_comment), cs);
    else
      table->field[22]->store(STRING_WITH_LEN("default"), cs);
    if (part_elem->nodegroup_id != UNDEF_NODEGROUP)
      table->field[23]->store((longlong) part_elem->nodegroup_id, TRUE);
    else
      table->field[23]->store(STRING_WITH_LEN("default"), cs);
    if (part_elem->tablespace_name)
      table->field[24]->store(part_elem->tablespace_name,
                              strlen(part_elem->tablespace_name), cs);
    else
      table->field[24]->store(STRING_WITH_LEN("default"), cs);
  }
  return;
}

static int get_schema_partitions_record(THD *thd, struct st_table_list *tables,
                                        TABLE *table, bool res,
                                        const char *base_name,
                                        const char *file_name)
{
  CHARSET_INFO *cs= system_charset_info;
  char buff[61];
  String tmp_res(buff, sizeof(buff), cs);
  String tmp_str;
  TIME time;
  TABLE *show_table= tables->table;
  handler *file;
#ifdef WITH_PARTITION_STORAGE_ENGINE
  partition_info *part_info;
#endif
  DBUG_ENTER("get_schema_partitions_record");

  if (res)
  {
    if (!tables->view)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
    thd->clear_error();
    DBUG_RETURN(0);
  }
  file= show_table->file;
#ifdef WITH_PARTITION_STORAGE_ENGINE
  part_info= show_table->part_info;
  if (part_info)
  {
    partition_element *part_elem;
    List_iterator<partition_element> part_it(part_info->partitions);
    uint part_pos= 0, part_id= 0;
    uint no_parts= part_info->no_parts;
    handler *part_file;

    restore_record(table, s->default_values);
    table->field[1]->store(base_name, strlen(base_name), cs);
    table->field[2]->store(file_name, strlen(file_name), cs);

    /* Partition method */
    switch (part_info->part_type) {
    case RANGE_PARTITION:
      table->field[7]->store(partition_keywords[PKW_RANGE].str,
                             partition_keywords[PKW_RANGE].length, cs);
      break;
    case LIST_PARTITION:
      table->field[7]->store(partition_keywords[PKW_LIST].str,
                             partition_keywords[PKW_LIST].length, cs);
      break;
    case HASH_PARTITION:
      tmp_res.length(0);
      if (part_info->linear_hash_ind)
        tmp_res.append(partition_keywords[PKW_LINEAR].str,
                       partition_keywords[PKW_LINEAR].length);
      if (part_info->list_of_part_fields)
        tmp_res.append(partition_keywords[PKW_KEY].str,
                       partition_keywords[PKW_KEY].length);
      else
        tmp_res.append(partition_keywords[PKW_HASH].str,
                       partition_keywords[PKW_HASH].length);
      table->field[7]->store(tmp_res.ptr(), tmp_res.length(), cs);
      break;
    default:
      DBUG_ASSERT(0);
      current_thd->fatal_error();
      DBUG_RETURN(1);
    }
    table->field[7]->set_notnull();

    /* Partition expression */
    if (part_info->part_expr)
    {
      table->field[9]->store(part_info->part_func_string,
                             part_info->part_func_len, cs);
    }
    else if (part_info->list_of_part_fields)
    {
      collect_partition_expr(part_info->part_field_list, &tmp_str);
      table->field[9]->store(tmp_str.ptr(), tmp_str.length(), cs);
    }
    table->field[9]->set_notnull();

    if (part_info->is_sub_partitioned())
    {
      /* Subpartition method */
      tmp_res.length(0);
      if (part_info->linear_hash_ind)
        tmp_res.append(partition_keywords[PKW_LINEAR].str,
                       partition_keywords[PKW_LINEAR].length);
      if (part_info->list_of_subpart_fields)
        tmp_res.append(partition_keywords[PKW_KEY].str,
                       partition_keywords[PKW_KEY].length);
      else
        tmp_res.append(partition_keywords[PKW_HASH].str,
                       partition_keywords[PKW_HASH].length);
      table->field[8]->store(tmp_res.ptr(), tmp_res.length(), cs);
      table->field[8]->set_notnull();

      /* Subpartition expression */
      if (part_info->subpart_expr)
      {
        table->field[10]->store(part_info->subpart_func_string,
                                part_info->subpart_func_len, cs);
      }
      else if (part_info->list_of_subpart_fields)
      {
        collect_partition_expr(part_info->subpart_field_list, &tmp_str);
        table->field[10]->store(tmp_str.ptr(), tmp_str.length(), cs);
      }
      table->field[10]->set_notnull();
    }

    while ((part_elem= part_it++))
    {
      table->field[3]->store(part_elem->partition_name,
                             strlen(part_elem->partition_name), cs);
      table->field[3]->set_notnull();
      /* PARTITION_ORDINAL_POSITION */
      table->field[5]->store((longlong) ++part_pos, TRUE);
      table->field[5]->set_notnull();

      /* Partition description */
      if (part_info->part_type == RANGE_PARTITION)
      {
        if (part_elem->range_value != LONGLONG_MAX)
          table->field[11]->store((longlong) part_elem->range_value, FALSE);
        else
          table->field[11]->store(partition_keywords[PKW_MAXVALUE].str,
                                  partition_keywords[PKW_MAXVALUE].length, cs);
        table->field[11]->set_notnull();
      }
      else if (part_info->part_type == LIST_PARTITION)
      {
        List_iterator<part_elem_value> list_val_it(part_elem->list_val_list);
        part_elem_value *list_value;
        uint no_items= part_elem->list_val_list.elements;
        tmp_str.length(0);
        tmp_res.length(0);
        if (part_elem->has_null_value)
        {
          tmp_str.append("NULL");
          if (no_items > 0)
            tmp_str.append(",");
        }
        while ((list_value= list_val_it++))
        {
          if (!list_value->unsigned_flag)
            tmp_res.set(list_value->value, cs);
          else
            tmp_res.set((ulonglong) list_value->value, cs);
          tmp_str.append(tmp_res);
          if (--no_items != 0)
            tmp_str.append(",");
        }
        table->field[11]->store(tmp_str.ptr(), tmp_str.length(), cs);
        table->field[11]->set_notnull();
      }

      if (part_elem->subpartitions.elements)
      {
        List_iterator<partition_element> sub_it(part_elem->subpartitions);
        partition_element *subpart_elem;
        uint subpart_pos= 0;
        while ((subpart_elem= sub_it++))
        {
          table->field[4]->store(subpart_elem->partition_name,
                                 strlen(subpart_elem->partition_name), cs);
          table->field[4]->set_notnull();
          /* SUBPARTITION_ORDINAL_POSITION */
          table->field[6]->store((longlong) ++subpart_pos, TRUE);
          table->field[6]->set_notnull();
          store_schema_partitions_record(thd, table, subpart_elem,
                                         file, part_id);
          part_id++;
          if (schema_table_store_record(thd, table))
            DBUG_RETURN(1);
        }
      }
      else
      {
        store_schema_partitions_record(thd, table, part_elem,
                                       file, part_id);
        part_id++;
        if (schema_table_store_record(thd, table))
          DBUG_RETURN(1);
      }
    }
    DBUG_RETURN(0);
  }
  else
#endif
  {
    store_schema_partitions_record(thd, table, 0, file, 0);
    if (schema_table_store_record(thd, table))
      DBUG_RETURN(1);
  }
  DBUG_RETURN(0);
}

static interval_type get_real_interval_type(interval_type i_type)
{
  switch (i_type) {
  case INTERVAL_YEAR:
    return INTERVAL_YEAR;

  case INTERVAL_QUARTER:
  case INTERVAL_YEAR_MONTH:
  case INTERVAL_MONTH:
    return INTERVAL_MONTH;

  case INTERVAL_WEEK:
  case INTERVAL_DAY:
    return INTERVAL_DAY;

  case INTERVAL_DAY_HOUR:
  case INTERVAL_HOUR:
    return INTERVAL_HOUR;

  case INTERVAL_DAY_MINUTE:
  case INTERVAL_HOUR_MINUTE:
  case INTERVAL_MINUTE:
    return INTERVAL_MINUTE;

  case INTERVAL_DAY_SECOND:
  case INTERVAL_HOUR_SECOND:
  case INTERVAL_MINUTE_SECOND:
  case INTERVAL_SECOND:
    return INTERVAL_SECOND;

  case INTERVAL_DAY_MICROSECOND:
  case INTERVAL_HOUR_MICROSECOND:
  case INTERVAL_MINUTE_MICROSECOND:
  case INTERVAL_SECOND_MICROSECOND:
  case INTERVAL_MICROSECOND:
    return INTERVAL_MICROSECOND;

  case INTERVAL_LAST:
    DBUG_ASSERT(0);
  }
  DBUG_ASSERT(0);
  return INTERVAL_SECOND;
}

extern LEX_STRING interval_type_to_name[];

/*
  Loads an event from mysql.event and copies its data to a row of
  I_S.EVENTS

  Synopsis
    copy_event_to_schema_table()
      thd          Thread
      sch_table    The schema table (information_schema.event)
      event_table  The event table to use for loading (mysql.event).

  Returns
    0  OK
    1  Error
*/
static int
copy_event_to_schema_table(THD *thd, TABLE *sch_table, TABLE *event_table)
{
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  CHARSET_INFO *scs= system_charset_info;
  TIME time;
  Event_timed et;
  DBUG_ENTER("fill_events_copy_to_schema_tab");

  restore_record(sch_table, s->default_values);

  if (et.load_from_row(thd->mem_root, event_table))
  {
    my_error(ER_CANNOT_LOAD_FROM_TABLE, MYF(0));
    DBUG_RETURN(1);
  }

  if (!(!wild || !wild[0] || !wild_compare(et.name.str, wild, 0)))
    DBUG_RETURN(0);

  /*
    Skip events in schemas one does not have access to. The check is
    optimized. It's guaranteed in case of SHOW EVENTS that the user
    has access.
  */
  if (thd->lex->sql_command != SQLCOM_SHOW_EVENTS &&
      check_access(thd, EVENT_ACL, et.dbname.str, 0, 0, 1,
                   is_schema_db(et.dbname.str)))
    DBUG_RETURN(0);

  /* ->field[0] is EVENT_CATALOG and is by default NULL */
  sch_table->field[ISE_EVENT_SCHEMA]->
    store(et.dbname.str, et.dbname.length, scs);
  sch_table->field[ISE_EVENT_NAME]->
    store(et.name.str, et.name.length, scs);
  sch_table->field[ISE_DEFINER]->
    store(et.definer.str, et.definer.length, scs);
  sch_table->field[ISE_EVENT_BODY]->
    store(STRING_WITH_LEN("SQL"), scs);
  sch_table->field[ISE_EVENT_DEFINITION]->
    store(et.body.str, et.body.length, scs);

  /* SQL_MODE */
  {
    byte *sql_mode_str;
    ulong sql_mode_len= 0;
    sql_mode_str=
      sys_var_thd_sql_mode::symbolic_mode_representation(thd, et.sql_mode,
                                                         &sql_mode_len);
    sch_table->field[ISE_SQL_MODE]->
      store((const char*) sql_mode_str, sql_mode_len, scs);
  }

  if (et.expression)
  {
    String show_str;
    /* type */
    sch_table->field[ISE_EVENT_TYPE]->store(STRING_WITH_LEN("RECURRING"), scs);

    if (Events::reconstruct_interval_expression(&show_str, et.interval,
                                                et.expression))
      DBUG_RETURN(1);

    sch_table->field[ISE_INTERVAL_VALUE]->set_notnull();
    sch_table->field[ISE_INTERVAL_VALUE]->
      store(show_str.ptr(), show_str.length(), scs);

    LEX_STRING *ival= &interval_type_to_name[et.interval];
    sch_table->field[ISE_INTERVAL_FIELD]->set_notnull();
    sch_table->field[ISE_INTERVAL_FIELD]->store(ival->str, ival->length, scs);

    /* starts & ends. STARTS is always set - see sql_yacc.yy */
    sch_table->field[ISE_STARTS]->set_notnull();
    sch_table->field[ISE_STARTS]->
      store_time(&et.starts, MYSQL_TIMESTAMP_DATETIME);

    if (!et.ends_null)
    {
      sch_table->field[ISE_ENDS]->set_notnull();
      sch_table->field[ISE_ENDS]->
        store_time(&et.ends, MYSQL_TIMESTAMP_DATETIME);
    }
  }
  else
  {
    /* type */
    sch_table->field[ISE_EVENT_TYPE]->store(STRING_WITH_LEN("ONE TIME"), scs);

    sch_table->field[ISE_EXECUTE_AT]->set_notnull();
    sch_table->field[ISE_EXECUTE_AT]->
      store_time(&et.execute_at, MYSQL_TIMESTAMP_DATETIME);
  }

  /* status */
  if (et.status == Event_timed::ENABLED)
    sch_table->field[ISE_STATUS]->store(STRING_WITH_LEN("ENABLED"), scs);
  else
    sch_table->field[ISE_STATUS]->store(STRING_WITH_LEN("DISABLED"), scs);

  /* on_completion */
  if (et.on_completion == Event_timed::ON_COMPLETION_DROP)
    sch_table->field[ISE_ON_COMPLETION]->
      store(STRING_WITH_LEN("NOT PRESERVE"), scs);
  else
    sch_table->field[ISE_ON_COMPLETION]->
      store(STRING_WITH_LEN("PRESERVE"), scs);

  int not_used= 0;
  number_to_datetime(et.created, &time, 0, &not_used);
  DBUG_ASSERT(not_used == 0);
  sch_table->field[ISE_CREATED]->store_time(&time, MYSQL_TIMESTAMP_DATETIME);

  number_to_datetime(et.modified, &time, 0, &not_used);
  DBUG_ASSERT(not_used == 0);
  sch_table->field[ISE_LAST_ALTERED]->
    store_time(&time, MYSQL_TIMESTAMP_DATETIME);

  if (et.last_executed.year)
  {
    sch_table->field[ISE_LAST_EXECUTED]->set_notnull();
    sch_table->field[ISE_LAST_EXECUTED]->
      store_time(&et.last_executed, MYSQL_TIMESTAMP_DATETIME);
  }

  sch_table->field[ISE_EVENT_COMMENT]->
    store(et.comment.str, et.comment.length, scs);

  if (schema_table_store_record(thd, sch_table))
    DBUG_RETURN(1);

  DBUG_RETURN(0);
}

/*
  Performs an index scan of event_table (mysql.event) and fills schema_table.

  Synopsis
    events_table_index_read_for_db()
      thd           Thread
      schema_table  The I_S.EVENTS table
      event_table   The event table to use for loading (mysql.event)

  Returns
    0  OK
    1  Error
*/
static
int events_table_index_read_for_db(THD *thd, TABLE *schema_table,
                                   TABLE *event_table)
{
  int ret= 0;
  CHARSET_INFO *scs= system_charset_info;
  KEY *key_info;
  uint key_len;
  byte *key_buf= NULL;
  LINT_INIT(key_buf);
  DBUG_ENTER("schema_events_do_index_scan");

  DBUG_PRINT("info", ("Using prefix scanning on PK"));
  event_table->file->ha_index_init(0, 1);
  event_table->field[Events::FIELD_DB]->
    store(thd->lex->select_lex.db, strlen(thd->lex->select_lex.db), scs);
  key_info= event_table->key_info;
  key_len= key_info->key_part[0].store_length;
  if (!(key_buf= (byte *) alloc_root(thd->mem_root, key_len)))
  {
    ret= 1;
    /* don't send error, it would be done by sql_alloc_error_handler() */
  }
  else
  {
    key_copy(key_buf, event_table->record[0], key_info, key_len);
    if (!(ret= event_table->file->index_read(event_table->record[0], key_buf,
                                             key_len, HA_READ_PREFIX)))
    {
      DBUG_PRINT("info", ("Found rows. Let's retrieve them. ret=%d", ret));
      do
      {
        ret= copy_event_to_schema_table(thd, schema_table, event_table);
        if (ret == 0)
          ret= event_table->file->index_next_same(event_table->record[0],
                                                  key_buf, key_len);
      } while (ret == 0);
    }
    DBUG_PRINT("info", ("Scan finished. ret=%d", ret));
  }
  event_table->file->ha_index_end();
  /* ret is guaranteed to be != 0 */
  if (ret == HA_ERR_END_OF_FILE || ret == HA_ERR_KEY_NOT_FOUND)
    DBUG_RETURN(0);
  DBUG_RETURN(1);
}

/*
  Performs a table scan of event_table (mysql.event) and fills schema_table.

  Synopsis
    events_table_scan_all()
      thd           Thread
      schema_table  The I_S.EVENTS in-memory table
      event_table   The event table to use for loading.

  Returns
    0  OK
    1  Error
*/
static
int events_table_scan_all(THD *thd, TABLE *schema_table,
                          TABLE *event_table)
{
  int ret;
  READ_RECORD read_record_info;
  DBUG_ENTER("schema_events_do_table_scan");

  init_read_record(&read_record_info, thd, event_table, NULL, 1, 0);
  /*
    rr_sequential, in read_record(), returns 137==HA_ERR_END_OF_FILE,
    but rr_handle_error returns -1 for that reason. Thus, read_record()
    returns -1 eventually.
  */
  do
  {
    ret= read_record_info.read_record(&read_record_info);
    if (ret == 0)
      ret= copy_event_to_schema_table(thd, schema_table, event_table);
  }
  while (ret == 0);

  DBUG_PRINT("info", ("Scan finished. ret=%d", ret));
  end_read_record(&read_record_info);
  /* ret is guaranteed to be != 0 */
  DBUG_RETURN(ret == -1 ? 0 : 1);
}

/*
  Fills I_S.EVENTS with data loaded from mysql.event. Also used by
  SHOW EVENTS

  Synopsis
    fill_schema_events()
      thd     Thread
      tables  The schema table
      cond    Unused

  Returns
    0  OK
    1  Error
*/
int fill_schema_events(THD *thd, TABLE_LIST *tables, COND * /* cond */)
{
  TABLE *schema_table= tables->table;
  TABLE *event_table= NULL;
  Open_tables_state backup;
  int ret= 0;
  DBUG_ENTER("fill_schema_events");

  /*
    If it's SHOW EVENTS then thd->lex->select_lex.db is guaranteed not to
    be NULL. Let's do an assert anyway.
  */
  if (thd->lex->sql_command == SQLCOM_SHOW_EVENTS)
  {
    DBUG_ASSERT(thd->lex->select_lex.db);
    if (check_access(thd, EVENT_ACL, thd->lex->select_lex.db, 0, 0, 0,
                     is_schema_db(thd->lex->select_lex.db)))
      DBUG_RETURN(1);
  }
  DBUG_PRINT("info", ("db=%s", thd->lex->select_lex.db ?
                      thd->lex->select_lex.db : "(null)"));

  thd->reset_n_backup_open_tables_state(&backup);
  if (Events::open_event_table(thd, TL_READ, &event_table))
  {
    sql_print_error("Table mysql.event is damaged.");
    thd->restore_backup_open_tables_state(&backup);
    DBUG_RETURN(1);
  }

  /*
    1. SELECT I_S  => use table scan. I_S.EVENTS does not guarantee order,
                      thus we won't order it. OTOH, SHOW EVENTS will be
                      ordered.
    2. SHOW EVENTS => PRIMARY KEY with prefix scanning on (db)
       Reasoning: Events are per schema, therefore a scan over an index
                  will save us from doing a table scan and comparing
                  every single row's `db` with the schema which we show.
  */
  if (thd->lex->sql_command == SQLCOM_SHOW_EVENTS)
    ret= events_table_index_read_for_db(thd, schema_table, event_table);
  else
    ret= events_table_scan_all(thd, schema_table, event_table);

  close_thread_tables(thd);
  thd->restore_backup_open_tables_state(&backup);

  DBUG_PRINT("info", ("Return code=%d", ret));
  DBUG_RETURN(ret);
}

int fill_open_tables(THD *thd, TABLE_LIST *tables, COND *cond)
{
  DBUG_ENTER("fill_open_tables");
  const char *wild= thd->lex->wild ? thd->lex->wild->ptr() : NullS;
  TABLE *table= tables->table;
  CHARSET_INFO *cs= system_charset_info;
  OPEN_TABLE_LIST *open_list;
  if (!(open_list= list_open_tables(thd, thd->lex->select_lex.db, wild))
      && thd->is_fatal_error)
    DBUG_RETURN(1);

  for (; open_list; open_list= open_list->next)
  {
    restore_record(table, s->default_values);
    table->field[0]->store(open_list->db, strlen(open_list->db), cs);
    table->field[1]->store(open_list->table, strlen(open_list->table), cs);
    table->field[2]->store((longlong) open_list->in_use, TRUE);
    table->field[3]->store((longlong) open_list->locked, TRUE);
    if (schema_table_store_record(thd, table))
      DBUG_RETURN(1);
  }
  DBUG_RETURN(0);
}

int fill_variables(THD *thd, TABLE_LIST *tables, COND *cond)
{
  DBUG_ENTER("fill_variables");
  int res= 0;
  LEX *lex= thd->lex;
  const char *wild= lex->wild ? lex->wild->ptr() : NullS;
  pthread_mutex_lock(&LOCK_global_system_variables);
  res= show_status_array(thd, wild, init_vars,
                         lex->option_type, 0, "", tables->table);
  pthread_mutex_unlock(&LOCK_global_system_variables);
  DBUG_RETURN(res);
}

int fill_status(THD *thd, TABLE_LIST *tables, COND *cond)
{
  DBUG_ENTER("fill_status");
  LEX *lex= thd->lex;
  const char *wild= lex->wild ? lex->wild->ptr() : NullS;
  int res= 0;
  STATUS_VAR tmp;
  pthread_mutex_lock(&LOCK_status);
  if (lex->option_type == OPT_GLOBAL)
    calc_sum_of_all_status(&tmp);
  res= show_status_array(thd, wild,
                         (SHOW_VAR *) all_status_vars.buffer,
                         OPT_GLOBAL,
                         (lex->option_type == OPT_GLOBAL ?
                          &tmp : thd->initial_status_var), "", tables->table);
  pthread_mutex_unlock(&LOCK_status);
  DBUG_RETURN(res);
}

/*
  Fill and store records into the I_S.referential_constraints table

  SYNOPSIS
    get_referential_constraints_record()
      thd        thread handle
      tables     table list struct (processed table)
      table      I_S table
      res        1 means an error occurred while opening the processed table
                 0 means the processed table was opened without error
      base_name  db name
      file_name  table name

  RETURN
    0  ok
    #  error
*/
static int
get_referential_constraints_record(THD *thd, struct st_table_list *tables,
                                   TABLE *table, bool res,
                                   const char *base_name,
                                   const char *file_name)
{
  CHARSET_INFO *cs= system_charset_info;
  DBUG_ENTER("get_referential_constraints_record");

  if (res)
  {
    if (!tables->view)
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                   thd->net.last_errno, thd->net.last_error);
    thd->clear_error();
    DBUG_RETURN(0);
  }
  if (!tables->view)
  {
    List<FOREIGN_KEY_INFO> f_key_list;
    TABLE *show_table= tables->table;
    show_table->file->info(HA_STATUS_VARIABLE |
                           HA_STATUS_NO_LOCK |
                           HA_STATUS_TIME);

    show_table->file->get_foreign_key_list(thd, &f_key_list);
    FOREIGN_KEY_INFO *f_key_info;
    List_iterator_fast<FOREIGN_KEY_INFO> it(f_key_list);
    while ((f_key_info= it++))
    {
      restore_record(table, s->default_values);
      table->field[1]->store(base_name, strlen(base_name), cs);
      table->field[9]->store(file_name, strlen(file_name), cs);
      table->field[2]->store(f_key_info->forein_id->str,
                             f_key_info->forein_id->length, cs);
      table->field[4]->store(f_key_info->referenced_db->str,
                             f_key_info->referenced_db->length, cs);
      table->field[5]->store(f_key_info->referenced_table->str,
                             f_key_info->referenced_table->length, cs);
      table->field[6]->store(STRING_WITH_LEN("NONE"), cs);
      table->field[7]->store(f_key_info->update_method->str,
                             f_key_info->update_method->length, cs);
      table->field[8]->store(f_key_info->delete_method->str,
                             f_key_info->delete_method->length, cs);
      if (schema_table_store_record(thd, table))
        DBUG_RETURN(1);
    }
  }
  DBUG_RETURN(0);
}

/*
  Find a schema_tables element by name

  SYNOPSIS
    find_schema_table()
      thd         thread handler
      table_name  table name

  RETURN
    0  table not found
    #  pointer to 'schema_tables' element
*/
ST_SCHEMA_TABLE *find_schema_table(THD *thd, const char *table_name)
{
  ST_SCHEMA_TABLE *schema_table= schema_tables;
  for (; schema_table->table_name; schema_table++)
  {
    if (!my_strcasecmp(system_charset_info,
                       schema_table->table_name,
                       table_name))
      return schema_table;
  }
  return 0;
}

ST_SCHEMA_TABLE *get_schema_table(enum enum_schema_tables schema_table_idx)
{
  return &schema_tables[schema_table_idx];
}

/*
  Create an information_schema table using schema_table data

  SYNOPSIS
    create_schema_table()
      thd           thread handler
      schema_table  pointer to 'schema_tables' element

  RETURN
    #  Pointer to created table
    0  Can't create table
*/
TABLE *create_schema_table(THD *thd, TABLE_LIST *table_list)
{
  int field_count= 0;
  Item *item;
  TABLE *table;
  List<Item> field_list;
  ST_SCHEMA_TABLE *schema_table= table_list->schema_table;
  ST_FIELD_INFO *fields_info= schema_table->fields_info;
  CHARSET_INFO *cs= system_charset_info;
  DBUG_ENTER("create_schema_table");

  for (; fields_info->field_name; fields_info++)
  {
    switch (fields_info->field_type) {
    case MYSQL_TYPE_LONG:
      if (!(item= new Item_int(fields_info->field_name,
                               fields_info->value,
                               fields_info->field_length)))
      {
        DBUG_RETURN(0);
      }
      break;
    case MYSQL_TYPE_TIMESTAMP:
      if (!(item= new Item_datetime(fields_info->field_name)))
      {
        DBUG_RETURN(0);
      }
      break;
    default:
      /* this should be changed when Item_empty_string is fixed (in 4.1) */
      if (!(item= new Item_empty_string("", 0, cs)))
      {
        DBUG_RETURN(0);
      }
      item->max_length= fields_info->field_length * cs->mbmaxlen;
      item->set_name(fields_info->field_name,
                     strlen(fields_info->field_name), cs);
      break;
    }
    field_list.push_back(item);
    item->maybe_null= fields_info->maybe_null;
    field_count++;
  }
  TMP_TABLE_PARAM *tmp_table_param=
    (TMP_TABLE_PARAM*) (thd->alloc(sizeof(TMP_TABLE_PARAM)));
  tmp_table_param->init();
  tmp_table_param->table_charset= cs;
  tmp_table_param->field_count= field_count;
  tmp_table_param->schema_table= 1;
  SELECT_LEX *select_lex= thd->lex->current_select;
  if (!(table= create_tmp_table(thd, tmp_table_param,
                                field_list, (ORDER*) 0, 0, 0,
                                (select_lex->options | thd->options |
                                 TMP_TABLE_ALL_COLUMNS),
                                HA_POS_ERROR, table_list->alias)))
    DBUG_RETURN(0);
  table_list->schema_table_param= tmp_table_param;
  DBUG_RETURN(table);
}

/*
  Make list of fields for SHOW.

  For old SHOW compatibility: used when the old SHOW command
  doesn't have generated column names.

  SYNOPSIS
    make_old_format()
    thd                  thread handler
    schema_table         pointer to 'schema_tables' element

  RETURN
    1                    error
    0                    success
*/
int make_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  ST_FIELD_INFO *field_info= schema_table->fields_info;
  Name_resolution_context *context= &thd->lex->select_lex.context;
  for (; field_info->field_name; field_info++)
  {
    if (field_info->old_name)
    {
      Item_field *field= new Item_field(context,
                                        NullS, NullS, field_info->field_name);
      if (field)
      {
        field->set_name(field_info->old_name,
                        strlen(field_info->old_name),
                        system_charset_info);
        if (add_item_to_list(thd, field))
          return 1;
      }
    }
  }
  return 0;
}

int make_schemata_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  char tmp[128];
  LEX *lex= thd->lex;
  SELECT_LEX *sel= lex->current_select;
  Name_resolution_context *context= &sel->context;

  if (!sel->item_list.elements)
  {
    ST_FIELD_INFO *field_info= &schema_table->fields_info[1];
    String buffer(tmp, sizeof(tmp), system_charset_info);
    Item_field *field= new Item_field(context,
                                      NullS, NullS, field_info->field_name);
    if (!field || add_item_to_list(thd, field))
      return 1;
    buffer.length(0);
    buffer.append(field_info->old_name);
    if (lex->wild && lex->wild->ptr())
    {
      buffer.append(STRING_WITH_LEN(" ("));
      buffer.append(lex->wild->ptr());
      buffer.append(')');
    }
    field->set_name(buffer.ptr(), buffer.length(), system_charset_info);
  }
  return 0;
}

int make_table_names_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  char tmp[128];
  String buffer(tmp, sizeof(tmp), thd->charset());
  LEX *lex= thd->lex;
  Name_resolution_context *context= &lex->select_lex.context;

  ST_FIELD_INFO *field_info= &schema_table->fields_info[2];
  buffer.length(0);
  buffer.append(field_info->old_name);
  buffer.append(lex->select_lex.db);
  if (lex->wild && lex->wild->ptr())
  {
    buffer.append(STRING_WITH_LEN(" ("));
    buffer.append(lex->wild->ptr());
    buffer.append(')');
  }
  Item_field *field= new Item_field(context,
                                    NullS, NullS, field_info->field_name);
  if (add_item_to_list(thd, field))
    return 1;
  field->set_name(buffer.ptr(), buffer.length(), system_charset_info);
  if (thd->lex->verbose)
  {
    field_info= &schema_table->fields_info[3];
    field= new Item_field(context, NullS, NullS, field_info->field_name);
    if (add_item_to_list(thd, field))
      return 1;
    field->set_name(field_info->old_name, strlen(field_info->old_name),
                    system_charset_info);
  }
  return 0;
}

int make_columns_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  int fields_arr[]= {3, 14, 13, 6, 15, 5, 16, 17, 18, -1};
  int *field_num= fields_arr;
  ST_FIELD_INFO *field_info;
  Name_resolution_context *context= &thd->lex->select_lex.context;

  for (; *field_num >= 0; field_num++)
  {
    field_info= &schema_table->fields_info[*field_num];
    if (!thd->lex->verbose && (*field_num == 13 ||
                               *field_num == 17 ||
                               *field_num == 18))
      continue;
    Item_field *field= new Item_field(context,
                                      NullS, NullS, field_info->field_name);
    if (field)
    {
      field->set_name(field_info->old_name,
                      strlen(field_info->old_name),
                      system_charset_info);
      if (add_item_to_list(thd, field))
        return 1;
    }
  }
  return 0;
}
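make_columns_old_format() and the other *_old_format helpers all walk a -1-terminated array of field indexes to pick, reorder, and conditionally skip columns for the legacy SHOW output. A self-contained sketch of that selection idiom (the column names and the "skip index 4 unless verbose" rule are invented for illustration, echoing the !thd->lex->verbose test above):

```cpp
#include <string>
#include <vector>

// All available columns, indexed as in an ST_FIELD_INFO array.
static const char *fields[]= {"CATALOG", "SCHEMA", "NAME", "TYPE", "COMMENT"};

// Select columns in display order; -1 terminates, like fields_arr[].
static std::vector<std::string> pick(const int *order, bool verbose)
{
  std::vector<std::string> out;
  for (; *order >= 0; order++)
  {
    if (!verbose && *order == 4)  // skip COMMENT unless verbose
      continue;
    out.push_back(fields[*order]);
  }
  return out;
}
```

Keeping the display order in a small index array lets each SHOW variant reuse one field table while presenting a different subset.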

int make_character_sets_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  int fields_arr[]= {0, 2, 1, 3, -1};
  int *field_num= fields_arr;
  ST_FIELD_INFO *field_info;
  Name_resolution_context *context= &thd->lex->select_lex.context;

  for (; *field_num >= 0; field_num++)
  {
    field_info= &schema_table->fields_info[*field_num];
    Item_field *field= new Item_field(context,
                                      NullS, NullS, field_info->field_name);
    if (field)
    {
      field->set_name(field_info->old_name,
                      strlen(field_info->old_name),
                      system_charset_info);
      if (add_item_to_list(thd, field))
        return 1;
    }
  }
  return 0;
}

int make_proc_old_format(THD *thd, ST_SCHEMA_TABLE *schema_table)
{
  int fields_arr[]= {2, 3, 4, 19, 16, 15, 14, 18, -1};
  int *field_num= fields_arr;
  ST_FIELD_INFO *field_info;
  Name_resolution_context *context= &thd->lex->select_lex.context;

  for (; *field_num >= 0; field_num++)
  {
    field_info= &schema_table->fields_info[*field_num];
    Item_field *field= new Item_field(context,
                                      NullS, NullS, field_info->field_name);
    if (field)
    {
      field->set_name(field_info->old_name,
                      strlen(field_info->old_name),
                      system_charset_info);
      if (add_item_to_list(thd, field))
        return 1;
    }
  }
  return 0;
}

/*
  Create information_schema table

  SYNOPSIS
    mysql_schema_table()
    thd                  thread handler
    lex                  pointer to LEX
    table_list           pointer to table_list

  RETURN
    0                    success
    1                    error
*/
int mysql_schema_table(THD *thd, LEX *lex, TABLE_LIST *table_list)
{
  TABLE *table;
  DBUG_ENTER("mysql_schema_table");
  if (!(table= table_list->schema_table->create_table(thd, table_list)))
  {
    DBUG_RETURN(1);
  }
  table->s->tmp_table= SYSTEM_TMP_TABLE;
  table->grant.privilege= SELECT_ACL;
  /*
    This test is necessary so that the combination of
    case-insensitive file systems, upper-case table names
    (information schema tables) and views works correctly.
  */
  if (table_list->schema_table_name)
    table->alias_name_used= my_strcasecmp(table_alias_charset,
                                          table_list->schema_table_name,
                                          table_list->alias);
  table_list->table_name= table->s->table_name.str;
  table_list->table_name_length= table->s->table_name.length;
  table_list->table= table;
  table->next= thd->derived_tables;
  thd->derived_tables= table;
  table_list->select_lex->options|= OPTION_SCHEMA_TABLE;
  lex->safe_to_cache_query= 0;

  if (table_list->schema_table_reformed) // show command
  {
    SELECT_LEX *sel= lex->current_select;
    Item *item;
    Field_translator *transl, *org_transl;

    if (table_list->field_translation)
    {
      Field_translator *end= table_list->field_translation_end;
      for (transl= table_list->field_translation; transl < end; transl++)
      {
        if (!transl->item->fixed &&
            transl->item->fix_fields(thd, &transl->item))
          DBUG_RETURN(1);
      }
      DBUG_RETURN(0);
    }
    List_iterator_fast<Item> it(sel->item_list);
    if (!(transl=
          (Field_translator*)(thd->stmt_arena->
                              alloc(sel->item_list.elements *
                                    sizeof(Field_translator)))))
    {
      DBUG_RETURN(1);
    }
    for (org_transl= transl; (item= it++); transl++)
    {
      transl->item= item;
      transl->name= item->name;
      if (!item->fixed && item->fix_fields(thd, &transl->item))
      {
        DBUG_RETURN(1);
      }
    }
    table_list->field_translation= org_transl;
    table_list->field_translation_end= transl;
  }
  DBUG_RETURN(0);
}

/*
  Generate select from information_schema table

  SYNOPSIS
    make_schema_select()
    thd                  thread handler
    sel                  pointer to SELECT_LEX
    schema_table_idx     index of 'schema_tables' element

  RETURN
    0                    success
    1                    error
*/
int make_schema_select(THD *thd, SELECT_LEX *sel,
                       enum enum_schema_tables schema_table_idx)
{
  ST_SCHEMA_TABLE *schema_table= get_schema_table(schema_table_idx);
  LEX_STRING db, table;
  DBUG_ENTER("make_schema_select");
  DBUG_PRINT("enter", ("make_schema_select: %s", schema_table->table_name));
  /*
    We have to make non-const db_name & table_name
    because of lower_case_table_names.
  */
  make_lex_string(thd, &db, information_schema_name.str,
                  information_schema_name.length, 0);
  make_lex_string(thd, &table, schema_table->table_name,
                  strlen(schema_table->table_name), 0);
  if (schema_table->old_format(thd, schema_table) ||   /* Handle old syntax */
      !sel->add_table_to_list(thd, new Table_ident(thd, db, table, 0),
                              0, 0, TL_READ, (List<String> *) 0,
                              (List<String> *) 0))
  {
    DBUG_RETURN(1);
  }
  DBUG_RETURN(0);
}

/*
  Fill temporary schema tables before SELECT

  SYNOPSIS
    get_schema_tables_result()
    join                 join which uses schema tables

  RETURN
    FALSE                success
    TRUE                 error
*/
bool get_schema_tables_result(JOIN *join)
{
  JOIN_TAB *tmp_join_tab= join->join_tab + join->tables;
  THD *thd= join->thd;
  LEX *lex= thd->lex;
  bool result= 0;
  DBUG_ENTER("get_schema_tables_result");

  thd->no_warnings_for_error= 1;
  for (JOIN_TAB *tab= join->join_tab; tab < tmp_join_tab; tab++)
  {
    if (!tab->table || !tab->table->pos_in_table_list)
      break;

    TABLE_LIST *table_list= tab->table->pos_in_table_list;
    if (table_list->schema_table && thd->fill_information_schema_tables())
    {
      bool is_subselect= (&lex->unit != lex->current_select->master_unit() &&
                          lex->current_select->master_unit()->item);
      /*
        If the schema table is already processed and the statement
        is not a subselect, we don't need to handle this table again.
      */
      if (table_list->is_schema_table_processed && !is_subselect)
        continue;

      if (is_subselect)
      {
        table_list->table->file->extra(HA_EXTRA_RESET_STATE);
        table_list->table->file->delete_all_rows();
        free_io_cache(table_list->table);
        filesort_free_buffers(table_list->table);
      }
      else
        table_list->table->file->stats.records= 0;

      if (table_list->schema_table->fill_table(thd, table_list,
                                               tab->select_cond))
        result= 1;
      table_list->is_schema_table_processed= TRUE;
    }
  }
  thd->no_warnings_for_error= 0;
  DBUG_RETURN(result);
}

struct run_hton_fill_schema_files_args
{
  TABLE_LIST *tables;
  COND *cond;
};

static my_bool run_hton_fill_schema_files(THD *thd, st_plugin_int *plugin,
                                          void *arg)
{
  struct run_hton_fill_schema_files_args *args=
    (run_hton_fill_schema_files_args *) arg;
  handlerton *hton= (handlerton *) plugin->data;
  if (hton->fill_files_table)
    hton->fill_files_table(thd, args->tables, args->cond);
  return false;
}
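run_hton_fill_schema_files() is a plugin_foreach callback: the iterator invokes it once per storage-engine plugin, threading a caller-supplied void* context through, and a false return means "keep iterating". A standalone sketch of that callback-with-context pattern (for_each_plugin, sum_visitor and the int "plugins" are stand-ins for illustration, not the server API):

```cpp
// Shared context passed to every callback invocation, like
// run_hton_fill_schema_files_args carries tables and cond.
struct args { int total; };

// Callback contract: return false to continue, true to stop early.
typedef bool (*visit_fn)(int plugin_value, void *arg);

static bool for_each_plugin(const int *plugins, int n, visit_fn fn, void *arg)
{
  for (int i= 0; i < n; i++)
    if (fn(plugins[i], arg))
      return true;              // a callback asked to stop
  return false;
}

// Accumulate into the shared context, like filling the FILES table.
static bool sum_visitor(int v, void *arg)
{
  static_cast<args*>(arg)->total+= v;
  return false;                 // never stop early
}
```

Packing the per-call state into one struct lets the generic iterator stay ignorant of what each visitor actually does.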

int fill_schema_files(THD *thd, TABLE_LIST *tables, COND *cond)
{
  TABLE *table= tables->table;
  DBUG_ENTER("fill_schema_files");

  struct run_hton_fill_schema_files_args args;
  args.tables= tables;
  args.cond= cond;

  plugin_foreach(thd, run_hton_fill_schema_files,
                 MYSQL_STORAGE_ENGINE_PLUGIN, &args);

  DBUG_RETURN(0);
}
  4592. ST_FIELD_INFO schema_fields_info[]=
  4593. {
  4594. {"CATALOG_NAME", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4595. {"SCHEMA_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Database"},
  4596. {"DEFAULT_CHARACTER_SET_NAME", 64, MYSQL_TYPE_STRING, 0, 0, 0},
  4597. {"DEFAULT_COLLATION_NAME", 64, MYSQL_TYPE_STRING, 0, 0, 0},
  4598. {"SQL_PATH", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4599. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4600. };
  4601. ST_FIELD_INFO tables_fields_info[]=
  4602. {
  4603. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4604. {"TABLE_SCHEMA",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4605. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Name"},
  4606. {"TABLE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4607. {"ENGINE", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, "Engine"},
  4608. {"VERSION", 21 , MYSQL_TYPE_LONG, 0, 1, "Version"},
  4609. {"ROW_FORMAT", 10, MYSQL_TYPE_STRING, 0, 1, "Row_format"},
  4610. {"TABLE_ROWS", 21 , MYSQL_TYPE_LONG, 0, 1, "Rows"},
  4611. {"AVG_ROW_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, "Avg_row_length"},
  4612. {"DATA_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, "Data_length"},
  4613. {"MAX_DATA_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, "Max_data_length"},
  4614. {"INDEX_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, "Index_length"},
  4615. {"DATA_FREE", 21 , MYSQL_TYPE_LONG, 0, 1, "Data_free"},
  4616. {"AUTO_INCREMENT", 21 , MYSQL_TYPE_LONG, 0, 1, "Auto_increment"},
  4617. {"CREATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Create_time"},
  4618. {"UPDATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Update_time"},
  4619. {"CHECK_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Check_time"},
  4620. {"TABLE_COLLATION", 64, MYSQL_TYPE_STRING, 0, 1, "Collation"},
  4621. {"CHECKSUM", 21 , MYSQL_TYPE_LONG, 0, 1, "Checksum"},
  4622. {"CREATE_OPTIONS", 255, MYSQL_TYPE_STRING, 0, 1, "Create_options"},
  4623. {"TABLE_COMMENT", 80, MYSQL_TYPE_STRING, 0, 0, "Comment"},
  4624. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4625. };
  4626. ST_FIELD_INFO columns_fields_info[]=
  4627. {
  4628. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4629. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4630. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4631. {"COLUMN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Field"},
  4632. {"ORDINAL_POSITION", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4633. {"COLUMN_DEFAULT", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, "Default"},
  4634. {"IS_NULLABLE", 3, MYSQL_TYPE_STRING, 0, 0, "Null"},
  4635. {"DATA_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4636. {"CHARACTER_MAXIMUM_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4637. {"CHARACTER_OCTET_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4638. {"NUMERIC_PRECISION", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4639. {"NUMERIC_SCALE", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4640. {"CHARACTER_SET_NAME", 64, MYSQL_TYPE_STRING, 0, 1, 0},
  4641. {"COLLATION_NAME", 64, MYSQL_TYPE_STRING, 0, 1, "Collation"},
  4642. {"COLUMN_TYPE", 65535, MYSQL_TYPE_STRING, 0, 0, "Type"},
  4643. {"COLUMN_KEY", 3, MYSQL_TYPE_STRING, 0, 0, "Key"},
  4644. {"EXTRA", 20, MYSQL_TYPE_STRING, 0, 0, "Extra"},
  4645. {"PRIVILEGES", 80, MYSQL_TYPE_STRING, 0, 0, "Privileges"},
  4646. {"COLUMN_COMMENT", 255, MYSQL_TYPE_STRING, 0, 0, "Comment"},
  4647. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4648. };
  4649. ST_FIELD_INFO charsets_fields_info[]=
  4650. {
  4651. {"CHARACTER_SET_NAME", 64, MYSQL_TYPE_STRING, 0, 0, "Charset"},
  4652. {"DEFAULT_COLLATE_NAME", 64, MYSQL_TYPE_STRING, 0, 0, "Default collation"},
  4653. {"DESCRIPTION", 60, MYSQL_TYPE_STRING, 0, 0, "Description"},
  4654. {"MAXLEN", 3 ,MYSQL_TYPE_LONG, 0, 0, "Maxlen"},
  4655. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4656. };
  4657. ST_FIELD_INFO collation_fields_info[]=
  4658. {
  4659. {"COLLATION_NAME", 64, MYSQL_TYPE_STRING, 0, 0, "Collation"},
  4660. {"CHARACTER_SET_NAME", 64, MYSQL_TYPE_STRING, 0, 0, "Charset"},
  4661. {"ID", 11, MYSQL_TYPE_LONG, 0, 0, "Id"},
  4662. {"IS_DEFAULT", 3, MYSQL_TYPE_STRING, 0, 0, "Default"},
  4663. {"IS_COMPILED", 3, MYSQL_TYPE_STRING, 0, 0, "Compiled"},
  4664. {"SORTLEN", 3 ,MYSQL_TYPE_LONG, 0, 0, "Sortlen"},
  4665. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4666. };
  4667. ST_FIELD_INFO engines_fields_info[]=
  4668. {
  4669. {"ENGINE", 64, MYSQL_TYPE_STRING, 0, 0, "Engine"},
  4670. {"SUPPORT", 8, MYSQL_TYPE_STRING, 0, 0, "Support"},
  4671. {"COMMENT", 80, MYSQL_TYPE_STRING, 0, 0, "Comment"},
  4672. {"TRANSACTIONS", 3, MYSQL_TYPE_STRING, 0, 0, "Transactions"},
  4673. {"XA", 3, MYSQL_TYPE_STRING, 0, 0, "XA"},
  4674. {"SAVEPOINTS", 3 ,MYSQL_TYPE_STRING, 0, 0, "Savepoints"},
  4675. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4676. };
  4677. ST_FIELD_INFO events_fields_info[]=
  4678. {
  4679. {"EVENT_CATALOG", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4680. {"EVENT_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Db"},
  4681. {"EVENT_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Name"},
  4682. {"DEFINER", 77, MYSQL_TYPE_STRING, 0, 0, "Definer"},
  4683. {"EVENT_BODY", 8, MYSQL_TYPE_STRING, 0, 0, 0},
  4684. {"EVENT_DEFINITION", 65535, MYSQL_TYPE_STRING, 0, 0, 0},
  4685. {"EVENT_TYPE", 9, MYSQL_TYPE_STRING, 0, 0, "Type"},
  4686. {"EXECUTE_AT", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Execute at"},
  4687. {"INTERVAL_VALUE", 256, MYSQL_TYPE_STRING, 0, 1, "Interval value"},
  4688. {"INTERVAL_FIELD", 18, MYSQL_TYPE_STRING, 0, 1, "Interval field"},
  4689. {"SQL_MODE", 65535, MYSQL_TYPE_STRING, 0, 0, 0},
  4690. {"STARTS", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Starts"},
  4691. {"ENDS", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Ends"},
  4692. {"STATUS", 8, MYSQL_TYPE_STRING, 0, 0, "Status"},
  4693. {"ON_COMPLETION", 12, MYSQL_TYPE_STRING, 0, 0, 0},
  4694. {"CREATED", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, 0},
  4695. {"LAST_ALTERED", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, 0},
  4696. {"LAST_EXECUTED", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, 0},
  4697. {"EVENT_COMMENT", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4698. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4699. };
  4700. ST_FIELD_INFO coll_charset_app_fields_info[]=
  4701. {
  4702. {"COLLATION_NAME", 64, MYSQL_TYPE_STRING, 0, 0, 0},
  4703. {"CHARACTER_SET_NAME", 64, MYSQL_TYPE_STRING, 0, 0, 0},
  4704. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4705. };
  4706. ST_FIELD_INFO proc_fields_info[]=
  4707. {
  4708. {"SPECIFIC_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4709. {"ROUTINE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4710. {"ROUTINE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Db"},
  4711. {"ROUTINE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Name"},
  4712. {"ROUTINE_TYPE", 9, MYSQL_TYPE_STRING, 0, 0, "Type"},
  4713. {"DTD_IDENTIFIER", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4714. {"ROUTINE_BODY", 8, MYSQL_TYPE_STRING, 0, 0, 0},
  4715. {"ROUTINE_DEFINITION", 65535, MYSQL_TYPE_STRING, 0, 0, 0},
  4716. {"EXTERNAL_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4717. {"EXTERNAL_LANGUAGE", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4718. {"PARAMETER_STYLE", 8, MYSQL_TYPE_STRING, 0, 0, 0},
  4719. {"IS_DETERMINISTIC", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4720. {"SQL_DATA_ACCESS", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4721. {"SQL_PATH", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4722. {"SECURITY_TYPE", 7, MYSQL_TYPE_STRING, 0, 0, "Security_type"},
  4723. {"CREATED", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, "Created"},
  4724. {"LAST_ALTERED", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, "Modified"},
  4725. {"SQL_MODE", 65535, MYSQL_TYPE_STRING, 0, 0, 0},
  4726. {"ROUTINE_COMMENT", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Comment"},
  4727. {"DEFINER", 77, MYSQL_TYPE_STRING, 0, 0, "Definer"},
  4728. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4729. };
  4730. ST_FIELD_INFO stat_fields_info[]=
  4731. {
  4732. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4733. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4734. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table"},
  4735. {"NON_UNIQUE", 1, MYSQL_TYPE_LONG, 0, 0, "Non_unique"},
  4736. {"INDEX_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4737. {"INDEX_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Key_name"},
  4738. {"SEQ_IN_INDEX", 2, MYSQL_TYPE_LONG, 0, 0, "Seq_in_index"},
  4739. {"COLUMN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Column_name"},
  4740. {"COLLATION", 1, MYSQL_TYPE_STRING, 0, 1, "Collation"},
  4741. {"CARDINALITY", 21, MYSQL_TYPE_LONG, 0, 1, "Cardinality"},
  4742. {"SUB_PART", 3, MYSQL_TYPE_LONG, 0, 1, "Sub_part"},
  4743. {"PACKED", 10, MYSQL_TYPE_STRING, 0, 1, "Packed"},
  4744. {"NULLABLE", 3, MYSQL_TYPE_STRING, 0, 0, "Null"},
  4745. {"INDEX_TYPE", 16, MYSQL_TYPE_STRING, 0, 0, "Index_type"},
  4746. {"COMMENT", 16, MYSQL_TYPE_STRING, 0, 1, "Comment"},
  4747. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4748. };
  4749. ST_FIELD_INFO view_fields_info[]=
  4750. {
  4751. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4752. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4753. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4754. {"VIEW_DEFINITION", 65535, MYSQL_TYPE_STRING, 0, 0, 0},
  4755. {"CHECK_OPTION", 8, MYSQL_TYPE_STRING, 0, 0, 0},
  4756. {"IS_UPDATABLE", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4757. {"DEFINER", 77, MYSQL_TYPE_STRING, 0, 0, 0},
  4758. {"SECURITY_TYPE", 7, MYSQL_TYPE_STRING, 0, 0, 0},
  4759. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4760. };
  4761. ST_FIELD_INFO user_privileges_fields_info[]=
  4762. {
  4763. {"GRANTEE", 81, MYSQL_TYPE_STRING, 0, 0, 0},
  4764. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4765. {"PRIVILEGE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4766. {"IS_GRANTABLE", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4767. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4768. };
  4769. ST_FIELD_INFO schema_privileges_fields_info[]=
  4770. {
  4771. {"GRANTEE", 81, MYSQL_TYPE_STRING, 0, 0, 0},
  4772. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4773. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4774. {"PRIVILEGE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4775. {"IS_GRANTABLE", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4776. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4777. };
  4778. ST_FIELD_INFO table_privileges_fields_info[]=
  4779. {
  4780. {"GRANTEE", 81, MYSQL_TYPE_STRING, 0, 0, 0},
  4781. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4782. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4783. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4784. {"PRIVILEGE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4785. {"IS_GRANTABLE", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4786. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4787. };
  4788. ST_FIELD_INFO column_privileges_fields_info[]=
  4789. {
  4790. {"GRANTEE", 81, MYSQL_TYPE_STRING, 0, 0, 0},
  4791. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4792. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4793. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4794. {"COLUMN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4795. {"PRIVILEGE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4796. {"IS_GRANTABLE", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4797. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4798. };
  4799. ST_FIELD_INFO table_constraints_fields_info[]=
  4800. {
  4801. {"CONSTRAINT_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4802. {"CONSTRAINT_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4803. {"CONSTRAINT_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4804. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4805. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4806. {"CONSTRAINT_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4807. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4808. };
  4809. ST_FIELD_INFO key_column_usage_fields_info[]=
  4810. {
  4811. {"CONSTRAINT_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4812. {"CONSTRAINT_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4813. {"CONSTRAINT_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4814. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4815. {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4816. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4817. {"COLUMN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4818. {"ORDINAL_POSITION", 10 ,MYSQL_TYPE_LONG, 0, 0, 0},
  4819. {"POSITION_IN_UNIQUE_CONSTRAINT", 10 ,MYSQL_TYPE_LONG, 0, 1, 0},
  4820. {"REFERENCED_TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4821. {"REFERENCED_TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4822. {"REFERENCED_COLUMN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4823. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4824. };
  4825. ST_FIELD_INFO table_names_fields_info[]=
  4826. {
  4827. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4828. {"TABLE_SCHEMA",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4829. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Tables_in_"},
  4830. {"TABLE_TYPE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table_type"},
  4831. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4832. };
  4833. ST_FIELD_INFO open_tables_fields_info[]=
  4834. {
  4835. {"Database", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Database"},
  4836. {"Table",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table"},
  4837. {"In_use", 1, MYSQL_TYPE_LONG, 0, 0, "In_use"},
  4838. {"Name_locked", 4, MYSQL_TYPE_LONG, 0, 0, "Name_locked"},
  4839. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4840. };
  4841. ST_FIELD_INFO triggers_fields_info[]=
  4842. {
  4843. {"TRIGGER_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4844. {"TRIGGER_SCHEMA",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4845. {"TRIGGER_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Trigger"},
  4846. {"EVENT_MANIPULATION", 6, MYSQL_TYPE_STRING, 0, 0, "Event"},
  4847. {"EVENT_OBJECT_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4848. {"EVENT_OBJECT_SCHEMA",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4849. {"EVENT_OBJECT_TABLE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table"},
  4850. {"ACTION_ORDER", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  4851. {"ACTION_CONDITION", 65535, MYSQL_TYPE_STRING, 0, 1, 0},
  4852. {"ACTION_STATEMENT", 65535, MYSQL_TYPE_STRING, 0, 0, "Statement"},
  4853. {"ACTION_ORIENTATION", 9, MYSQL_TYPE_STRING, 0, 0, 0},
  4854. {"ACTION_TIMING", 6, MYSQL_TYPE_STRING, 0, 0, "Timing"},
  4855. {"ACTION_REFERENCE_OLD_TABLE", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4856. {"ACTION_REFERENCE_NEW_TABLE", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4857. {"ACTION_REFERENCE_OLD_ROW", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4858. {"ACTION_REFERENCE_NEW_ROW", 3, MYSQL_TYPE_STRING, 0, 0, 0},
  4859. {"CREATED", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Created"},
  4860. {"SQL_MODE", 65535, MYSQL_TYPE_STRING, 0, 0, "sql_mode"},
  4861. {"DEFINER", 65535, MYSQL_TYPE_STRING, 0, 0, "Definer"},
  4862. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4863. };
  4864. ST_FIELD_INFO partitions_fields_info[]=
  4865. {
  4866. {"TABLE_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4867. {"TABLE_SCHEMA",NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4868. {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4869. {"PARTITION_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4870. {"SUBPARTITION_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4871. {"PARTITION_ORDINAL_POSITION", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4872. {"SUBPARTITION_ORDINAL_POSITION", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4873. {"PARTITION_METHOD", 12, MYSQL_TYPE_STRING, 0, 1, 0},
  4874. {"SUBPARTITION_METHOD", 12, MYSQL_TYPE_STRING, 0, 1, 0},
  4875. {"PARTITION_EXPRESSION", 65535, MYSQL_TYPE_STRING, 0, 1, 0},
  4876. {"SUBPARTITION_EXPRESSION", 65535, MYSQL_TYPE_STRING, 0, 1, 0},
  4877. {"PARTITION_DESCRIPTION", 65535, MYSQL_TYPE_STRING, 0, 1, 0},
  4878. {"TABLE_ROWS", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4879. {"AVG_ROW_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4880. {"DATA_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4881. {"MAX_DATA_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4882. {"INDEX_LENGTH", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4883. {"DATA_FREE", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4884. {"CREATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, 0},
  4885. {"UPDATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, 0},
  4886. {"CHECK_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, 0},
  4887. {"CHECKSUM", 21 , MYSQL_TYPE_LONG, 0, 1, 0},
  4888. {"PARTITION_COMMENT", 80, MYSQL_TYPE_STRING, 0, 0, 0},
  4889. {"NODEGROUP", 21 , MYSQL_TYPE_LONG, 0, 0, 0},
  4890. {"TABLESPACE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  4891. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4892. };
  4893. ST_FIELD_INFO variables_fields_info[]=
  4894. {
  4895. {"Variable_name", 80, MYSQL_TYPE_STRING, 0, 0, "Variable_name"},
  4896. {"Value", 255, MYSQL_TYPE_STRING, 0, 0, "Value"},
  4897. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4898. };
  4899. ST_FIELD_INFO processlist_fields_info[]=
  4900. {
  4901. {"ID", 4, MYSQL_TYPE_LONG, 0, 0, "Id"},
  4902. {"USER", 16, MYSQL_TYPE_STRING, 0, 0, "User"},
  4903. {"HOST", LIST_PROCESS_HOST_LEN, MYSQL_TYPE_STRING, 0, 0, "Host"},
  4904. {"DB", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, "Db"},
  4905. {"COMMAND", 16, MYSQL_TYPE_STRING, 0, 0, "Command"},
  4906. {"TIME", 7, MYSQL_TYPE_LONG, 0, 0, "Time"},
  4907. {"STATE", 30, MYSQL_TYPE_STRING, 0, 1, "State"},
  4908. {"INFO", PROCESS_LIST_INFO_WIDTH, MYSQL_TYPE_STRING, 0, 1, "Info"},
  4909. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4910. };
  4911. ST_FIELD_INFO plugin_fields_info[]=
  4912. {
  4913. {"PLUGIN_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Name"},
  4914. {"PLUGIN_VERSION", 20, MYSQL_TYPE_STRING, 0, 0, 0},
  4915. {"PLUGIN_STATUS", 10, MYSQL_TYPE_STRING, 0, 0, "Status"},
  4916. {"PLUGIN_TYPE", 80, MYSQL_TYPE_STRING, 0, 0, "Type"},
  4917. {"PLUGIN_TYPE_VERSION", 20, MYSQL_TYPE_STRING, 0, 0, 0},
  4918. {"PLUGIN_LIBRARY", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, "Library"},
  4919. {"PLUGIN_LIBRARY_VERSION", 20, MYSQL_TYPE_STRING, 0, 1, 0},
  4920. {"PLUGIN_AUTHOR", NAME_LEN, MYSQL_TYPE_STRING, 0, 1, 0},
  4921. {"PLUGIN_DESCRIPTION", 65535, MYSQL_TYPE_STRING, 0, 1, 0},
  4922. {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
  4923. };

ST_FIELD_INFO files_fields_info[]=
{
  {"FILE_ID", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"FILE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"FILE_TYPE", 20, MYSQL_TYPE_STRING, 0, 0, 0},
  {"TABLESPACE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"TABLE_CATALOG", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"LOGFILE_GROUP_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"LOGFILE_GROUP_NUMBER", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"ENGINE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"FULLTEXT_KEYS", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"DELETED_ROWS", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"UPDATE_COUNT", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"FREE_EXTENTS", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"TOTAL_EXTENTS", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"EXTENT_SIZE", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"INITIAL_SIZE", 21, MYSQL_TYPE_LONG, 0, 0, 0},
  {"MAXIMUM_SIZE", 21, MYSQL_TYPE_LONG, 0, 0, 0},
  {"AUTOEXTEND_SIZE", 21, MYSQL_TYPE_LONG, 0, 0, 0},
  {"CREATION_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, 0},
  {"LAST_UPDATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, 0},
  {"LAST_ACCESS_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 0, 0},
  {"RECOVER_TIME", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"TRANSACTION_COUNTER", 4, MYSQL_TYPE_LONG, 0, 0, 0},
  {"VERSION", 21, MYSQL_TYPE_LONG, 0, 1, "Version"},
  {"ROW_FORMAT", 10, MYSQL_TYPE_STRING, 0, 1, "Row_format"},
  {"TABLE_ROWS", 21, MYSQL_TYPE_LONG, 0, 1, "Rows"},
  {"AVG_ROW_LENGTH", 21, MYSQL_TYPE_LONG, 0, 1, "Avg_row_length"},
  {"DATA_LENGTH", 21, MYSQL_TYPE_LONG, 0, 1, "Data_length"},
  {"MAX_DATA_LENGTH", 21, MYSQL_TYPE_LONG, 0, 1, "Max_data_length"},
  {"INDEX_LENGTH", 21, MYSQL_TYPE_LONG, 0, 1, "Index_length"},
  {"DATA_FREE", 21, MYSQL_TYPE_LONG, 0, 1, "Data_free"},
  {"CREATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Create_time"},
  {"UPDATE_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Update_time"},
  {"CHECK_TIME", 0, MYSQL_TYPE_TIMESTAMP, 0, 1, "Check_time"},
  {"CHECKSUM", 21, MYSQL_TYPE_LONG, 0, 1, "Checksum"},
  {"STATUS", 20, MYSQL_TYPE_STRING, 0, 0, 0},
  {"EXTRA", 255, MYSQL_TYPE_STRING, 0, 0, 0},
  {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
};

ST_FIELD_INFO referential_constraints_fields_info[]=
{
  {"CONSTRAINT_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  {"CONSTRAINT_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"CONSTRAINT_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"UNIQUE_CONSTRAINT_CATALOG", FN_REFLEN, MYSQL_TYPE_STRING, 0, 1, 0},
  {"UNIQUE_CONSTRAINT_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"UNIQUE_CONSTRAINT_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"MATCH_OPTION", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"UPDATE_RULE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"DELETE_RULE", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, 0},
  {0, 0, MYSQL_TYPE_STRING, 0, 0, 0}
};

/*
  Description of ST_FIELD_INFO in table.h

  Make sure that the order of schema_tables and enum_schema_tables is the same.
*/

ST_SCHEMA_TABLE schema_tables[]=
{
  {"CHARACTER_SETS", charsets_fields_info, create_schema_table,
   fill_schema_charsets, make_character_sets_old_format, 0, -1, -1, 0},
  {"COLLATIONS", collation_fields_info, create_schema_table,
   fill_schema_collation, make_old_format, 0, -1, -1, 0},
  {"COLLATION_CHARACTER_SET_APPLICABILITY", coll_charset_app_fields_info,
   create_schema_table, fill_schema_coll_charset_app, 0, 0, -1, -1, 0},
  {"COLUMNS", columns_fields_info, create_schema_table,
   get_all_tables, make_columns_old_format, get_schema_column_record, 1, 2, 0},
  {"COLUMN_PRIVILEGES", column_privileges_fields_info, create_schema_table,
   fill_schema_column_privileges, 0, 0, -1, -1, 0},
  {"ENGINES", engines_fields_info, create_schema_table,
   fill_schema_engines, make_old_format, 0, -1, -1, 0},
  {"EVENTS", events_fields_info, create_schema_table,
   fill_schema_events, make_old_format, 0, -1, -1, 0},
  {"FILES", files_fields_info, create_schema_table,
   fill_schema_files, 0, 0, -1, -1, 0},
  {"KEY_COLUMN_USAGE", key_column_usage_fields_info, create_schema_table,
   get_all_tables, 0, get_schema_key_column_usage_record, 4, 5, 0},
  {"OPEN_TABLES", open_tables_fields_info, create_schema_table,
   fill_open_tables, make_old_format, 0, -1, -1, 1},
  {"PARTITIONS", partitions_fields_info, create_schema_table,
   get_all_tables, 0, get_schema_partitions_record, 1, 2, 0},
  {"PLUGINS", plugin_fields_info, create_schema_table,
   fill_plugins, make_old_format, 0, -1, -1, 0},
  {"PROCESSLIST", processlist_fields_info, create_schema_table,
   fill_schema_processlist, make_old_format, 0, -1, -1, 0},
  {"REFERENTIAL_CONSTRAINTS", referential_constraints_fields_info,
   create_schema_table, get_all_tables, 0, get_referential_constraints_record,
   1, 9, 0},
  {"ROUTINES", proc_fields_info, create_schema_table,
   fill_schema_proc, make_proc_old_format, 0, -1, -1, 0},
  {"SCHEMATA", schema_fields_info, create_schema_table,
   fill_schema_shemata, make_schemata_old_format, 0, 1, -1, 0},
  {"SCHEMA_PRIVILEGES", schema_privileges_fields_info, create_schema_table,
   fill_schema_schema_privileges, 0, 0, -1, -1, 0},
  {"STATISTICS", stat_fields_info, create_schema_table,
   get_all_tables, make_old_format, get_schema_stat_record, 1, 2, 0},
  {"STATUS", variables_fields_info, create_schema_table, fill_status,
   make_old_format, 0, -1, -1, 1},
  {"TABLES", tables_fields_info, create_schema_table,
   get_all_tables, make_old_format, get_schema_tables_record, 1, 2, 0},
  {"TABLE_CONSTRAINTS", table_constraints_fields_info, create_schema_table,
   get_all_tables, 0, get_schema_constraints_record, 3, 4, 0},
  {"TABLE_NAMES", table_names_fields_info, create_schema_table,
   get_all_tables, make_table_names_old_format, 0, 1, 2, 1},
  {"TABLE_PRIVILEGES", table_privileges_fields_info, create_schema_table,
   fill_schema_table_privileges, 0, 0, -1, -1, 0},
  {"TRIGGERS", triggers_fields_info, create_schema_table,
   get_all_tables, make_old_format, get_schema_triggers_record, 5, 6, 0},
  {"USER_PRIVILEGES", user_privileges_fields_info, create_schema_table,
   fill_schema_user_privileges, 0, 0, -1, -1, 0},
  {"VARIABLES", variables_fields_info, create_schema_table, fill_variables,
   make_old_format, 0, -1, -1, 1},
  {"VIEWS", view_fields_info, create_schema_table,
   get_all_tables, 0, get_schema_views_record, 1, 2, 0},
  {0, 0, 0, 0, 0, 0, 0, 0, 0}
};

#ifdef HAVE_EXPLICIT_TEMPLATE_INSTANTIATION
template class List_iterator_fast<char>;
template class List<char>;
#endif