A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes).

The legend: on a replication slave, when a trigger creation was filtered out by a replicate-do-table/replicate-ignore-table rule, the parsed definition of the trigger was not cleaned up properly: the LEX::sphead member was left around and leaked memory. Until the patch for Bug 24478 implemented support of replicate-ignore-table rules for triggers, "case SQLCOM_CREATE_TRIGGER" was always executed once a trigger had been parsed, so the deletion of lex->sphead there worked and the memory did not leak.

The fix: the real cause of the bug is that there is no single place (or even two places) where we can clean up the main LEX after parsing. The reason we cannot have just one or two such places is the asymmetric behaviour of MYSQLparse on success versus error. One of the root causes of this behaviour is the code in the Item::Item() constructor: there, a newly created item adds itself to THD::free_list, a singly linked list of Items used in a statement. Yuck. This code is unaware that more than one statement may be active at a time, and always assumes that the free_list of the current statement is located in THD::free_list. One day we need to be able to explicitly allocate an item in a given Query_arena.

Thus, when parsing the definition of a stored procedure such as
  CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END;
we actually need to reset THD::mem_root, THD::free_list and THD::lex to parse each nested procedure statement (the SELECTs). The actual reset and restore are implemented in semantic actions attached to the sp_proc_stmt grammar rule. The problem is that on a parse error inside a nested statement the Bison-generated parser aborts immediately, without executing the restore part of the semantic action, leaving THD in an in-the-middle-of-parsing state. This is why we could not have a single place to clean up the LEX after MYSQLparse: on error we had to clean up immediately, while on success the cleanup could be delayed. This left the door open for a memory leak.

The following possibilities were considered when working on a fix:
- patch the replication logic to do the cleanup. Rejected: it breaks module borders; replication code should not need to know the gory details of the cleanup procedure after CREATE TRIGGER.
- wrap MYSQLparse in a function that does the cleanup. Rejected: ideally we should fix the problem where it happens, not compensate for it outside of the problematic code.
- make sure MYSQLparse cleans up after itself by invoking the cleanup functionality in the appropriate places before returning. Implemented in this patch.
- use a %destructor rule for sp_proc_stmt to restore THD. Cleaner than the previous approach, but rejected because it needs a careful analysis of side effects, this patch is for 5.0, and in the long term we need the next alternative anyway.
- make sure that sp_proc_stmt does not juggle with THD. This is a large piece of work that will affect many modules.

Cleanup: move main_lex and main_mem_root from Statement to its only two descendants, Prepared_statement and THD. This ensures that when a Statement instance is created for the purpose of statement backup, we do not involve the LEX constructor/destructor, which is fairly expensive. To verify that the transformation preserves the original functionality, check the respective constructors and destructors of Statement, Prepared_statement and THD: these members were used only there. This cleanup is unrelated to the patch.
19 years ago
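For illustration, a minimal sketch of the reset/restore pattern described in the commit message above. The Parse_state_backup helper name is an assumption made for this sketch; the actual 5.0 code manipulates THD::mem_root, THD::free_list and THD::lex directly in the semantic actions attached to sp_proc_stmt.

  /*
    Sketch only: the per-nested-statement parser state that must be saved
    before parsing an sp_proc_stmt and restored afterwards - on the error
    path as well, otherwise lex->sphead can be leaked (the Bug#26750 case).
  */
  struct Parse_state_backup
  {
    MEM_ROOT *mem_root;                   /* saved THD::mem_root  */
    Item     *free_list;                  /* saved THD::free_list */
    LEX      *lex;                        /* saved THD::lex       */

    void save(THD *thd)
    {
      mem_root=  thd->mem_root;
      free_list= thd->free_list;
      lex=       thd->lex;
    }
    void restore(THD *thd)
    {
      thd->mem_root=  mem_root;
      thd->free_list= free_list;
      thd->lex=       lex;
    }
  };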
WL#3817: Simplify string / memory area types and make things more consistent (first part)

The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char*
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed the declarations of byte, gptr, my_string, my_size_t and size_s.

The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to void*, as this requires fewer casts in the code and is more in line with how the standard functions work.
- Added an extra length argument to dirname_part() to return the length of the created string.
- Changed (at least) the following functions to take uchar* as argument: db_dump(), my_net_write(), net_write_command(), net_store_data(), DBUG_DUMP(), decimal2bin() and bin2decimal().
- Changed my_compress() and my_uncompress() to use size_t. Changed one argument of my_uncompress() from a pointer to a value, as we only return one value (makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed the type of the 'frmdata' argument in readfrm(), writefrm(), ha_discover and handler::discover() to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.

Other changes:
- Removed a lot of unneeded casts and added a few new casts required by other changes.
- Added some casts to my_multi_malloc() arguments for safety (as string lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This had to be done explicitly, as the conflict was often hidden by casting the function to hash_get_key.)
- Changed some buffer/memory-region types to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar.
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to size_t. This was needed to properly detect errors (which are returned as (size_t) -1).
- Removed some very old VMS code.
- Changed packfrm()/unpackfrm() to not depend on uint size (portability fix).
- Removed Windows-specific code to restore the cursor position, as it causes a slowdown on Windows and we should not mix read() and pread() calls anyway, since that is not thread safe. Updated the function comment to reflect this. Changed the one function that depended on the original behaviour of my_pwrite() to restore the cursor position itself.
- Added some missing checks of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD). Inlined max_row_length_blob() into this function.
- More function comments.
- Fixed some compiler warnings when compiling without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root() + memcpy() with strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix).
  - Simpler loops in client-simple.c.
19 years ago
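For illustration, a hedged before/after sketch of the direction of these signature changes. The function below is invented for the example; the patch itself changes the mysys/strings, vio, protocol and Field functions listed above.

  /* Before: gptr/byte buffers, uint lengths (net_copy_chunk is hypothetical) */
  gptr net_copy_chunk(gptr to, const byte *from, uint length);

  /* After: uchar pointers, size_t lengths; functions returning an I/O result
     now use size_t, with (size_t) -1 (MY_FILE_ERROR) meaning failure */
  uchar *net_copy_chunk(uchar *to, const uchar *from, size_t length);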
This changeset is largely a handler cleanup changeset (WL#3281), but it includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines. (Note that all changes are very straightforward; one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite.)
- New optional handler function introduced: reset(). This is called after every DML statement so that a handler can easily do statement-specific cleanups. (The only case where it is not called is when we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap of all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap of all columns that will be updated in the query. write_row() and update_row() only need to update these columns. The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() through a call to the virtual function handler::column_bitmaps_signal(). If the handler caches these bitmaps (instead of using table->read_set and table->write_set directly), it should redo the caching there. As the signal may be sent several times, it is probably best to set a flag in column_bitmaps_signal() and redo the caching in read_row() / write_row() if the flag was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see whether a field is used in the query.
- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, it should install a temporary all-columns-used map while doing so. For this, we provide the following functions:
    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
    field->val();
    dbug_tmp_restore_column_map(table->read_set, old_map);
  and similarly for the write map:
    my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
    field->store(...);
    dbug_tmp_restore_column_map(table->write_set, old_map);
  If this is not done, you will sooner or later hit a DBUG_ASSERT in the Field store() / val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the dbug_ variants above.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This required some trivial variable name changes in the extra() functions.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added for caching virtual constants from the engine.
- Removed the has_transactions() virtual method. One should now instead return HA_NO_TRANSACTIONS in table_flags() if the table handler does NOT support transactions.
- The 'xxxx_create_handler()' functions now have a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:
    static handler *myisam_create_handler(TABLE_SHARE *table)
    { return new ha_myisam(table); }
  ->
    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    { return new (mem_root) ha_myisam(table); }
- New optional virtual function: use_hidden_primary_key(). This is called on update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions so that it can remember any hidden primary key and be able to update/delete any found row. The default implementation marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update, we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the no-longer-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set up at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before, in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns for which we need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and in other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read.
  - def_write_set: default bitmap for columns to be written.
  - tmp_set: can be used as a temporary bitmap when needed.
  The TABLE object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The COUNT(*) optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT; keep them in the table cache instead. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps(): initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set(): set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal(): for the few cases where we need to set up new column bitmaps but not signal the handler (as the handler has already been signalled about them before). Currently used only in opt_range.cc when doing ROR scans.
- table->use_all_columns(): install a bitmap where all columns are marked as used in both the read and the write set.
- table->default_column_bitmaps(): install the normal read and write column bitmaps without signalling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert(): allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position(): allows us to tell the handler that it needs to read the primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column(): tell the handler that we are going to update columns that are part of an auto_increment key.
- table->mark_columns_used_by_index(): mark all columns that are part of an index. It also sends extra(HA_EXTRA_KEYREAD) to the handler so that it quickly knows it only needs to read the columns that are part of the key. (The handler can also use the column map to detect this, but simpler/faster handlers can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset(): in addition to already-marked columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index(): restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map(): functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it just wants to call Field::store() and Field::val() and does not need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write_set. The reason is that if we have a column marked only for write, we cannot know at the MySQL level whether its value changed. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we pass the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). This could potentially result in wrong values being inserted in table handlers that relied on the old column maps or field->query_id being correct. Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added a missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no-longer-needed item function reset_query_id_processor().
- Field->add_index is removed. This is now maintained in (field->flags & FIELD_IN_ADD_INDEX) instead.
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed the column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines found to be too long.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the test case binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
20 years ago
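For illustration, a minimal sketch of how a storage engine might follow the new column-bitmap and table_flags() conventions described in the commit message above. ha_example, exact_row_count, read_next_physical_row() and unpack_column() are assumptions made for this sketch, not names from the patch.

  ulonglong ha_example::table_flags() const
  {
    /* table_flags() now returns ulonglong; this engine provides an exact
       row count through ::records() and may skip columns it does not need */
    return HA_HAS_RECORDS | HA_STATS_RECORDS_IS_EXACT | HA_PARTIAL_COLUMN_READ;
  }

  ha_rows ha_example::records()
  {
    return (ha_rows) exact_row_count;   /* hypothetical engine-internal counter */
  }

  int ha_example::rnd_next(uchar *buf)
  {
    if (read_next_physical_row(buf))    /* hypothetical engine helper */
      return HA_ERR_END_OF_FILE;
    for (Field **field= table->field; *field; field++)
    {
      /* HA_PARTIAL_COLUMN_READ: only unpack the columns the query needs,
         as indicated by table->read_set */
      if (bitmap_is_set(table->read_set, (*field)->field_index))
        unpack_column(*field, buf);     /* hypothetical engine helper */
    }
    return 0;
  }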
23 years ago
23 years ago
23 years ago
WL#3817: Simplify string / memory area types and make things more consistent (first part) The following type conversions was done: - Changed byte to uchar - Changed gptr to uchar* - Change my_string to char * - Change my_size_t to size_t - Change size_s to size_t Removed declaration of byte, gptr, my_string, my_size_t and size_s. Following function parameter changes was done: - All string functions in mysys/strings was changed to use size_t instead of uint for string lengths. - All read()/write() functions changed to use size_t (including vio). - All protocoll functions changed to use size_t instead of uint - Functions that used a pointer to a string length was changed to use size_t* - Changed malloc(), free() and related functions from using gptr to use void * as this requires fewer casts in the code and is more in line with how the standard functions work. - Added extra length argument to dirname_part() to return the length of the created string. - Changed (at least) following functions to take uchar* as argument: - db_dump() - my_net_write() - net_write_command() - net_store_data() - DBUG_DUMP() - decimal2bin() & bin2decimal() - Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value as we only return one value (makes function easier to use). - Changed type of 'pack_data' argument to packfrm() to avoid casts. - Changed in readfrm() and writefrom(), ha_discover and handler::discover() the type for argument 'frmdata' to uchar** to avoid casts. - Changed most Field functions to use uchar* instead of char* (reduced a lot of casts). - Changed field->val_xxx(xxx, new_ptr) to take const pointers. Other changes: - Removed a lot of not needed casts - Added a few new cast required by other changes - Added some cast to my_multi_malloc() arguments for safety (as string lengths needs to be uint, not size_t). - Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done explicitely as this conflict was often hided by casting the function to hash_get_key). - Changed some buffers to memory regions to uchar* to avoid casts. - Changed some string lengths from uint to size_t. - Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts. - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar - Include zlib.h in some files as we needed declaration of crc32() - Changed MY_FILE_ERROR to be (size_t) -1. - Changed many variables to hold the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1). - Removed some very old VMS code - Changed packfrm()/unpackfrm() to not be depending on uint size (portability fix) - Removed windows specific code to restore cursor position as this causes slowdown on windows and we should not mix read() and pread() calls anyway as this is not thread safe. Updated function comment to reflect this. Changed function that depended on original behavior of my_pwrite() to itself restore the cursor position (one such case). - Added some missing checking of return value of malloc(). - Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow. - Changed type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length. - Moved THD::max_row_length() to table.cc (as it's not depending on THD). Inlined max_row_length_blob() into this function. 
- More function comments - Fixed some compiler warnings when compiled without partitions. - Removed setting of LEX_STRING() arguments in declaration (portability fix). - Some trivial indentation/variable name changes. - Some trivial code simplifications: - Replaced some calls to alloc_root + memcpy to use strmake_root()/strdup_root(). - Changed some calls from memdup() to strmake() (Safety fix) - Simpler loops in client-simple.c
19 years ago
22 years ago
22 years ago
22 years ago
21 years ago
21 years ago
WL#3817: Simplify string / memory area types and make things more consistent (first part) The following type conversions was done: - Changed byte to uchar - Changed gptr to uchar* - Change my_string to char * - Change my_size_t to size_t - Change size_s to size_t Removed declaration of byte, gptr, my_string, my_size_t and size_s. Following function parameter changes was done: - All string functions in mysys/strings was changed to use size_t instead of uint for string lengths. - All read()/write() functions changed to use size_t (including vio). - All protocoll functions changed to use size_t instead of uint - Functions that used a pointer to a string length was changed to use size_t* - Changed malloc(), free() and related functions from using gptr to use void * as this requires fewer casts in the code and is more in line with how the standard functions work. - Added extra length argument to dirname_part() to return the length of the created string. - Changed (at least) following functions to take uchar* as argument: - db_dump() - my_net_write() - net_write_command() - net_store_data() - DBUG_DUMP() - decimal2bin() & bin2decimal() - Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value as we only return one value (makes function easier to use). - Changed type of 'pack_data' argument to packfrm() to avoid casts. - Changed in readfrm() and writefrom(), ha_discover and handler::discover() the type for argument 'frmdata' to uchar** to avoid casts. - Changed most Field functions to use uchar* instead of char* (reduced a lot of casts). - Changed field->val_xxx(xxx, new_ptr) to take const pointers. Other changes: - Removed a lot of not needed casts - Added a few new cast required by other changes - Added some cast to my_multi_malloc() arguments for safety (as string lengths needs to be uint, not size_t). - Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done explicitely as this conflict was often hided by casting the function to hash_get_key). - Changed some buffers to memory regions to uchar* to avoid casts. - Changed some string lengths from uint to size_t. - Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts. - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar - Include zlib.h in some files as we needed declaration of crc32() - Changed MY_FILE_ERROR to be (size_t) -1. - Changed many variables to hold the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1). - Removed some very old VMS code - Changed packfrm()/unpackfrm() to not be depending on uint size (portability fix) - Removed windows specific code to restore cursor position as this causes slowdown on windows and we should not mix read() and pread() calls anyway as this is not thread safe. Updated function comment to reflect this. Changed function that depended on original behavior of my_pwrite() to itself restore the cursor position (one such case). - Added some missing checking of return value of malloc(). - Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow. - Changed type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length. - Moved THD::max_row_length() to table.cc (as it's not depending on THD). Inlined max_row_length_blob() into this function. 
- More function comments - Fixed some compiler warnings when compiled without partitions. - Removed setting of LEX_STRING() arguments in declaration (portability fix). - Some trivial indentation/variable name changes. - Some trivial code simplifications: - Replaced some calls to alloc_root + memcpy to use strmake_root()/strdup_root(). - Changed some calls from memdup() to strmake() (Safety fix) - Simpler loops in client-simple.c
19 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. - handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. 
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. 
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. - The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. (The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). 
- table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. 
- For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. - Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. - handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. 
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value. It only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.) A sketch follows after this list.
- Non-virtual handler::init() function added for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:

    static handler *myisam_create_handler(TABLE_SHARE *table)
    {
      return new ha_myisam(table);
    }

  to:

    static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
    {
      return new (mem_root) ha_myisam(table);
    }

- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions so that it remembers any hidden primary key and is able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS                      Set if ::records() is supported.
  - HA_NO_TRANSACTIONS                  Set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE  Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ              Set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS  Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS                         Renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE  Set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark for read all keys for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT           Set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT                  One should now instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME                    Removed (no one supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we actually need a value.
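To illustrate the records() / HA_HAS_RECORDS interplay described above, here is a minimal hedged sketch of what an engine that keeps an exact row count might do. The engine name ha_example is hypothetical and the exact signatures may differ slightly from the real headers:

    ulonglong ha_example::table_flags() const
    {
      /* Engine maintains an exact row count, so advertise both flags */
      return HA_HAS_RECORDS | HA_STATS_RECORDS_IS_EXACT;
    }

    ha_rows ha_example::records()
    {
      return stats.records;      /* exact count kept up to date by the engine */
    }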
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes one of the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ or MARK_COLUMNS_WRITE. All variables named 'set_query_id' were also renamed to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used by table->use_all_columns() and in other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set   Default bitmap for columns to be read.
  - def_write_set  Default bitmap for columns to be written.
  - tmp_set        Can be used as a temporary bitmap when needed.
  The table object also has two bitmap pointers, read_set and write_set, that the handler should use to find out which columns are used in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked as used in the read and the write set. (See the sketch after this list.)
- table->default_column_bitmaps() to install the normal read and write column bitmaps, without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert() to allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position() to allow us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column() to tell the handler that we are going to update columns that are part of any auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let it quickly know that it only needs to read the columns that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handlers can just monitor the extra() call.)
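A small hedged sketch of the use_all_columns() / default_column_bitmaps() pair named above. The surrounding helper dump_whole_row() is invented purely for illustration; only interfaces listed in this changeset are called:

    /* Temporarily treat every column as used, then go back to the
       statement's default bitmaps (invented helper, illustration only). */
    static void dump_whole_row(TABLE *table)
    {
      table->use_all_columns();          /* all columns marked in read/write set */
      /* ... read or print every field of the current row here ... */
      table->default_column_bitmaps();   /* reinstall def_read_set / def_write_set */
    }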
- table->mark_columns_used_by_index_no_reset() to, in addition to the other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns in table->read_set. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it just wants to call the Field::store() & Field::val() functions, but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write_set. The reason for this is that if we have a column marked only for write, we cannot know at the MySQL level whether its value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. (A sketch of the guard condition follows after this list.)
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we pass the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). This could potentially result in wrong values inserted in table handlers relying on the old column maps or set_query_id being correct. Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of the UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
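The compare_records() limitation above boils down to a simple guard condition. A hedged sketch of it (the helper name records_are_comparable is illustrative; HA_PARTIAL_COLUMN_READ, ha_table_flags() and the my_bitmap functions are the pieces mentioned in this changeset):

    /* Row images are only comparable when the engine reads full rows,
       or when every written column was also read. */
    static bool records_are_comparable(TABLE *table)
    {
      return !(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ) ||
             bitmap_is_subset(table->write_set, table->read_set);
    }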
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (rather than the used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed the column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up lines that were found to be too long.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") calls to use "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool. (A before/after example follows at the end of this message.)

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by the test case binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
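For the DBUG_PRINT cleanup mentioned in the list above, a hedged before/after example (the trace keyword and variable are illustrative, not taken from the patch):

    DBUG_PRINT("info", ("table: %lx",   (long) table));   /* old: no 0x prefix  */
    DBUG_PRINT("info", ("table: 0x%lx", (long) table));   /* new: diff-friendly */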
20 years ago
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that was found necessary while testing the handler changes Changes that requires code changes in other code of other storage engines. (Note that all changes are very straightforward and one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite), - New optional handler function introduced: reset() This is called after every DML statement to make it easy for a handler to statement specific cleanups. (The only case it's not called is if force the file to be closed) - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset() - table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only needs to read these columns - table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only needs to update these columns. The above bitmaps should now be up to date in all context (including ALTER TABLE, filesort()). The handler is informed of any changes to the bitmap after fix_fields() by calling the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set, table->write_set), it should redo the caching in this code. as the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row() / write_row() if the variable was set. - Removed the read_set and write_set bitmap objects from the handler class - Removed all column bit handling functions from the handler class. (Now one instead uses the normal bitmap functions in my_bitmap.c instead of handler dedicated bitmap functions) - field->query_id is removed. One should instead instead check table->read_set and table->write_set if a field is used in the query. - handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check for which columns to retrieve. - If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set); field->val(); dbug_tmp_restore_column_map(table->read_set, old_map); and similar for the write map: my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set); field->val(); dbug_tmp_restore_column_map(table->write_set, old_map); If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store() / val() functions. (For not DBUG binaries, the dbug_tmp_restore_column_map() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away be the compiler). - If one needs to temporary set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store() / Field::val() methods) one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants. - All 'status' fields in the handler base class (like records, data_file_length etc) are now stored in a 'stats' struct. This makes it easier to know what status variables are provided by the base handler. This requires some trivial variable names in the extra() function. 
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS()) is true. (stats.records is not supposed to be an exact value. It's only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path). - Non virtual handler::init() function added for caching of virtual constants from engine. - Removed has_transactions() virtual method. Now one should instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions. - The 'xxxx_create_handler()' function now has a MEM_ROOT_root argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change: static handler *myisam_create_handler(TABLE_SHARE *table) { return new ha_myisam(table); } -> static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root) { return new (mem_root) ha_myisam(table); } - New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. This allows the handler to take precisions in remembering any hidden primary key to able to update/delete any found row. The default handler marks all columns to be read. - handler::table_flags() now returns a ulonglong (to allow for more flags). - New/changed table_flags() - HA_HAS_RECORDS Set if ::records() is supported - HA_NO_TRANSACTIONS Set if engine doesn't support transactions - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE Set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read. - HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some cases (based on table->read_set) - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION. - HA_DUPP_POS Renamed to HA_DUPLICATE_POS - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE Set this if we should mark ALL key columns for read when when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which key part changed value. - HA_STATS_RECORDS_IS_EXACT Set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*)) - Removed table_flags() - HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if handler::records() gives an exact count() and HA_STATS_RECORDS_IS_EXACT if stats.records is exact. - HA_READ_RND_SAME Removed (no one supported this one) - Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk() - Renamed handler::dupp_pos to handler::dup_pos - Removed not used variable handler::sortkey Upper level handler changes: - ha_reset() now does some overall checks and calls ::reset() - ha_table_flags() added. This is a cached version of table_flags(). The cache is updated on engine creation time and updated on open. MySQL level changes (not obvious from the above): - DBUG_ASSERT() added to check that column usage matches what is set in the column usage bit maps. (This found a LOT of bugs in current column marking code). - In 5.1 before, all used columns was marked in read_set and only updated columns was marked in write_set. Now we only mark columns for which we need a value in read_set. 
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before this was in table.cc) - handler::table_flags() calls are replaced with handler::ha_table_flags() - For calling field->val() you must have the corresponding bit set in table->read_set. For calling field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage) - thd->set_query_id is renamed to thd->mark_used_columns and instead of setting this to an integer value, this has now the values: MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE Changed also all variables named 'set_query_id' to mark_used_columns. - In filesort() we now inform the handler of exactly which columns are needed doing the sort and choosing the rows. - The TABLE_SHARE object has a 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places) - The TABLE object has 3 column bitmaps: - def_read_set Default bitmap for columns to be read - def_write_set Default bitmap for columns to be written - tmp_set Can be used as a temporary bitmap when needed. The table object has also two pointer to bitmaps read_set and write_set that the handler should use to find out which columns are used in which way. - count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true). - Added extra argument to Item::walk() to indicate if we should also traverse sub queries. - Added TABLE parameter to cp_buffer_from_ref() - Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables). New interfaces: - table->clear_column_bitmaps() to initialize the bitmaps for tables at start of new statements. - table->column_bitmaps_set() to set up new column bitmaps and signal the handler about this. - table->column_bitmaps_set_no_signal() for some few cases where we need to setup new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the momement only in opt_range.cc when doing ROR scans. - table->use_all_columns() to install a bitmap where all columns are marked as use in the read and the write set. - table->default_column_bitmaps() to install the normal read and write column bitmaps, but not signaling the handler about this. This is mainly used when creating TABLE instances. - table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_delete() and table->mark_columns_needed_for_insert() to allow us to put additional columns in column usage maps if handler so requires. (The handler indicates what it neads in handler->table_flags()) - table->prepare_for_position() to allow us to tell handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function) - table->mark_auto_increment_column() to tell handler are going to update columns part of any auto_increment key. - table->mark_columns_used_by_index() to mark all columns that is part of an index. It will also send extra(HA_EXTRA_KEYREAD) to handler to allow it to quickly know that it only needs to read colums that are part of the key. (The handler can also use the column map for detecting this, but simpler/faster handler can just monitor the extra() call). 
- table->mark_columns_used_by_index_no_reset() to in addition to other columns, also mark all columns that is used by the given key. - table->restore_column_maps_after_mark_index() to restore to default column maps after a call to table->mark_columns_used_by_index(). - New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns - Maintain in TABLE->merge_keys set of all keys that are used in query. (Simplices some optimization loops) - Maintain Field->part_of_key_not_clustered which is like Field->part_of_key but the field in the clustered key is not assumed to be part of all index. (used in opt_range.cc for faster loops) - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map() tmp_use_all_columns() and tmp_restore_column_map() functions to temporally mark all columns as usable. The 'dbug_' version is primarily intended inside a handler when it wants to just call Field:store() & Field::val() functions, but don't need the column maps set for any other usage. (ie:: bitmap_is_set() is never called) - We can't use compare_records() to skip updates for handlers that returns a partial column set and the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write we can't in the MySQL level know if the value changed or not. The reason this worked before was that MySQL marked all to be written columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden bug'. - open_table_from_share() does not anymore setup temporary MEM_ROOT object as a thread specific variable for the handler. Instead we send the to-be-used MEMROOT to get_new_handler(). (Simpler, faster code) Bugs fixed: - Column marking was not done correctly in a lot of cases. (ALTER TABLE, when using triggers, auto_increment fields etc) (Could potentially result in wrong values inserted in table handlers relying on that the old column maps or field->set_query_id was correct) Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables. - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about: "Some non-transactional changed tables couldn't be rolled back" - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to loose some warnings about "Some non-transactional changed tables couldn't be rolled back") - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table() which could cause delete_table to report random failures. - Fixed core dumps for some tests when running with --debug - Added missing FN_LIBCHAR in mysql_rm_tmp_tables() (This has probably caused us to not properly remove temporary files after crash) - slow_logs was not properly initialized, which could maybe cause extra/lost entries in slow log. - If we get an duplicate row on insert, change column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row. 
- For table handlers that don't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL. - Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL. Cleanups: - Removed the unused condition argument to setup_tables. - Removed the no longer needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX). - Field->fieldnr is removed (use field->field_index instead). - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmaps, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.) - Broke up lines found to be too long. - Moved some variable declarations to the start of functions for better code readability. - Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()). - setup_fields() now takes an enum instead of an int for marking column usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enums and defines. - Using separate column read and write sets allows for easier checking of whether the timestamp field was set by the statement. - Removed calls to free_io_cache(), as this is now done automatically in ha_reset(). - Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames). - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparisons with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row-based logging (as shown by the test case binlog_row_mix_innodb_myisam.result). Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
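The read_set/write_set discipline described in the commit above can be illustrated with a small standalone model. This is only a sketch: ToyTable, MAX_COLS and the toy row representation are invented for illustration and are not the real TABLE, Field or MY_BITMAP types; the point is merely that every store()/val() call is checked against the active column bitmaps, in the spirit of the asserts the patch adds.

```cpp
// Standalone sketch (not the real TABLE/Field classes): models the idea that
// every store()/val() call must be covered by the table's write/read
// column bitmaps.
#include <bitset>
#include <cassert>
#include <cstdio>
#include <vector>

static const size_t MAX_COLS = 64;   // toy limit; the server uses MY_BITMAP

struct ToyTable {
  std::bitset<MAX_COLS> def_read_set, def_write_set, tmp_set; // default maps
  std::bitset<MAX_COLS> *read_set, *write_set;                // active maps

  ToyTable() : read_set(&def_read_set), write_set(&def_write_set) {}

  void clear_column_bitmaps() { def_read_set.reset(); def_write_set.reset(); }
  void use_all_columns()      { def_read_set.set();   def_write_set.set();   }

  // A column may only be read if its bit is set in *read_set ...
  long val_int(size_t field_no, const std::vector<long> &row) const {
    assert(read_set->test(field_no));
    return row[field_no];
  }
  // ... and only written if its bit is set in *write_set.
  void store(size_t field_no, std::vector<long> &row, long v) const {
    assert(write_set->test(field_no));
    row[field_no] = v;
  }
};

int main() {
  ToyTable t;
  std::vector<long> row(3, 0);
  t.clear_column_bitmaps();
  t.def_write_set.set(1);              // the statement only writes column 1
  t.store(1, row, 42);                 // allowed
  // t.val_int(0, row);                // would assert: column 0 not in read_set
  t.use_all_columns();                 // now every column may be read/written
  std::printf("col1 = %ld\n", t.val_int(1, row));
  return 0;
}
```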
Fix for: Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT with locked tables" Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers" Bug #24738 "CREATE TABLE ... SELECT is not isolated properly" Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when temporary table exists" A deadlock occurred when one tried to execute a CREATE TABLE IF NOT EXISTS ... SELECT statement under LOCK TABLES which held a read lock on the target table. An attempt to execute the same statement for an already existing target table with triggers caused server crashes. Also, concurrent execution of CREATE TABLE ... SELECT statements and other statements involving the target table suffered from various races (some of which might've led to deadlocks). Finally, an attempt to execute CREATE TABLE ... SELECT when a temporary table with the same name was already present led to the insertion of data into this temporary table and the creation of an empty non-temporary table. All the above problems stemmed from the old implementation of CREATE TABLE ... SELECT, in which we created, opened and locked the target table without any special protection, in a separate step and not with the rest of the tables used by this statement. This undermined the deadlock-avoidance approach used in the server and created a window for races. It also excluded the target table from prelocking, causing problems with trigger execution. The patch solves these problems by implementing a new approach to the handling of CREATE TABLE ... SELECT for base tables. We try to open and lock the table to be created at the same time as the rest of the tables used by this statement. If such a table does not exist at this moment, we create and place in the table cache a special placeholder for it which prevents its creation or any other usage by other threads. We still use the old approach for the creation of temporary tables. Note that we have a separate fix for 5.0, since there we use a slightly different, less intrusive approach.
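The "placeholder in the table cache" idea from the commit above can be sketched with a toy cache. All names here (ToyTableCache, reserve_placeholder, promote_to_real, drop_placeholder) are invented for illustration and do not correspond to the real server functions; the sketch only shows how inserting a placeholder under the cache lock makes the target name unavailable to concurrent threads until the statement either completes or is rolled back.

```cpp
// Standalone sketch of reserving a table name with a placeholder entry
// before CREATE TABLE ... SELECT materialises the target table.
#include <map>
#include <mutex>
#include <string>
#include <cstdio>

enum EntryKind { REAL_TABLE, PLACEHOLDER };

struct ToyTableCache {
  std::mutex lock;
  std::map<std::string, EntryKind> entries;

  // Returns false if the name is already taken (real table or placeholder).
  bool reserve_placeholder(const std::string &name) {
    std::lock_guard<std::mutex> g(lock);
    return entries.emplace(name, PLACEHOLDER).second;
  }
  // Called once the SELECT part succeeded and the table really exists.
  void promote_to_real(const std::string &name) {
    std::lock_guard<std::mutex> g(lock);
    entries[name] = REAL_TABLE;
  }
  // Called on error to undo the reservation.
  void drop_placeholder(const std::string &name) {
    std::lock_guard<std::mutex> g(lock);
    entries.erase(name);
  }
};

int main() {
  ToyTableCache cache;
  if (cache.reserve_placeholder("t1")) {
    // ... run the SELECT part and fill the new table ...
    cache.promote_to_real("t1");
  }
  std::printf("second reservation of t1 succeeds? %d\n",
              cache.reserve_placeholder("t1"));   // prints 0: name is taken
  return 0;
}
```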
WL#3817: Simplify string / memory area types and make things more consistent (first part) The following type conversions were done: - Changed byte to uchar - Changed gptr to uchar* - Changed my_string to char * - Changed my_size_t to size_t - Changed size_s to size_t Removed the declarations of byte, gptr, my_string, my_size_t and size_s. The following function parameter changes were done: - All string functions in mysys/strings were changed to use size_t instead of uint for string lengths. - All read()/write() functions changed to use size_t (including vio). - All protocol functions changed to use size_t instead of uint. - Functions that used a pointer to a string length were changed to use size_t*. - Changed malloc(), free() and related functions from using gptr to use void *, as this requires fewer casts in the code and is more in line with how the standard functions work. - Added an extra length argument to dirname_part() to return the length of the created string. - Changed (at least) the following functions to take uchar* as argument: - db_dump() - my_net_write() - net_write_command() - net_store_data() - DBUG_DUMP() - decimal2bin() & bin2decimal() - Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value, as we only return one value (makes the function easier to use). - Changed the type of the 'pack_data' argument to packfrm() to avoid casts. - In readfrm(), writefrm(), ha_discover() and handler::discover(), changed the type of the 'frmdata' argument to uchar** to avoid casts. - Changed most Field functions to use uchar* instead of char* (reduced a lot of casts). - Changed field->val_xxx(xxx, new_ptr) to take const pointers. Other changes: - Removed a lot of unneeded casts. - Added a few new casts required by other changes. - Added some casts to my_multi_malloc() arguments for safety (as string lengths need to be uint, not size_t). - Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done explicitly, as this conflict was often hidden by casting the function to hash_get_key.) - Changed some buffers/memory regions to uchar* to avoid casts. - Changed some string lengths from uint to size_t. - Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts. - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar. - Include zlib.h in some files, as we needed the declaration of crc32(). - Changed MY_FILE_ERROR to be (size_t) -1. - Changed many variables holding the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1); see the sketch after this commit message. - Removed some very old VMS code. - Changed packfrm()/unpackfrm() to not depend on uint size (portability fix). - Removed Windows-specific code to restore the cursor position, as this causes a slowdown on Windows, and we should not mix read() and pread() calls anyway, as this is not thread safe. Updated the function comment to reflect this. Changed the one function that depended on the original behavior of my_pwrite() to restore the cursor position itself. - Added some missing checks of the return value of malloc(). - Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow. - Changed the type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length. - Moved THD::max_row_length() to table.cc (as it does not depend on THD). Inlined max_row_length_blob() into this function.
- More function comments. - Fixed some compiler warnings when compiled without partitions. - Removed setting of LEX_STRING() arguments in declarations (portability fix). - Some trivial indentation/variable name changes. - Some trivial code simplifications: - Replaced some calls to alloc_root + memcpy with strmake_root()/strdup_root(). - Changed some calls from memdup() to strmake() (safety fix). - Simpler loops in client-simple.c.
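One recurring item above is that variables holding my_read()/my_write() results had to become size_t because errors are returned as (size_t) -1. Below is a minimal standalone sketch of why the variable's width matters; toy_read() and TOY_FILE_ERROR are invented stand-ins for my_read() and MY_FILE_ERROR.

```cpp
// Standalone sketch of the (size_t) -1 error convention: the result of a
// read must be stored in a size_t, otherwise the error value can be
// silently truncated and the check misses it.
#include <cstddef>
#include <cstdio>

static const size_t TOY_FILE_ERROR = (size_t) -1;

// Pretend read that always fails.
static size_t toy_read(char *, size_t) { return TOY_FILE_ERROR; }

int main() {
  char buf[16];

  size_t n = toy_read(buf, sizeof(buf));   // correct: keeps the full width
  if (n == TOY_FILE_ERROR)
    std::printf("error detected\n");

  unsigned int m = (unsigned int) toy_read(buf, sizeof(buf)); // wrong width
  // On LP64 platforms m is 0xFFFFFFFF, which no longer compares equal to
  // TOY_FILE_ERROR once widened back to size_t, so the error goes unnoticed.
  if ((size_t) m != TOY_FILE_ERROR)
    std::printf("error missed when the result is stored in uint\n");
  return 0;
}
```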
Patch for the following bugs: - BUG#11986: Stored routines and triggers can fail if the code has a non-ascii symbol - BUG#16291: mysqldump corrupts string-constants with non-ascii-chars - BUG#19443: INFORMATION_SCHEMA does not support charsets properly - BUG#21249: Character set of SP-var can be ignored - BUG#25212: Character set of string constant is ignored (stored routines) - BUG#25221: Character set of string constant is ignored (triggers) There were a few general problems that caused these bugs: 1. Character set information of the original (definition) query for views, triggers, stored routines and events was lost. 2. mysqldump output queries in the client character set, which can be inappropriate for encoding the definition query. 3. INFORMATION_SCHEMA used strings with mixed encodings to display object definitions. 1. No definition-query character set. In order to compile a query into execution code, some extra data (such as environment variables or the database character set) is used. The problem here was that this context was not preserved, so on the next load it could differ from the original one, and thus the result would be different. The context contains the following data: - client character set; - connection collation (character set and collation); - collation of the owner database. The fix is to store this context and use it each time we parse (compile) and execute the object (stored routine, trigger, ...). 2. Wrong mysqldump output. The original query can contain several encodings (by means of character set introducers). The problem here was that we tried to convert the original query to the mysqldump client character set. Moreover, we stored queries in different character sets for different objects (views, for one, used UTF8; triggers used the original character set). The solution is: - to store definition queries in the original character set; - to change SHOW CREATE statements to output the definition query in the binary character set (i.e. without any conversion); - to introduce a SHOW CREATE TRIGGER statement; - to dump special statements that switch the context to the original one before dumping and restore it afterwards. Note that in order to preserve the database collation at creation time, an additional ALTER DATABASE might be used (to temporarily switch the database collation back to the original value). In this case, the ALTER DATABASE privilege will be required. This is a backward-incompatible change. 3. INFORMATION_SCHEMA showed non-UTF8 strings. The fix is to generate a UTF8 query during parsing, store it in the object and show it in INFORMATION_SCHEMA. Basically, the idea is to create a copy of the original query and convert it to UTF8. Character set introducers are removed and all text literals are converted to UTF8. This UTF8 query is intended to provide user-readable output. It must not be used to recreate the object; the specialized SHOW CREATE statements should be used for that. The reason for this limitation is the following: the original query can contain symbols from several character sets (by means of character set introducers). Example: - original query: CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1; - UTF8 query (for INFORMATION_SCHEMA): CREATE VIEW v1 AS SELECT 'Hello' AS c1;
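A minimal standalone model of the "creation context" idea described above: the character-set environment in force when the object was defined is stored with the object and temporarily reinstated whenever the definition query is re-parsed. CreationContext, StoredObject and Session are invented types for illustration only; the actual patch stores this context with views, triggers, routines and events.

```cpp
// Standalone sketch: save the creation-time character-set context with the
// object and reinstate it around every re-parse of the definition query.
#include <string>
#include <cstdio>

struct CreationContext {
  std::string client_cs;        // character_set_client at creation time
  std::string connection_coll;  // collation_connection at creation time
  std::string db_coll;          // collation of the owner database
};

struct StoredObject {
  std::string definition_query; // kept in its *original* character set
  CreationContext ctx;          // saved environment
};

struct Session {
  CreationContext current;

  void parse_definition(const StoredObject &obj) {
    CreationContext saved = current;  // remember the caller's environment
    current = obj.ctx;                // reinstate the creation-time context
    std::printf("parsing \"%s\" with client cs %s\n",
                obj.definition_query.c_str(), current.client_cs.c_str());
    current = saved;                  // restore the caller's environment
  }
};

int main() {
  StoredObject trg = { "CREATE TRIGGER ...",
                       { "cp1251", "cp1251_general_ci", "utf8_general_ci" } };
  Session s;
  s.current = { "latin1", "latin1_swedish_ci", "latin1_swedish_ci" };
  s.parse_definition(trg);   // runs under cp1251, then restores latin1
  return 0;
}
```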
A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes). The legend: on a replication slave, in case a trigger creation was filtered out because of application of replicate-do-table/ replicate-ignore-table rule, the parsed definition of a trigger was not cleaned up properly. LEX::sphead member was left around and leaked memory. Until the actual implementation of support of replicate-ignore-table rules for triggers by the patch for Bug 24478 it was never the case that "case SQLCOM_CREATE_TRIGGER" was not executed once a trigger was parsed, so the deletion of lex->sphead there worked and the memory did not leak. The fix: The real cause of the bug is that there is no 1 or 2 places where we can clean up the main LEX after parse. And the reason we can not have just one or two places where we clean up the LEX is asymmetric behaviour of MYSQLparse in case of success or error. One of the root causes of this behaviour is the code in Item::Item() constructor. There, a newly created item adds itself to THD::free_list - a single-linked list of Items used in a statement. Yuck. This code is unaware that we may have more than one statement active at a time, and always assumes that the free_list of the current statement is located in THD::free_list. One day we need to be able to explicitly allocate an item in a given Query_arena. Thus, when parsing a definition of a stored procedure, like CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END; we actually need to reset THD::mem_root, THD::free_list and THD::lex to parse the nested procedure statement (SELECT *). The actual reset and restore is implemented in semantic actions attached to sp_proc_stmt grammar rule. The problem is that in case of a parsing error inside a nested statement Bison generated parser would abort immediately, without executing the restore part of the semantic action. This would leave THD in an in-the-middle-of-parsing state. This is why we couldn't have had a single place where we clean up the LEX after MYSQLparse - in case of an error we needed to do a clean up immediately, in case of success a clean up could have been delayed. This left the door open for a memory leak. One of the following possibilities were considered when working on a fix: - patch the replication logic to do the clean up. Rejected as breaks module borders, replication code should not need to know the gory details of clean up procedure after CREATE TRIGGER. - wrap MYSQLparse with a function that would do a clean up. Rejected as ideally we should fix the problem when it happens, not adjust for it outside of the problematic code. - make sure MYSQLparse cleans up after itself by invoking the clean up functionality in the appropriate places before return. Implemented in this patch. - use %destructor rule for sp_proc_stmt to restore THD - cleaner than the prevoius approach, but rejected because needs a careful analysis of the side effects, and this patch is for 5.0, and long term we need to use the next alternative anyway - make sure that sp_proc_stmt doesn't juggle with THD - this is a large work that will affect many modules. Cleanup: move main_lex and main_mem_root from Statement to its only two descendants Prepared_statement and THD. This ensures that when a Statement instance was created for purposes of statement backup, we do not involve LEX constructor/destructor, which is fairly expensive. 
In order to verify that the transformation produces equivalent functionality, please check the respective constructors and destructors of Statement, Prepared_statement and THD - these members were used only there. This cleanup is otherwise unrelated to the patch.
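The commit above explains that every new Item links itself onto the current statement's free_list, which is why parsing a nested statement of a stored procedure body has to save and restore the list head, and why an early parser abort used to leak. The following is a simplified standalone model of that save/restore pattern; ToyItem, ToyThd and parse_nested_statement are invented stand-ins, not the real Item/THD classes or the grammar actions.

```cpp
// Standalone model of the free_list problem: each new item pushes itself
// onto a per-thread list head, so nested parsing must save and restore it.
#include <cstdio>

struct ToyItem {
  ToyItem *next;
  explicit ToyItem(ToyItem *&free_list) : next(free_list) { free_list = this; }
};

struct ToyThd {
  ToyItem *free_list = nullptr;
};

static void free_items(ToyItem *head) {
  while (head) { ToyItem *n = head->next; delete head; head = n; }
}

// Parse one nested statement of a stored procedure body.
static void parse_nested_statement(ToyThd &thd) {
  ToyItem *saved = thd.free_list;   // save the outer statement's list head
  thd.free_list = nullptr;          // the nested statement gets its own list

  new ToyItem(thd.free_list);       // items created while parsing the body

  // Success or error, the nested list must be disposed of and the outer head
  // restored before returning.  If an error path skips this (as an aborting
  // parser once did), the outer head is gone and the items leak.
  free_items(thd.free_list);        // toy stand-in for handing the items over
  thd.free_list = saved;            // restore the outer statement's list head
}

int main() {
  ToyThd thd;
  new ToyItem(thd.free_list);       // an item of the outer statement
  parse_nested_statement(thd);      // nested parse saves and restores the list
  free_items(thd.free_list);        // outer items are freed exactly once
  std::printf("free_list restored and emptied\n");
  return 0;
}
```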
Fixed compiler warnings. Fixed compile-pentium64 scripts. Fixed the wrong estimate of update_with_key_prefix in sql-bench. Merge bk-internal.mysql.com:/home/bk/mysql-5.1 into mysql.com:/home/my/mysql-5.1 Fixed an unsafe define of uint4korr(). Fixed so that --extern works with mysql-test-run.pl. Small trivial cleanups. This also fixes a bug in counting the number of rows that are updated when we have many simultaneous queries. Moved all connection handling and the command execution main loop from sql_parse.cc to sql_connection.cc. Split handle_one_connection() into reusable sub-functions. Split create_new_thread() into reusable sub-functions. Added thread_scheduler; preliminary interface code for future thread_handling code. Use 'my_thread_id' for internal thread ids. Made thr_alarm_kill() depend on thread_id instead of thread. Made thr_abort_locks_for_thread() depend on thread_id instead of thread. In store_globals(), set my_thread_var->id to be thd->thread_id. Use my_thread_var->id as the basis for my_thread_name(). The above changes make the coupling between THD and threads looser. Added a lot of DBUG_PRINT() and DBUG_ASSERT() calls. Fixed compiler warnings. Fixed core dumps when running with --debug. Removed setting of signal masks (was never used). Made the event code call pthread_exit() (portability fix). Fixed that the event code doesn't call DBUG_xxx functions before my_thread_init() is called. Made handling of thread_id and thd->variables.pseudo_thread_id uniform. Removed one common 'not freed memory' warning from mysqltest. Fixed a couple of uninitialized-variable warnings (unlikely cases). Suppressed compiler warnings from bdb and (for the moment) warnings from ndb.
Prevent bugs by making DBUG_* expressions syntactically equivalent to a single statement. --- Bug#24795: SHOW PROFILE Profiling is only partially functional on some architectures. Where there is no getrusage() system call, NULL values are presently returned where data would be required. Notably, Windows needs some love applied to make it as useful. Syntax this adds: SHOW PROFILES SHOW PROFILE [types] [FOR QUERY n] [OFFSET n] [LIMIT n] where "n" is an integer and "types" is zero or more (comma-separated) of "CPU", "MEMORY" (not presently supported), "BLOCK IO", "CONTEXT SWITCHES", "PAGE FAULTS", "IPC", "SWAPS", "SOURCE", "ALL". It also adds a session variable (boolean) "profiling", set to "no" by default, and (integer) profiling_history_size, set to 15 by default. This patch abstracts setting THDs' "proc_info" behind a macro that can be used as a hook into the profiling code when profiling support is compiled in. All future code in this line should use that mechanism for setting thd->proc_info. --- Tests are now set to omit the statistics. --- Adds an INFORMATION_SCHEMA table, "profiling", for access to "show profile" data. --- Merge zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.0-community-3--bug24795 into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.0-community --- Fix merge problems. --- Fixed one bug with query_source being NULL. Updated test results. --- Include more thorough profiling tests. Improve support for prepared statements. Use session-specific query IDs, starting at zero. --- Selecting from I_S.profiling is no longer quashed in profiling, as requested by Giuseppe. Limit the size of captured query text. No longer log queries that are zero length.
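Where getrusage() is available, the per-step statistics that SHOW PROFILE exposes (CPU time, page faults, context switches, block I/O) can be sampled around a step roughly as in the standalone sketch below. This is not the server's profiling code, just an illustration of the underlying system call; on platforms without getrusage() (e.g. Windows), these columns stay NULL, as the commit notes.

```cpp
// Standalone sketch: sample getrusage() before and after a step and report
// the deltas, the raw material behind SHOW PROFILE's CPU/fault/switch columns.
#include <sys/time.h>
#include <sys/resource.h>
#include <cstdio>

static double tv_to_sec(const timeval &tv) {
  return tv.tv_sec + tv.tv_usec / 1e6;
}

int main() {
  rusage before, after;
  getrusage(RUSAGE_SELF, &before);

  // ... the profiled step would run here ...
  volatile double x = 0;
  for (int i = 0; i < 1000000; i++) x += i;

  getrusage(RUSAGE_SELF, &after);
  std::printf("user cpu:   %.6f s\n",
              tv_to_sec(after.ru_utime) - tv_to_sec(before.ru_utime));
  std::printf("system cpu: %.6f s\n",
              tv_to_sec(after.ru_stime) - tv_to_sec(before.ru_stime));
  std::printf("page faults: %ld minor, %ld major\n",
              after.ru_minflt - before.ru_minflt,
              after.ru_majflt - before.ru_majflt);
  std::printf("context switches: %ld voluntary, %ld involuntary\n",
              after.ru_nvcsw - before.ru_nvcsw,
              after.ru_nivcsw - before.ru_nivcsw);
  return 0;
}
```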
19 years ago
23 years ago
WL#3817: Simplify string / memory area types and make things more consistent (first part) The following type conversions was done: - Changed byte to uchar - Changed gptr to uchar* - Change my_string to char * - Change my_size_t to size_t - Change size_s to size_t Removed declaration of byte, gptr, my_string, my_size_t and size_s. Following function parameter changes was done: - All string functions in mysys/strings was changed to use size_t instead of uint for string lengths. - All read()/write() functions changed to use size_t (including vio). - All protocoll functions changed to use size_t instead of uint - Functions that used a pointer to a string length was changed to use size_t* - Changed malloc(), free() and related functions from using gptr to use void * as this requires fewer casts in the code and is more in line with how the standard functions work. - Added extra length argument to dirname_part() to return the length of the created string. - Changed (at least) following functions to take uchar* as argument: - db_dump() - my_net_write() - net_write_command() - net_store_data() - DBUG_DUMP() - decimal2bin() & bin2decimal() - Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value as we only return one value (makes function easier to use). - Changed type of 'pack_data' argument to packfrm() to avoid casts. - Changed in readfrm() and writefrom(), ha_discover and handler::discover() the type for argument 'frmdata' to uchar** to avoid casts. - Changed most Field functions to use uchar* instead of char* (reduced a lot of casts). - Changed field->val_xxx(xxx, new_ptr) to take const pointers. Other changes: - Removed a lot of not needed casts - Added a few new cast required by other changes - Added some cast to my_multi_malloc() arguments for safety (as string lengths needs to be uint, not size_t). - Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done explicitely as this conflict was often hided by casting the function to hash_get_key). - Changed some buffers to memory regions to uchar* to avoid casts. - Changed some string lengths from uint to size_t. - Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts. - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar - Include zlib.h in some files as we needed declaration of crc32() - Changed MY_FILE_ERROR to be (size_t) -1. - Changed many variables to hold the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1). - Removed some very old VMS code - Changed packfrm()/unpackfrm() to not be depending on uint size (portability fix) - Removed windows specific code to restore cursor position as this causes slowdown on windows and we should not mix read() and pread() calls anyway as this is not thread safe. Updated function comment to reflect this. Changed function that depended on original behavior of my_pwrite() to itself restore the cursor position (one such case). - Added some missing checking of return value of malloc(). - Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow. - Changed type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length. - Moved THD::max_row_length() to table.cc (as it's not depending on THD). Inlined max_row_length_blob() into this function. 
- More function comments - Fixed some compiler warnings when compiled without partitions. - Removed setting of LEX_STRING() arguments in declaration (portability fix). - Some trivial indentation/variable name changes. - Some trivial code simplifications: - Replaced some calls to alloc_root + memcpy to use strmake_root()/strdup_root(). - Changed some calls from memdup() to strmake() (Safety fix) - Simpler loops in client-simple.c
19 years ago
WL#3817: Simplify string / memory area types and make things more consistent (first part) The following type conversions was done: - Changed byte to uchar - Changed gptr to uchar* - Change my_string to char * - Change my_size_t to size_t - Change size_s to size_t Removed declaration of byte, gptr, my_string, my_size_t and size_s. Following function parameter changes was done: - All string functions in mysys/strings was changed to use size_t instead of uint for string lengths. - All read()/write() functions changed to use size_t (including vio). - All protocoll functions changed to use size_t instead of uint - Functions that used a pointer to a string length was changed to use size_t* - Changed malloc(), free() and related functions from using gptr to use void * as this requires fewer casts in the code and is more in line with how the standard functions work. - Added extra length argument to dirname_part() to return the length of the created string. - Changed (at least) following functions to take uchar* as argument: - db_dump() - my_net_write() - net_write_command() - net_store_data() - DBUG_DUMP() - decimal2bin() & bin2decimal() - Changed my_compress() and my_uncompress() to use size_t. Changed one argument to my_uncompress() from a pointer to a value as we only return one value (makes function easier to use). - Changed type of 'pack_data' argument to packfrm() to avoid casts. - Changed in readfrm() and writefrom(), ha_discover and handler::discover() the type for argument 'frmdata' to uchar** to avoid casts. - Changed most Field functions to use uchar* instead of char* (reduced a lot of casts). - Changed field->val_xxx(xxx, new_ptr) to take const pointers. Other changes: - Removed a lot of not needed casts - Added a few new cast required by other changes - Added some cast to my_multi_malloc() arguments for safety (as string lengths needs to be uint, not size_t). - Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done explicitely as this conflict was often hided by casting the function to hash_get_key). - Changed some buffers to memory regions to uchar* to avoid casts. - Changed some string lengths from uint to size_t. - Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts. - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar - Include zlib.h in some files as we needed declaration of crc32() - Changed MY_FILE_ERROR to be (size_t) -1. - Changed many variables to hold the result of my_read() / my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1). - Removed some very old VMS code - Changed packfrm()/unpackfrm() to not be depending on uint size (portability fix) - Removed windows specific code to restore cursor position as this causes slowdown on windows and we should not mix read() and pread() calls anyway as this is not thread safe. Updated function comment to reflect this. Changed function that depended on original behavior of my_pwrite() to itself restore the cursor position (one such case). - Added some missing checking of return value of malloc(). - Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow. - Changed type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length. - Moved THD::max_row_length() to table.cc (as it's not depending on THD). Inlined max_row_length_blob() into this function. 
- More function comments - Fixed some compiler warnings when compiled without partitions. - Removed setting of LEX_STRING() arguments in declaration (portability fix). - Some trivial indentation/variable name changes. - Some trivial code simplifications: - Replaced some calls to alloc_root + memcpy to use strmake_root()/strdup_root(). - Changed some calls from memdup() to strmake() (Safety fix) - Simpler loops in client-simple.c
19 years ago
22 years ago
22 years ago
22 years ago
22 years ago
22 years ago
22 years ago
22 years ago
23 years ago
19 years ago
A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes). The legend: on a replication slave, in case a trigger creation was filtered out because of application of replicate-do-table/ replicate-ignore-table rule, the parsed definition of a trigger was not cleaned up properly. LEX::sphead member was left around and leaked memory. Until the actual implementation of support of replicate-ignore-table rules for triggers by the patch for Bug 24478 it was never the case that "case SQLCOM_CREATE_TRIGGER" was not executed once a trigger was parsed, so the deletion of lex->sphead there worked and the memory did not leak. The fix: The real cause of the bug is that there is no 1 or 2 places where we can clean up the main LEX after parse. And the reason we can not have just one or two places where we clean up the LEX is asymmetric behaviour of MYSQLparse in case of success or error. One of the root causes of this behaviour is the code in Item::Item() constructor. There, a newly created item adds itself to THD::free_list - a single-linked list of Items used in a statement. Yuck. This code is unaware that we may have more than one statement active at a time, and always assumes that the free_list of the current statement is located in THD::free_list. One day we need to be able to explicitly allocate an item in a given Query_arena. Thus, when parsing a definition of a stored procedure, like CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END; we actually need to reset THD::mem_root, THD::free_list and THD::lex to parse the nested procedure statement (SELECT *). The actual reset and restore is implemented in semantic actions attached to sp_proc_stmt grammar rule. The problem is that in case of a parsing error inside a nested statement Bison generated parser would abort immediately, without executing the restore part of the semantic action. This would leave THD in an in-the-middle-of-parsing state. This is why we couldn't have had a single place where we clean up the LEX after MYSQLparse - in case of an error we needed to do a clean up immediately, in case of success a clean up could have been delayed. This left the door open for a memory leak. One of the following possibilities were considered when working on a fix: - patch the replication logic to do the clean up. Rejected as breaks module borders, replication code should not need to know the gory details of clean up procedure after CREATE TRIGGER. - wrap MYSQLparse with a function that would do a clean up. Rejected as ideally we should fix the problem when it happens, not adjust for it outside of the problematic code. - make sure MYSQLparse cleans up after itself by invoking the clean up functionality in the appropriate places before return. Implemented in this patch. - use %destructor rule for sp_proc_stmt to restore THD - cleaner than the prevoius approach, but rejected because needs a careful analysis of the side effects, and this patch is for 5.0, and long term we need to use the next alternative anyway - make sure that sp_proc_stmt doesn't juggle with THD - this is a large work that will affect many modules. Cleanup: move main_lex and main_mem_root from Statement to its only two descendants Prepared_statement and THD. This ensures that when a Statement instance was created for purposes of statement backup, we do not involve LEX constructor/destructor, which is fairly expensive. 
In order to track that the transformation produces equivalent functionality please check the respective constructors and destructors of Statement, Prepared_statement and THD - these members were used only there. This cleanup is unrelated to the patch.
19 years ago
A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes). The legend: on a replication slave, in case a trigger creation was filtered out because of application of replicate-do-table/ replicate-ignore-table rule, the parsed definition of a trigger was not cleaned up properly. LEX::sphead member was left around and leaked memory. Until the actual implementation of support of replicate-ignore-table rules for triggers by the patch for Bug 24478 it was never the case that "case SQLCOM_CREATE_TRIGGER" was not executed once a trigger was parsed, so the deletion of lex->sphead there worked and the memory did not leak. The fix: The real cause of the bug is that there is no 1 or 2 places where we can clean up the main LEX after parse. And the reason we can not have just one or two places where we clean up the LEX is asymmetric behaviour of MYSQLparse in case of success or error. One of the root causes of this behaviour is the code in Item::Item() constructor. There, a newly created item adds itself to THD::free_list - a single-linked list of Items used in a statement. Yuck. This code is unaware that we may have more than one statement active at a time, and always assumes that the free_list of the current statement is located in THD::free_list. One day we need to be able to explicitly allocate an item in a given Query_arena. Thus, when parsing a definition of a stored procedure, like CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END; we actually need to reset THD::mem_root, THD::free_list and THD::lex to parse the nested procedure statement (SELECT *). The actual reset and restore is implemented in semantic actions attached to sp_proc_stmt grammar rule. The problem is that in case of a parsing error inside a nested statement Bison generated parser would abort immediately, without executing the restore part of the semantic action. This would leave THD in an in-the-middle-of-parsing state. This is why we couldn't have had a single place where we clean up the LEX after MYSQLparse - in case of an error we needed to do a clean up immediately, in case of success a clean up could have been delayed. This left the door open for a memory leak. One of the following possibilities were considered when working on a fix: - patch the replication logic to do the clean up. Rejected as breaks module borders, replication code should not need to know the gory details of clean up procedure after CREATE TRIGGER. - wrap MYSQLparse with a function that would do a clean up. Rejected as ideally we should fix the problem when it happens, not adjust for it outside of the problematic code. - make sure MYSQLparse cleans up after itself by invoking the clean up functionality in the appropriate places before return. Implemented in this patch. - use %destructor rule for sp_proc_stmt to restore THD - cleaner than the prevoius approach, but rejected because needs a careful analysis of the side effects, and this patch is for 5.0, and long term we need to use the next alternative anyway - make sure that sp_proc_stmt doesn't juggle with THD - this is a large work that will affect many modules. Cleanup: move main_lex and main_mem_root from Statement to its only two descendants Prepared_statement and THD. This ensures that when a Statement instance was created for purposes of statement backup, we do not involve LEX constructor/destructor, which is fairly expensive. 
In order to track that the transformation produces equivalent functionality please check the respective constructors and destructors of Statement, Prepared_statement and THD - these members were used only there. This cleanup is unrelated to the patch.
19 years ago
A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes). The legend: on a replication slave, in case a trigger creation was filtered out because of application of replicate-do-table/ replicate-ignore-table rule, the parsed definition of a trigger was not cleaned up properly. LEX::sphead member was left around and leaked memory. Until the actual implementation of support of replicate-ignore-table rules for triggers by the patch for Bug 24478 it was never the case that "case SQLCOM_CREATE_TRIGGER" was not executed once a trigger was parsed, so the deletion of lex->sphead there worked and the memory did not leak. The fix: The real cause of the bug is that there is no 1 or 2 places where we can clean up the main LEX after parse. And the reason we can not have just one or two places where we clean up the LEX is asymmetric behaviour of MYSQLparse in case of success or error. One of the root causes of this behaviour is the code in Item::Item() constructor. There, a newly created item adds itself to THD::free_list - a single-linked list of Items used in a statement. Yuck. This code is unaware that we may have more than one statement active at a time, and always assumes that the free_list of the current statement is located in THD::free_list. One day we need to be able to explicitly allocate an item in a given Query_arena. Thus, when parsing a definition of a stored procedure, like CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END; we actually need to reset THD::mem_root, THD::free_list and THD::lex to parse the nested procedure statement (SELECT *). The actual reset and restore is implemented in semantic actions attached to sp_proc_stmt grammar rule. The problem is that in case of a parsing error inside a nested statement Bison generated parser would abort immediately, without executing the restore part of the semantic action. This would leave THD in an in-the-middle-of-parsing state. This is why we couldn't have had a single place where we clean up the LEX after MYSQLparse - in case of an error we needed to do a clean up immediately, in case of success a clean up could have been delayed. This left the door open for a memory leak. One of the following possibilities were considered when working on a fix: - patch the replication logic to do the clean up. Rejected as breaks module borders, replication code should not need to know the gory details of clean up procedure after CREATE TRIGGER. - wrap MYSQLparse with a function that would do a clean up. Rejected as ideally we should fix the problem when it happens, not adjust for it outside of the problematic code. - make sure MYSQLparse cleans up after itself by invoking the clean up functionality in the appropriate places before return. Implemented in this patch. - use %destructor rule for sp_proc_stmt to restore THD - cleaner than the prevoius approach, but rejected because needs a careful analysis of the side effects, and this patch is for 5.0, and long term we need to use the next alternative anyway - make sure that sp_proc_stmt doesn't juggle with THD - this is a large work that will affect many modules. Cleanup: move main_lex and main_mem_root from Statement to its only two descendants Prepared_statement and THD. This ensures that when a Statement instance was created for purposes of statement backup, we do not involve LEX constructor/destructor, which is fairly expensive. 
  1. /* Copyright (C) 1995-2002 MySQL AB
  2. This program is free software; you can redistribute it and/or modify
  3. it under the terms of the GNU General Public License as published by
  4. the Free Software Foundation; version 2 of the License.
  5. This program is distributed in the hope that it will be useful,
  6. but WITHOUT ANY WARRANTY; without even the implied warranty of
  7. MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  8. GNU General Public License for more details.
  9. You should have received a copy of the GNU General Public License
  10. along with this program; if not, write to the Free Software
  11. Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */
  12. /**
  13. @file
  14. This file contains the implementation of prepared statements.
  15. When one prepares a statement:
  16. - Server gets the query from client with command 'COM_STMT_PREPARE';
  17. in the following format:
  18. [COM_STMT_PREPARE:1] [query]
  19. - Parse the query and recognize any parameter markers '?' and
  20. store them in lex->param_list
  21. - Allocate a new statement for this prepare; and keep this in
  22. 'thd->stmt_map'.
  23. - Without executing the query, return to the client the total
  24. number of parameters along with result-set metadata information
  25. (if any) in the following format:
  26. @verbatim
  27. [STMT_ID:4]
  28. [Column_count:2]
  29. [Param_count:2]
  30. [Params meta info (stubs only for now)] (if Param_count > 0)
  31. [Columns meta info] (if Column_count > 0)
  32. @endverbatim
  33. During prepare the tables used in a statement are opened, but no
  34. locks are acquired. Table opening will block any DDL during the
  35. operation, and we do not need any locks as we neither read nor
  36. modify any data during prepare. Tables are closed after prepare
  37. finishes.
  38. When one executes a statement:
  39. - Server gets the command 'COM_STMT_EXECUTE' to execute the
  40. previously prepared query. If there are any parameter markers, then the
  41. client will send the data in the following format:
  42. @verbatim
  43. [COM_STMT_EXECUTE:1]
  44. [STMT_ID:4]
  45. [NULL_BITS:(param_count+7)/8)]
  46. [TYPES_SUPPLIED_BY_CLIENT(0/1):1]
  47. [[length]data]
  48. [[length]data] .. [[length]data].
  49. @endverbatim
  50. (Note: except for string/binary types, other types are not
  51. supplied with a length field)
  52. - If it is a first execute or types of parameters were altered by client,
  53. then setup the conversion routines.
  54. - Assign parameter items from the supplied data.
  55. - Execute the query without re-parsing and send back the results
  56. to client
  57. During execution of prepared statement tables are opened and locked
  58. the same way they would for normal (non-prepared) statement
  59. execution. Tables are unlocked and closed after the execution.
  60. When one supplies long data for a placeholder:
  61. - Server gets the long data in pieces with command type
  62. 'COM_STMT_SEND_LONG_DATA'.
  63. - The packet received has the following format:
  64. [COM_STMT_SEND_LONG_DATA:1][STMT_ID:4][parameter_number:2][data]
  65. - data from the packet is appended to the long data value buffer for this
  66. placeholder.
  67. - It's up to the client to stop supplying data chunks at any point. The
  68. server doesn't care; it also doesn't notify the client whether
  69. it got the data or not; any error will be reported only
  70. at statement execute.
  71. */
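/*
  A minimal client-side sketch of the prepare/execute flow described above,
  using the libmysqlclient C API (illustrative only; assumes an already
  connected MYSQL handle 'mysql' and a table t1 with an INT column 'a';
  error handling omitted):

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    const char *query= "SELECT a FROM t1 WHERE a > ?";
    mysql_stmt_prepare(stmt, query, strlen(query));    // sends COM_STMT_PREPARE

    int limit= 10;
    MYSQL_BIND param;
    memset(&param, 0, sizeof(param));
    param.buffer_type= MYSQL_TYPE_LONG;                // 4-byte integer parameter
    param.buffer= (char *) &limit;
    mysql_stmt_bind_param(stmt, &param);

    mysql_stmt_execute(stmt);                          // sends COM_STMT_EXECUTE
    mysql_stmt_close(stmt);                            // sends COM_STMT_CLOSE
*/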
  72. #include "mysql_priv.h"
  73. #include "sql_select.h" // for JOIN
  74. #include "sql_cursor.h"
  75. #include "sp_head.h"
  76. #include "sp.h"
  77. #include "sp_cache.h"
  78. #include "probes_mysql.h"
  79. #ifdef EMBEDDED_LIBRARY
  80. /* include MYSQL_BIND headers */
  81. #include <mysql.h>
  82. #else
  83. #include <mysql_com.h>
  84. #endif
  85. /**
  86. A result class used to send cursor rows using the binary protocol.
  87. */
  88. class Select_fetch_protocol_binary: public select_send
  89. {
  90. Protocol_binary protocol;
  91. public:
  92. Select_fetch_protocol_binary(THD *thd);
  93. virtual bool send_fields(List<Item> &list, uint flags);
  94. virtual bool send_data(List<Item> &items);
  95. virtual bool send_eof();
  96. #ifdef EMBEDDED_LIBRARY
  97. void begin_dataset()
  98. {
  99. protocol.begin_dataset();
  100. }
  101. #endif
  102. };
  103. /****************************************************************************/
  104. /**
  105. Prepared_statement: a statement that can contain placeholders.
  106. */
  107. class Prepared_statement: public Statement
  108. {
  109. public:
  110. enum flag_values
  111. {
  112. IS_IN_USE= 1,
  113. IS_SQL_PREPARE= 2
  114. };
  115. THD *thd;
  116. Select_fetch_protocol_binary result;
  117. Item_param **param_array;
  118. uint param_count;
  119. uint last_errno;
  120. uint flags;
  121. char last_error[MYSQL_ERRMSG_SIZE];
  122. #ifndef EMBEDDED_LIBRARY
  123. bool (*set_params)(Prepared_statement *st, uchar *data, uchar *data_end,
  124. uchar *read_pos, String *expanded_query);
  125. #else
  126. bool (*set_params_data)(Prepared_statement *st, String *expanded_query);
  127. #endif
  128. bool (*set_params_from_vars)(Prepared_statement *stmt,
  129. List<LEX_STRING>& varnames,
  130. String *expanded_query);
  131. public:
  132. Prepared_statement(THD *thd_arg);
  133. virtual ~Prepared_statement();
  134. void setup_set_params();
  135. virtual Query_arena::Type type() const;
  136. virtual void cleanup_stmt();
  137. bool set_name(LEX_STRING *name);
  138. inline void close_cursor() { delete cursor; cursor= 0; }
  139. inline bool is_in_use() { return flags & (uint) IS_IN_USE; }
  140. inline bool is_sql_prepare() const { return flags & (uint) IS_SQL_PREPARE; }
  141. void set_sql_prepare() { flags|= (uint) IS_SQL_PREPARE; }
  142. bool prepare(const char *packet, uint packet_length);
  143. bool execute_loop(String *expanded_query,
  144. bool open_cursor,
  145. uchar *packet_arg, uchar *packet_end_arg);
  146. /* Destroy this statement */
  147. void deallocate();
  148. private:
  149. /**
  150. The memory root to allocate parsed tree elements (instances of Item,
  151. SELECT_LEX and other classes).
  152. */
  153. MEM_ROOT main_mem_root;
  154. /* Version of the stored functions cache at the time of prepare. */
  155. ulong m_sp_cache_version;
  156. private:
  157. bool set_db(const char *db, uint db_length);
  158. bool set_parameters(String *expanded_query,
  159. uchar *packet, uchar *packet_end);
  160. bool execute(String *expanded_query, bool open_cursor);
  161. bool reprepare();
  162. bool validate_metadata(Prepared_statement *copy);
  163. void swap_prepared_statement(Prepared_statement *copy);
  164. };
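/*
  Lifecycle sketch (illustrative): mysqld_stmt_prepare() creates a
  Prepared_statement, calls prepare() and registers the object in
  thd->stmt_map; mysqld_stmt_execute() finds it there by id and calls
  execute_loop(); mysqld_stmt_close() removes it via deallocate().
*/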
  165. /******************************************************************************
  166. Implementation
  167. ******************************************************************************/
  168. inline bool is_param_null(const uchar *pos, ulong param_no)
  169. {
  170. return pos[param_no/8] & (1 << (param_no & 7));
  171. }
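/*
  Worked example (illustrative): with param_count= 10 the null bitmap sent
  by the client occupies (10 + 7) / 8 = 2 bytes; parameter number 9 is NULL
  iff bit (9 & 7) = 1 of byte 9 / 8 = 1 is set, i.e. iff (pos[1] & 0x02).
*/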
  172. /**
  173. Find a prepared statement in the statement map by id.
  174. Try to find a prepared statement and set THD error if it's not found.
  175. @param thd thread handle
  176. @param id statement id
  177. @param where the place from which this function is called (for
  178. error reporting).
  179. @return
  180. 0 if the statement was not found, a pointer otherwise.
  181. */
  182. static Prepared_statement *
  183. find_prepared_statement(THD *thd, ulong id)
  184. {
  185. /*
  186. To strictly separate namespaces of SQL prepared statements and C API
  187. prepared statements find() will return 0 if there is a named prepared
  188. statement with such id.
  189. */
  190. Statement *stmt= thd->stmt_map.find(id);
  191. if (stmt == 0 || stmt->type() != Query_arena::PREPARED_STATEMENT)
  192. return NULL;
  193. return (Prepared_statement *) stmt;
  194. }
  195. /**
  196. Send prepared statement id and metadata to the client after prepare.
  197. @todo
  198. Fix this nasty upcast from List<Item_param> to List<Item>
  199. @return
  200. 0 in case of success, 1 otherwise
  201. */
  202. #ifndef EMBEDDED_LIBRARY
  203. static bool send_prep_stmt(Prepared_statement *stmt, uint columns)
  204. {
  205. NET *net= &stmt->thd->net;
  206. uchar buff[12];
  207. uint tmp;
  208. int error;
  209. THD *thd= stmt->thd;
  210. DBUG_ENTER("send_prep_stmt");
  211. buff[0]= 0; /* OK packet indicator */
  212. int4store(buff+1, stmt->id);
  213. int2store(buff+5, columns);
  214. int2store(buff+7, stmt->param_count);
  215. buff[9]= 0; // Guard against a 4.1 client
  216. tmp= min(stmt->thd->warning_info->statement_warn_count(), 65535);
  217. int2store(buff+10, tmp);
  218. /*
  219. Send types and names of placeholders to the client
  220. XXX: fix this nasty upcast from List<Item_param> to List<Item>
  221. */
  222. error= my_net_write(net, buff, sizeof(buff));
  223. if (stmt->param_count && ! error)
  224. {
  225. error= thd->protocol_text.send_fields((List<Item> *)
  226. &stmt->lex->param_list,
  227. Protocol::SEND_EOF);
  228. }
  229. /* Flag that a response has already been sent */
  230. thd->stmt_da->disable_status();
  231. DBUG_RETURN(error);
  232. }
  233. #else
  234. static bool send_prep_stmt(Prepared_statement *stmt,
  235. uint columns __attribute__((unused)))
  236. {
  237. THD *thd= stmt->thd;
  238. thd->client_stmt_id= stmt->id;
  239. thd->client_param_count= stmt->param_count;
  240. thd->clear_error();
  241. thd->stmt_da->disable_status();
  242. return 0;
  243. }
  244. #endif /*!EMBEDDED_LIBRARY*/
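/*
  Worked example (illustrative) of the 12-byte reply built by the
  non-embedded send_prep_stmt() above: for statement id 1, 2 result columns,
  2 parameters and 0 warnings the buffer is
    00 | 01 00 00 00 | 02 00 | 02 00 | 00  | 00 00
    OK | stmt id     | cols  | params| pad | warning count
  followed, because param_count > 0, by the placeholder metadata fields.
*/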
  245. #ifndef EMBEDDED_LIBRARY
  246. /**
  247. Read the length of the parameter data and return it back to
  248. the caller.
  249. Read data length, position the packet to the first byte after it,
  250. and return the length to the caller.
  251. @param packet a pointer to the data
  252. @param len remaining packet length
  253. @return
  254. Length of data piece.
  255. */
  256. static ulong get_param_length(uchar **packet, ulong len)
  257. {
  258. reg1 uchar *pos= *packet;
  259. if (len < 1)
  260. return 0;
  261. if (*pos < 251)
  262. {
  263. (*packet)++;
  264. return (ulong) *pos;
  265. }
  266. if (len < 3)
  267. return 0;
  268. if (*pos == 252)
  269. {
  270. (*packet)+=3;
  271. return (ulong) uint2korr(pos+1);
  272. }
  273. if (len < 4)
  274. return 0;
  275. if (*pos == 253)
  276. {
  277. (*packet)+=4;
  278. return (ulong) uint3korr(pos+1);
  279. }
  280. if (len < 5)
  281. return 0;
  282. (*packet)+=9; // Must be 254 when here
  283. /*
  284. In our client-server protocol all numbers bigger than 2^24 are
  285. stored as 8 bytes with uint8korr. Here we always know that the
  286. parameter length is less than 2^32, so we don't look at the second
  287. 4 bytes. But we still need to obey the protocol, hence the 9 in the
  288. assignment above.
  289. */
  290. return (ulong) uint4korr(pos+1);
  291. }
  292. #else
  293. #define get_param_length(packet, len) len
  294. #endif /*!EMBEDDED_LIBRARY*/
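/*
  Worked example (illustrative) of the length encoding read by
  get_param_length(): a first byte below 251 is itself the length (1 byte
  consumed); 252 means the length follows in 2 bytes, 253 in 3 bytes, and
  254 in 8 bytes of which only the low 4 are used here. For instance the
  bytes FC 34 12 decode to length 0x1234 = 4660 and advance *packet by 3.
*/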
  295. /**
  296. Data conversion routines.
  297. All these functions read the data from pos, convert it to requested
  298. type and assign to param; pos is advanced to predefined length.
  299. Note that the NULL handling is examined at first execution
  300. (i.e. when input types are altered); for all subsequent executions
  301. we don't read any values for this.
  302. @param param parameter item
  303. @param pos input data buffer
  304. @param len length of data in the buffer
  305. */
  306. static void set_param_tiny(Item_param *param, uchar **pos, ulong len)
  307. {
  308. #ifndef EMBEDDED_LIBRARY
  309. if (len < 1)
  310. return;
  311. #endif
  312. int8 value= (int8) **pos;
  313. param->set_int(param->unsigned_flag ? (longlong) ((uint8) value) :
  314. (longlong) value, 4);
  315. *pos+= 1;
  316. }
  317. static void set_param_short(Item_param *param, uchar **pos, ulong len)
  318. {
  319. int16 value;
  320. #ifndef EMBEDDED_LIBRARY
  321. if (len < 2)
  322. return;
  323. value= sint2korr(*pos);
  324. #else
  325. shortget(value, *pos);
  326. #endif
  327. param->set_int(param->unsigned_flag ? (longlong) ((uint16) value) :
  328. (longlong) value, 6);
  329. *pos+= 2;
  330. }
  331. static void set_param_int32(Item_param *param, uchar **pos, ulong len)
  332. {
  333. int32 value;
  334. #ifndef EMBEDDED_LIBRARY
  335. if (len < 4)
  336. return;
  337. value= sint4korr(*pos);
  338. #else
  339. longget(value, *pos);
  340. #endif
  341. param->set_int(param->unsigned_flag ? (longlong) ((uint32) value) :
  342. (longlong) value, 11);
  343. *pos+= 4;
  344. }
  345. static void set_param_int64(Item_param *param, uchar **pos, ulong len)
  346. {
  347. longlong value;
  348. #ifndef EMBEDDED_LIBRARY
  349. if (len < 8)
  350. return;
  351. value= (longlong) sint8korr(*pos);
  352. #else
  353. longlongget(value, *pos);
  354. #endif
  355. param->set_int(value, 21);
  356. *pos+= 8;
  357. }
  358. static void set_param_float(Item_param *param, uchar **pos, ulong len)
  359. {
  360. float data;
  361. #ifndef EMBEDDED_LIBRARY
  362. if (len < 4)
  363. return;
  364. float4get(data,*pos);
  365. #else
  366. floatget(data, *pos);
  367. #endif
  368. param->set_double((double) data);
  369. *pos+= 4;
  370. }
  371. static void set_param_double(Item_param *param, uchar **pos, ulong len)
  372. {
  373. double data;
  374. #ifndef EMBEDDED_LIBRARY
  375. if (len < 8)
  376. return;
  377. float8get(data,*pos);
  378. #else
  379. doubleget(data, *pos);
  380. #endif
  381. param->set_double((double) data);
  382. *pos+= 8;
  383. }
  384. static void set_param_decimal(Item_param *param, uchar **pos, ulong len)
  385. {
  386. ulong length= get_param_length(pos, len);
  387. param->set_decimal((char*)*pos, length);
  388. *pos+= length;
  389. }
  390. #ifndef EMBEDDED_LIBRARY
  391. /*
  392. Read date/time/datetime parameter values from network (binary
  393. protocol). See writing counterparts of these functions in
  394. libmysql.c (store_param_{time,date,datetime}).
  395. */
  396. /**
  397. @todo
  398. Add warning 'Data truncated' here
  399. */
  400. static void set_param_time(Item_param *param, uchar **pos, ulong len)
  401. {
  402. MYSQL_TIME tm;
  403. ulong length= get_param_length(pos, len);
  404. if (length >= 8)
  405. {
  406. uchar *to= *pos;
  407. uint day;
  408. tm.neg= (bool) to[0];
  409. day= (uint) sint4korr(to+1);
  410. tm.hour= (uint) to[5] + day * 24;
  411. tm.minute= (uint) to[6];
  412. tm.second= (uint) to[7];
  413. tm.second_part= (length > 8) ? (ulong) sint4korr(to+8) : 0;
  414. if (tm.hour > 838)
  415. {
  416. /* TODO: add warning 'Data truncated' here */
  417. tm.hour= 838;
  418. tm.minute= 59;
  419. tm.second= 59;
  420. }
  421. tm.day= tm.year= tm.month= 0;
  422. }
  423. else
  424. set_zero_time(&tm, MYSQL_TIMESTAMP_TIME);
  425. param->set_time(&tm, MYSQL_TIMESTAMP_TIME,
  426. MAX_TIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  427. *pos+= length;
  428. }
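/*
  Worked example (illustrative): the TIME value '100:30:20' arrives as
  length 8 followed by the bytes 00 | 04 00 00 00 | 04 | 1E | 14, that is
  sign 0, 4 days, hour 4, minute 30, second 20; the code above folds the
  days into tm.hour: 4 + 4 * 24 = 100.
*/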
  429. static void set_param_datetime(Item_param *param, uchar **pos, ulong len)
  430. {
  431. MYSQL_TIME tm;
  432. ulong length= get_param_length(pos, len);
  433. if (length >= 4)
  434. {
  435. uchar *to= *pos;
  436. tm.neg= 0;
  437. tm.year= (uint) sint2korr(to);
  438. tm.month= (uint) to[2];
  439. tm.day= (uint) to[3];
  440. if (length > 4)
  441. {
  442. tm.hour= (uint) to[4];
  443. tm.minute= (uint) to[5];
  444. tm.second= (uint) to[6];
  445. }
  446. else
  447. tm.hour= tm.minute= tm.second= 0;
  448. tm.second_part= (length > 7) ? (ulong) sint4korr(to+7) : 0;
  449. }
  450. else
  451. set_zero_time(&tm, MYSQL_TIMESTAMP_DATETIME);
  452. param->set_time(&tm, MYSQL_TIMESTAMP_DATETIME,
  453. MAX_DATETIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  454. *pos+= length;
  455. }
  456. static void set_param_date(Item_param *param, uchar **pos, ulong len)
  457. {
  458. MYSQL_TIME tm;
  459. ulong length= get_param_length(pos, len);
  460. if (length >= 4)
  461. {
  462. uchar *to= *pos;
  463. tm.year= (uint) sint2korr(to);
  464. tm.month= (uint) to[2];
  465. tm.day= (uint) to[3];
  466. tm.hour= tm.minute= tm.second= 0;
  467. tm.second_part= 0;
  468. tm.neg= 0;
  469. }
  470. else
  471. set_zero_time(&tm, MYSQL_TIMESTAMP_DATE);
  472. param->set_time(&tm, MYSQL_TIMESTAMP_DATE,
  473. MAX_DATE_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  474. *pos+= length;
  475. }
  476. #else/*!EMBEDDED_LIBRARY*/
  477. /**
  478. @todo
  479. Add warning 'Data truncated' here
  480. */
  481. void set_param_time(Item_param *param, uchar **pos, ulong len)
  482. {
  483. MYSQL_TIME tm= *((MYSQL_TIME*)*pos);
  484. tm.hour+= tm.day * 24;
  485. tm.day= tm.year= tm.month= 0;
  486. if (tm.hour > 838)
  487. {
  488. /* TODO: add warning 'Data truncated' here */
  489. tm.hour= 838;
  490. tm.minute= 59;
  491. tm.second= 59;
  492. }
  493. param->set_time(&tm, MYSQL_TIMESTAMP_TIME,
  494. MAX_TIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  495. }
  496. void set_param_datetime(Item_param *param, uchar **pos, ulong len)
  497. {
  498. MYSQL_TIME tm= *((MYSQL_TIME*)*pos);
  499. tm.neg= 0;
  500. param->set_time(&tm, MYSQL_TIMESTAMP_DATETIME,
  501. MAX_DATETIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  502. }
  503. void set_param_date(Item_param *param, uchar **pos, ulong len)
  504. {
  505. MYSQL_TIME *to= (MYSQL_TIME*)*pos;
  506. param->set_time(to, MYSQL_TIMESTAMP_DATE,
  507. MAX_DATE_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
  508. }
  509. #endif /*!EMBEDDED_LIBRARY*/
  510. static void set_param_str(Item_param *param, uchar **pos, ulong len)
  511. {
  512. ulong length= get_param_length(pos, len);
  513. if (length > len)
  514. length= len;
  515. param->set_str((const char *)*pos, length);
  516. *pos+= length;
  517. }
  518. #undef get_param_length
  519. static void setup_one_conversion_function(THD *thd, Item_param *param,
  520. uchar param_type)
  521. {
  522. switch (param_type) {
  523. case MYSQL_TYPE_TINY:
  524. param->set_param_func= set_param_tiny;
  525. param->item_type= Item::INT_ITEM;
  526. param->item_result_type= INT_RESULT;
  527. break;
  528. case MYSQL_TYPE_SHORT:
  529. param->set_param_func= set_param_short;
  530. param->item_type= Item::INT_ITEM;
  531. param->item_result_type= INT_RESULT;
  532. break;
  533. case MYSQL_TYPE_LONG:
  534. param->set_param_func= set_param_int32;
  535. param->item_type= Item::INT_ITEM;
  536. param->item_result_type= INT_RESULT;
  537. break;
  538. case MYSQL_TYPE_LONGLONG:
  539. param->set_param_func= set_param_int64;
  540. param->item_type= Item::INT_ITEM;
  541. param->item_result_type= INT_RESULT;
  542. break;
  543. case MYSQL_TYPE_FLOAT:
  544. param->set_param_func= set_param_float;
  545. param->item_type= Item::REAL_ITEM;
  546. param->item_result_type= REAL_RESULT;
  547. break;
  548. case MYSQL_TYPE_DOUBLE:
  549. param->set_param_func= set_param_double;
  550. param->item_type= Item::REAL_ITEM;
  551. param->item_result_type= REAL_RESULT;
  552. break;
  553. case MYSQL_TYPE_DECIMAL:
  554. case MYSQL_TYPE_NEWDECIMAL:
  555. param->set_param_func= set_param_decimal;
  556. param->item_type= Item::DECIMAL_ITEM;
  557. param->item_result_type= DECIMAL_RESULT;
  558. break;
  559. case MYSQL_TYPE_TIME:
  560. param->set_param_func= set_param_time;
  561. param->item_type= Item::STRING_ITEM;
  562. param->item_result_type= STRING_RESULT;
  563. break;
  564. case MYSQL_TYPE_DATE:
  565. param->set_param_func= set_param_date;
  566. param->item_type= Item::STRING_ITEM;
  567. param->item_result_type= STRING_RESULT;
  568. break;
  569. case MYSQL_TYPE_DATETIME:
  570. case MYSQL_TYPE_TIMESTAMP:
  571. param->set_param_func= set_param_datetime;
  572. param->item_type= Item::STRING_ITEM;
  573. param->item_result_type= STRING_RESULT;
  574. break;
  575. case MYSQL_TYPE_TINY_BLOB:
  576. case MYSQL_TYPE_MEDIUM_BLOB:
  577. case MYSQL_TYPE_LONG_BLOB:
  578. case MYSQL_TYPE_BLOB:
  579. param->set_param_func= set_param_str;
  580. param->value.cs_info.character_set_of_placeholder= &my_charset_bin;
  581. param->value.cs_info.character_set_client=
  582. thd->variables.character_set_client;
  583. DBUG_ASSERT(thd->variables.character_set_client);
  584. param->value.cs_info.final_character_set_of_str_value= &my_charset_bin;
  585. param->item_type= Item::STRING_ITEM;
  586. param->item_result_type= STRING_RESULT;
  587. break;
  588. default:
  589. /*
  590. The client library ensures that we won't get any other typecodes
  591. except the typecodes above and typecodes for string types. Marking the
  592. label as 'default' lets us handle malformed packets as well.
  593. */
  594. {
  595. CHARSET_INFO *fromcs= thd->variables.character_set_client;
  596. CHARSET_INFO *tocs= thd->variables.collation_connection;
  597. uint32 dummy_offset;
  598. param->value.cs_info.character_set_of_placeholder= fromcs;
  599. param->value.cs_info.character_set_client= fromcs;
  600. /*
  601. Setup source and destination character sets so that they
  602. are different only if conversion is necessary: this will
  603. make later checks easier.
  604. */
  605. param->value.cs_info.final_character_set_of_str_value=
  606. String::needs_conversion(0, fromcs, tocs, &dummy_offset) ?
  607. tocs : fromcs;
  608. param->set_param_func= set_param_str;
  609. /*
  610. Exact value of max_length is not known unless data is converted to
  611. charset of connection, so we have to set it later.
  612. */
  613. param->item_type= Item::STRING_ITEM;
  614. param->item_result_type= STRING_RESULT;
  615. }
  616. }
  617. param->param_type= (enum enum_field_types) param_type;
  618. }
  619. #ifndef EMBEDDED_LIBRARY
  620. /**
  621. Routines to assign parameters from data supplied by the client.
  622. Update the parameter markers by reading data from the packet
  623. and generate a valid query for logging.
  624. @note
  625. This function, along with other _with_log functions, is called when one of the
  626. binary, slow or general logs is open. Logging of prepared statements in
  627. all cases is performed by means of conventional queries: if parameter
  628. data was supplied from C API, each placeholder in the query is
  629. replaced with its actual value; if we're logging a [Dynamic] SQL
  630. prepared statement, parameter markers are replaced with variable names.
  631. Example:
  632. @verbatim
  633. mysqld_stmt_prepare("UPDATE t1 SET a=a*1.25 WHERE a=?")
  634. --> general log gets [Prepare] UPDATE t1 SET a=a*1.25 WHERE a=?
  635. mysqld_stmt_execute(stmt);
  636. --> general and binary logs get
  637. [Execute] UPDATE t1 SET a=a*1.25 WHERE a=1
  638. @endverbatim
  639. If a statement has been prepared using SQL syntax:
  640. @verbatim
  641. PREPARE stmt FROM "UPDATE t1 SET a=a*1.25 WHERE a=?"
  642. --> general log gets
  643. [Query] PREPARE stmt FROM "UPDATE ..."
  644. EXECUTE stmt USING @a
  645. --> general log gets
  646. [Query] EXECUTE stmt USING @a;
  647. @endverbatim
  648. @retval
  649. 0 if success
  650. @retval
  651. 1 otherwise
  652. */
  653. static bool insert_params_with_log(Prepared_statement *stmt, uchar *null_array,
  654. uchar *read_pos, uchar *data_end,
  655. String *query)
  656. {
  657. THD *thd= stmt->thd;
  658. Item_param **begin= stmt->param_array;
  659. Item_param **end= begin + stmt->param_count;
  660. uint32 length= 0;
  661. String str;
  662. const String *res;
  663. DBUG_ENTER("insert_params_with_log");
  664. if (query->copy(stmt->query, stmt->query_length, default_charset_info))
  665. DBUG_RETURN(1);
  666. for (Item_param **it= begin; it < end; ++it)
  667. {
  668. Item_param *param= *it;
  669. if (param->state != Item_param::LONG_DATA_VALUE)
  670. {
  671. if (is_param_null(null_array, (uint) (it - begin)))
  672. param->set_null();
  673. else
  674. {
  675. if (read_pos >= data_end)
  676. DBUG_RETURN(1);
  677. param->set_param_func(param, &read_pos, (uint) (data_end - read_pos));
  678. if (param->state == Item_param::NO_VALUE)
  679. DBUG_RETURN(1);
  680. }
  681. }
  682. res= param->query_val_str(&str);
  683. if (param->convert_str_value(thd))
  684. DBUG_RETURN(1); /* out of memory */
  685. if (query->replace(param->pos_in_query+length, 1, *res))
  686. DBUG_RETURN(1);
  687. length+= res->length()-1;
  688. }
  689. DBUG_RETURN(0);
  690. }
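/*
  Worked example (illustrative): for "UPDATE t1 SET a=? WHERE b=?" with
  parameters 10 and 'abc' the loop above rewrites the copy of the query to
  "UPDATE t1 SET a=10 WHERE b='abc'". After the first replacement 'length'
  becomes 2 - 1 = 1, which shifts pos_in_query of the second marker by one
  byte so it still points at the remaining '?'.
*/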
  691. static bool insert_params(Prepared_statement *stmt, uchar *null_array,
  692. uchar *read_pos, uchar *data_end,
  693. String *expanded_query)
  694. {
  695. Item_param **begin= stmt->param_array;
  696. Item_param **end= begin + stmt->param_count;
  697. DBUG_ENTER("insert_params");
  698. for (Item_param **it= begin; it < end; ++it)
  699. {
  700. Item_param *param= *it;
  701. if (param->state != Item_param::LONG_DATA_VALUE)
  702. {
  703. if (is_param_null(null_array, (uint) (it - begin)))
  704. param->set_null();
  705. else
  706. {
  707. if (read_pos >= data_end)
  708. DBUG_RETURN(1);
  709. param->set_param_func(param, &read_pos, (uint) (data_end - read_pos));
  710. if (param->state == Item_param::NO_VALUE)
  711. DBUG_RETURN(1);
  712. }
  713. }
  714. if (param->convert_str_value(stmt->thd))
  715. DBUG_RETURN(1); /* out of memory */
  716. }
  717. DBUG_RETURN(0);
  718. }
  719. static bool setup_conversion_functions(Prepared_statement *stmt,
  720. uchar **data, uchar *data_end)
  721. {
  722. /* skip null bits */
  723. uchar *read_pos= *data + (stmt->param_count+7) / 8;
  724. DBUG_ENTER("setup_conversion_functions");
  725. if (*read_pos++) //types supplied / first execute
  726. {
  727. /*
  728. First execute or types altered by the client, setup the
  729. conversion routines for all parameters (one time)
  730. */
  731. Item_param **it= stmt->param_array;
  732. Item_param **end= it + stmt->param_count;
  733. THD *thd= stmt->thd;
  734. for (; it < end; ++it)
  735. {
  736. ushort typecode;
  737. const uint signed_bit= 1 << 15;
  738. if (read_pos >= data_end)
  739. DBUG_RETURN(1);
  740. typecode= sint2korr(read_pos);
  741. read_pos+= 2;
  742. (**it).unsigned_flag= test(typecode & signed_bit);
  743. setup_one_conversion_function(thd, *it, (uchar) (typecode & ~signed_bit));
  744. }
  745. }
  746. *data= read_pos;
  747. DBUG_RETURN(0);
  748. }
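/*
  Worked example (illustrative): if the client binds an unsigned TINYINT
  and a signed BIGINT, the types block read above contains the little-endian
  typecodes 01 80 and 08 00: 0x8001 is MYSQL_TYPE_TINY (1) with bit 15 set,
  so unsigned_flag becomes true, while 0x0008 is MYSQL_TYPE_LONGLONG (8)
  with the bit clear.
*/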
  749. #else
  750. /**
  751. Embedded counterparts of parameter assignment routines.
  752. The main difference between the embedded library and the server is
  753. that in the embedded case we don't serialize/deserialize parameter data.
  754. Additionally, for an unknown reason, the client-side flag raised for
  755. changed placeholder types is ignored and we simply set up conversion
  756. functions at each execute (TODO: fix).
  757. */
  758. static bool emb_insert_params(Prepared_statement *stmt, String *expanded_query)
  759. {
  760. THD *thd= stmt->thd;
  761. Item_param **it= stmt->param_array;
  762. Item_param **end= it + stmt->param_count;
  763. MYSQL_BIND *client_param= stmt->thd->client_params;
  764. DBUG_ENTER("emb_insert_params");
  765. for (; it < end; ++it, ++client_param)
  766. {
  767. Item_param *param= *it;
  768. setup_one_conversion_function(thd, param, client_param->buffer_type);
  769. if (param->state != Item_param::LONG_DATA_VALUE)
  770. {
  771. if (*client_param->is_null)
  772. param->set_null();
  773. else
  774. {
  775. uchar *buff= (uchar*) client_param->buffer;
  776. param->unsigned_flag= client_param->is_unsigned;
  777. param->set_param_func(param, &buff,
  778. client_param->length ?
  779. *client_param->length :
  780. client_param->buffer_length);
  781. if (param->state == Item_param::NO_VALUE)
  782. DBUG_RETURN(1);
  783. }
  784. }
  785. if (param->convert_str_value(thd))
  786. DBUG_RETURN(1); /* out of memory */
  787. }
  788. DBUG_RETURN(0);
  789. }
  790. static bool emb_insert_params_with_log(Prepared_statement *stmt,
  791. String *query)
  792. {
  793. THD *thd= stmt->thd;
  794. Item_param **it= stmt->param_array;
  795. Item_param **end= it + stmt->param_count;
  796. MYSQL_BIND *client_param= thd->client_params;
  797. String str;
  798. const String *res;
  799. uint32 length= 0;
  800. DBUG_ENTER("emb_insert_params_with_log");
  801. if (query->copy(stmt->query, stmt->query_length, default_charset_info))
  802. DBUG_RETURN(1);
  803. for (; it < end; ++it, ++client_param)
  804. {
  805. Item_param *param= *it;
  806. setup_one_conversion_function(thd, param, client_param->buffer_type);
  807. if (param->state != Item_param::LONG_DATA_VALUE)
  808. {
  809. if (*client_param->is_null)
  810. param->set_null();
  811. else
  812. {
  813. uchar *buff= (uchar*)client_param->buffer;
  814. param->unsigned_flag= client_param->is_unsigned;
  815. param->set_param_func(param, &buff,
  816. client_param->length ?
  817. *client_param->length :
  818. client_param->buffer_length);
  819. if (param->state == Item_param::NO_VALUE)
  820. DBUG_RETURN(1);
  821. }
  822. }
  823. res= param->query_val_str(&str);
  824. if (param->convert_str_value(thd))
  825. DBUG_RETURN(1); /* out of memory */
  826. if (query->replace(param->pos_in_query+length, 1, *res))
  827. DBUG_RETURN(1);
  828. length+= res->length()-1;
  829. }
  830. DBUG_RETURN(0);
  831. }
  832. #endif /*!EMBEDDED_LIBRARY*/
  833. /**
  834. Setup data conversion routines using an array of parameter
  835. markers from the original prepared statement.
  836. Swap the parameter data of the original prepared
  837. statement to the new one.
  838. Used only when we re-prepare a prepared statement.
  839. There are two reasons for this function to exist:
  840. 1) In the binary client/server protocol, parameter metadata
  841. is sent only at first execute. Consequently, if we need to
  842. reprepare a prepared statement at a subsequent execution,
  843. we may not have metadata information in the packet.
  844. In that case we use the parameter array of the original
  845. prepared statement to setup parameter types of the new
  846. prepared statement.
  847. 2) In the binary client/server protocol, we may supply
  848. long data in pieces. When the last piece is supplied,
  849. we assemble the pieces and convert them from client
  850. character set to the connection character set. After
  851. that the parameter value is only available inside
  852. the parameter, the original pieces are lost, and thus
  853. we can only assign the corresponding parameter of the
  854. reprepared statement from the original value.
  855. @param[out] param_array_dst parameter markers of the new statement
  856. @param[in] param_array_src parameter markers of the original
  857. statement
  858. @param[in] param_count total number of parameters. Is the
  859. same in src and dst arrays, since
  860. the statement query is the same
  861. @return this function never fails
  862. */
  863. static void
  864. swap_parameter_array(Item_param **param_array_dst,
  865. Item_param **param_array_src,
  866. uint param_count)
  867. {
  868. Item_param **dst= param_array_dst;
  869. Item_param **src= param_array_src;
  870. Item_param **end= param_array_dst + param_count;
  871. for (; dst < end; ++src, ++dst)
  872. (*dst)->set_param_type_and_swap_value(*src);
  873. }
  874. /**
  875. Assign prepared statement parameters from user variables.
  876. @param stmt Statement
  877. @param varnames List of variables. Caller must ensure that number
  878. of variables in the list is equal to number of statement
  879. parameters
  880. @param query Ignored
  881. */
  882. static bool insert_params_from_vars(Prepared_statement *stmt,
  883. List<LEX_STRING>& varnames,
  884. String *query __attribute__((unused)))
  885. {
  886. Item_param **begin= stmt->param_array;
  887. Item_param **end= begin + stmt->param_count;
  888. user_var_entry *entry;
  889. LEX_STRING *varname;
  890. List_iterator<LEX_STRING> var_it(varnames);
  891. DBUG_ENTER("insert_params_from_vars");
  892. for (Item_param **it= begin; it < end; ++it)
  893. {
  894. Item_param *param= *it;
  895. varname= var_it++;
  896. entry= (user_var_entry*)hash_search(&stmt->thd->user_vars,
  897. (uchar*) varname->str,
  898. varname->length);
  899. if (param->set_from_user_var(stmt->thd, entry) ||
  900. param->convert_str_value(stmt->thd))
  901. DBUG_RETURN(1);
  902. }
  903. DBUG_RETURN(0);
  904. }
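/*
  Usage sketch (illustrative): for EXECUTE stmt USING @a, @b the parser puts
  the names @a and @b into 'varnames'; the loop above looks each one up in
  thd->user_vars and assigns its value to the corresponding placeholder via
  set_from_user_var() and convert_str_value().
*/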
  905. /**
  906. Do the same as insert_params_from_vars but also construct query text for
  907. binary log.
  908. @param stmt Prepared statement
  909. @param varnames List of variables. Caller must ensure that number of
  910. variables in the list is equal to number of statement
  911. parameters
  912. @param query The query with parameter markers replaced with corresponding
  913. user variables that were used to execute the query.
  914. */
  915. static bool insert_params_from_vars_with_log(Prepared_statement *stmt,
  916. List<LEX_STRING>& varnames,
  917. String *query)
  918. {
  919. Item_param **begin= stmt->param_array;
  920. Item_param **end= begin + stmt->param_count;
  921. user_var_entry *entry;
  922. LEX_STRING *varname;
  923. List_iterator<LEX_STRING> var_it(varnames);
  924. String buf;
  925. const String *val;
  926. uint32 length= 0;
  927. THD *thd= stmt->thd;
  928. DBUG_ENTER("insert_params_from_vars");
  929. if (query->copy(stmt->query, stmt->query_length, default_charset_info))
  930. DBUG_RETURN(1);
  931. for (Item_param **it= begin; it < end; ++it)
  932. {
  933. Item_param *param= *it;
  934. varname= var_it++;
  935. entry= (user_var_entry *) hash_search(&thd->user_vars, (uchar*) varname->str,
  936. varname->length);
  937. /*
  938. We have to call the setup_one_conversion_function() here to set
  939. the parameter's members that might be needed later
  940. (e.g. value.cs_info.character_set_client is used in the query_val_str()).
  941. */
  942. setup_one_conversion_function(thd, param, param->param_type);
  943. if (param->set_from_user_var(thd, entry))
  944. DBUG_RETURN(1);
  945. val= param->query_val_str(&buf);
  946. if (param->convert_str_value(thd))
  947. DBUG_RETURN(1); /* out of memory */
  948. if (query->replace(param->pos_in_query+length, 1, *val))
  949. DBUG_RETURN(1);
  950. length+= val->length()-1;
  951. }
  952. DBUG_RETURN(0);
  953. }
  954. /**
  955. Validate INSERT statement.
  956. @param stmt prepared statement
  957. @param tables global/local table list
  958. @retval
  959. FALSE success
  960. @retval
  961. TRUE error, error message is set in THD
  962. */
  963. static bool mysql_test_insert(Prepared_statement *stmt,
  964. TABLE_LIST *table_list,
  965. List<Item> &fields,
  966. List<List_item> &values_list,
  967. List<Item> &update_fields,
  968. List<Item> &update_values,
  969. enum_duplicates duplic)
  970. {
  971. THD *thd= stmt->thd;
  972. List_iterator_fast<List_item> its(values_list);
  973. List_item *values;
  974. DBUG_ENTER("mysql_test_insert");
  975. if (insert_precheck(thd, table_list))
  976. goto error;
  977. /*
  978. open temporary memory pool for temporary data allocated by derived
  979. tables & preparation procedure
  980. Note that this is done without locks (should not be needed as we will not
  981. access any data here).
  982. If we were to use locks, we would have to ensure we are not using
  983. TL_WRITE_DELAYED as having two such locks can cause table corruption.
  984. */
  985. if (open_normal_and_derived_tables(thd, table_list, 0))
  986. goto error;
  987. if ((values= its++))
  988. {
  989. uint value_count;
  990. ulong counter= 0;
  991. Item *unused_conds= 0;
  992. if (table_list->table)
  993. {
  994. // don't allocate insert_values
  995. table_list->table->insert_values=(uchar *)1;
  996. }
  997. if (mysql_prepare_insert(thd, table_list, table_list->table,
  998. fields, values, update_fields, update_values,
  999. duplic, &unused_conds, FALSE, FALSE, FALSE))
  1000. goto error;
  1001. value_count= values->elements;
  1002. its.rewind();
  1003. if (table_list->lock_type == TL_WRITE_DELAYED &&
  1004. !(table_list->table->file->ha_table_flags() & HA_CAN_INSERT_DELAYED))
  1005. {
  1006. my_error(ER_DELAYED_NOT_SUPPORTED, MYF(0), (table_list->view ?
  1007. table_list->view_name.str :
  1008. table_list->table_name));
  1009. goto error;
  1010. }
  1011. while ((values= its++))
  1012. {
  1013. counter++;
  1014. if (values->elements != value_count)
  1015. {
  1016. my_error(ER_WRONG_VALUE_COUNT_ON_ROW, MYF(0), counter);
  1017. goto error;
  1018. }
  1019. if (setup_fields(thd, 0, *values, MARK_COLUMNS_NONE, 0, 0))
  1020. goto error;
  1021. }
  1022. }
  1023. DBUG_RETURN(FALSE);
  1024. error:
  1025. /* insert_values is cleared in open_table */
  1026. DBUG_RETURN(TRUE);
  1027. }
  1028. /**
  1029. Validate UPDATE statement.
  1030. @param stmt prepared statement
  1031. @param tables list of tables used in this query
  1032. @todo
  1033. - here we should send types of placeholders to the client.
  1034. @retval
  1035. 0 success
  1036. @retval
  1037. 1 error, error message is set in THD
  1038. @retval
  1039. 2 convert to multi_update
  1040. */
  1041. static int mysql_test_update(Prepared_statement *stmt,
  1042. TABLE_LIST *table_list)
  1043. {
  1044. int res;
  1045. THD *thd= stmt->thd;
  1046. uint table_count= 0;
  1047. SELECT_LEX *select= &stmt->lex->select_lex;
  1048. #ifndef NO_EMBEDDED_ACCESS_CHECKS
  1049. uint want_privilege;
  1050. #endif
  1051. DBUG_ENTER("mysql_test_update");
  1052. if (update_precheck(thd, table_list) ||
  1053. open_tables(thd, &table_list, &table_count, 0))
  1054. goto error;
  1055. if (table_list->multitable_view)
  1056. {
  1057. DBUG_ASSERT(table_list->view != 0);
  1058. DBUG_PRINT("info", ("Switch to multi-update"));
  1059. /* pass counter value */
  1060. thd->lex->table_count= table_count;
  1061. /* convert to multiupdate */
  1062. DBUG_RETURN(2);
  1063. }
  1064. /*
  1065. thd->fill_derived_tables() is false here for sure (because this is
  1066. preparation of a PS), so we do not even check it.
  1067. */
  1068. if (mysql_handle_derived(thd->lex, &mysql_derived_prepare))
  1069. goto error;
  1070. #ifndef NO_EMBEDDED_ACCESS_CHECKS
  1071. /* Force privilege re-checking for views after they have been opened. */
  1072. want_privilege= (table_list->view ? UPDATE_ACL :
  1073. table_list->grant.want_privilege);
  1074. #endif
  1075. if (mysql_prepare_update(thd, table_list, &select->where,
  1076. select->order_list.elements,
  1077. (ORDER *) select->order_list.first))
  1078. goto error;
  1079. #ifndef NO_EMBEDDED_ACCESS_CHECKS
  1080. table_list->grant.want_privilege= want_privilege;
  1081. table_list->table->grant.want_privilege= want_privilege;
  1082. table_list->register_want_access(want_privilege);
  1083. #endif
  1084. thd->lex->select_lex.no_wrap_view_item= TRUE;
  1085. res= setup_fields(thd, 0, select->item_list, MARK_COLUMNS_READ, 0, 0);
  1086. thd->lex->select_lex.no_wrap_view_item= FALSE;
  1087. if (res)
  1088. goto error;
  1089. #ifndef NO_EMBEDDED_ACCESS_CHECKS
  1090. /* Check values */
  1091. table_list->grant.want_privilege=
  1092. table_list->table->grant.want_privilege=
  1093. (SELECT_ACL & ~table_list->table->grant.privilege);
  1094. table_list->register_want_access(SELECT_ACL);
  1095. #endif
  1096. if (setup_fields(thd, 0, stmt->lex->value_list, MARK_COLUMNS_NONE, 0, 0))
  1097. goto error;
  1098. /* TODO: here we should send types of placeholders to the client. */
  1099. DBUG_RETURN(0);
  1100. error:
  1101. DBUG_RETURN(1);
  1102. }
  1103. /**
  1104. Validate DELETE statement.
  1105. @param stmt prepared statement
  1106. @param tables list of tables used in this query
  1107. @retval
  1108. FALSE success
  1109. @retval
  1110. TRUE error, error message is set in THD
  1111. */
  1112. static bool mysql_test_delete(Prepared_statement *stmt,
  1113. TABLE_LIST *table_list)
  1114. {
  1115. THD *thd= stmt->thd;
  1116. LEX *lex= stmt->lex;
  1117. DBUG_ENTER("mysql_test_delete");
  1118. if (delete_precheck(thd, table_list) ||
  1119. open_normal_and_derived_tables(thd, table_list, 0))
  1120. goto error;
  1121. if (!table_list->table)
  1122. {
  1123. my_error(ER_VIEW_DELETE_MERGE_VIEW, MYF(0),
  1124. table_list->view_db.str, table_list->view_name.str);
  1125. goto error;
  1126. }
  1127. DBUG_RETURN(mysql_prepare_delete(thd, table_list, &lex->select_lex.where));
  1128. error:
  1129. DBUG_RETURN(TRUE);
  1130. }
  1131. /**
  1132. Validate SELECT statement.
  1133. In case of success, if this query is not EXPLAIN, send column list info
  1134. back to the client.
  1135. @param stmt prepared statement
  1136. @param tables list of tables used in the query
  1137. @retval
  1138. 0 success
  1139. @retval
  1140. 1 error, error message is set in THD
  1141. @retval
  1142. 2 success, and statement metadata has been sent
  1143. */
  1144. static int mysql_test_select(Prepared_statement *stmt,
  1145. TABLE_LIST *tables)
  1146. {
  1147. THD *thd= stmt->thd;
  1148. LEX *lex= stmt->lex;
  1149. SELECT_LEX_UNIT *unit= &lex->unit;
  1150. DBUG_ENTER("mysql_test_select");
  1151. lex->select_lex.context.resolve_in_select_list= TRUE;
  1152. ulong privilege= lex->exchange ? SELECT_ACL | FILE_ACL : SELECT_ACL;
  1153. if (tables)
  1154. {
  1155. if (check_table_access(thd, privilege, tables, UINT_MAX, FALSE))
  1156. goto error;
  1157. }
  1158. else if (check_access(thd, privilege, any_db,0,0,0,0))
  1159. goto error;
  1160. if (!lex->result && !(lex->result= new (stmt->mem_root) select_send))
  1161. {
  1162. my_error(ER_OUTOFMEMORY, MYF(0), sizeof(select_send));
  1163. goto error;
  1164. }
  1165. if (open_normal_and_derived_tables(thd, tables, 0))
  1166. goto error;
  1167. thd->used_tables= 0; // Updated by setup_fields
  1168. /*
  1169. JOIN::prepare calls
  1170. It is not SELECT COMMAND for sure, so setup_tables will be called as
  1171. usual, and we pass 0 as setup_tables_done_option
  1172. */
  1173. if (unit->prepare(thd, 0, 0))
  1174. goto error;
  1175. if (!lex->describe && !stmt->is_sql_prepare())
  1176. {
  1177. /* Make copy of item list, as change_columns may change it */
  1178. List<Item> fields(lex->select_lex.item_list);
  1179. /* Change columns if a procedure like analyse() */
  1180. if (unit->last_procedure && unit->last_procedure->change_columns(fields))
  1181. goto error;
  1182. /*
  1183. We can use lex->result as it should've been prepared in
  1184. unit->prepare call above.
  1185. */
  1186. if (send_prep_stmt(stmt, lex->result->field_count(fields)) ||
  1187. lex->result->send_fields(fields, Protocol::SEND_EOF) ||
  1188. thd->protocol->flush())
  1189. goto error;
  1190. DBUG_RETURN(2);
  1191. }
  1192. DBUG_RETURN(0);
  1193. error:
  1194. DBUG_RETURN(1);
  1195. }
  1196. /**
  1197. Validate and prepare for execution DO statement expressions.
  1198. @param stmt prepared statement
  1199. @param tables list of tables used in this query
  1200. @param values list of expressions
  1201. @retval
  1202. FALSE success
  1203. @retval
  1204. TRUE error, error message is set in THD
  1205. */
  1206. static bool mysql_test_do_fields(Prepared_statement *stmt,
  1207. TABLE_LIST *tables,
  1208. List<Item> *values)
  1209. {
  1210. THD *thd= stmt->thd;
  1211. DBUG_ENTER("mysql_test_do_fields");
  1212. if (tables && check_table_access(thd, SELECT_ACL, tables, UINT_MAX, FALSE))
  1213. DBUG_RETURN(TRUE);
  1214. if (open_normal_and_derived_tables(thd, tables, 0))
  1215. DBUG_RETURN(TRUE);
  1216. DBUG_RETURN(setup_fields(thd, 0, *values, MARK_COLUMNS_NONE, 0, 0));
  1217. }
  1218. /**
  1219. Validate and prepare for execution SET statement expressions.
  1220. @param stmt prepared statement
  1221. @param tables list of tables used in this query
  1222. @param values list of expressions
  1223. @retval
  1224. FALSE success
  1225. @retval
  1226. TRUE error, error message is set in THD
  1227. */
  1228. static bool mysql_test_set_fields(Prepared_statement *stmt,
  1229. TABLE_LIST *tables,
  1230. List<set_var_base> *var_list)
  1231. {
  1232. DBUG_ENTER("mysql_test_set_fields");
  1233. List_iterator_fast<set_var_base> it(*var_list);
  1234. THD *thd= stmt->thd;
  1235. set_var_base *var;
  1236. if ((tables && check_table_access(thd, SELECT_ACL, tables, UINT_MAX, FALSE))
  1237. || open_normal_and_derived_tables(thd, tables, 0))
  1238. goto error;
  1239. while ((var= it++))
  1240. {
  1241. if (var->light_check(thd))
  1242. goto error;
  1243. }
  1244. DBUG_RETURN(FALSE);
  1245. error:
  1246. DBUG_RETURN(TRUE);
  1247. }
  1248. /**
  1249. Validate and prepare for execution CALL statement expressions.
  1250. @param stmt prepared statement
  1251. @param tables list of tables used in this query
  1252. @param value_list list of expressions
  1253. @retval FALSE success
  1254. @retval TRUE error, error message is set in THD
  1255. */
  1256. static bool mysql_test_call_fields(Prepared_statement *stmt,
  1257. TABLE_LIST *tables,
  1258. List<Item> *value_list)
  1259. {
  1260. DBUG_ENTER("mysql_test_call_fields");
  1261. List_iterator<Item> it(*value_list);
  1262. THD *thd= stmt->thd;
  1263. Item *item;
  1264. if ((tables && check_table_access(thd, SELECT_ACL, tables, UINT_MAX, FALSE)) ||
  1265. open_normal_and_derived_tables(thd, tables, 0))
  1266. goto err;
  1267. while ((item= it++))
  1268. {
  1269. if ((!item->fixed && item->fix_fields(thd, it.ref())) ||
  1270. item->check_cols(1))
  1271. goto err;
  1272. }
  1273. DBUG_RETURN(FALSE);
  1274. err:
  1275. DBUG_RETURN(TRUE);
  1276. }
  1277. /**
  1278. Check internal SELECT of the prepared command.
  1279. @param stmt prepared statement
  1280. @param specific_prepare function of command specific prepare
  1281. @param setup_tables_done_option options to be passed to LEX::unit.prepare()
  1282. @note
1283. This function won't directly open the tables used in the select. They should
1284. be opened either by the calling function (in which case you probably
1285. should use select_like_stmt_test_with_open()) or by the
1286. "specific_prepare" call (as happens in case of multi-update).
  1287. @retval
  1288. FALSE success
  1289. @retval
  1290. TRUE error, error message is set in THD
  1291. */
  1292. static bool select_like_stmt_test(Prepared_statement *stmt,
  1293. int (*specific_prepare)(THD *thd),
  1294. ulong setup_tables_done_option)
  1295. {
  1296. DBUG_ENTER("select_like_stmt_test");
  1297. THD *thd= stmt->thd;
  1298. LEX *lex= stmt->lex;
  1299. lex->select_lex.context.resolve_in_select_list= TRUE;
  1300. if (specific_prepare && (*specific_prepare)(thd))
  1301. DBUG_RETURN(TRUE);
  1302. thd->used_tables= 0; // Updated by setup_fields
  1303. /* Calls JOIN::prepare */
  1304. DBUG_RETURN(lex->unit.prepare(thd, 0, setup_tables_done_option));
  1305. }
  1306. /**
  1307. Check internal SELECT of the prepared command (with opening of used
  1308. tables).
  1309. @param stmt prepared statement
  1310. @param tables list of tables to be opened
  1311. before calling specific_prepare function
  1312. @param specific_prepare function of command specific prepare
  1313. @param setup_tables_done_option options to be passed to LEX::unit.prepare()
  1314. @retval
  1315. FALSE success
  1316. @retval
  1317. TRUE error
  1318. */
  1319. static bool
  1320. select_like_stmt_test_with_open(Prepared_statement *stmt,
  1321. TABLE_LIST *tables,
  1322. int (*specific_prepare)(THD *thd),
  1323. ulong setup_tables_done_option)
  1324. {
  1325. DBUG_ENTER("select_like_stmt_test_with_open");
  1326. /*
  1327. We should not call LEX::unit.cleanup() after this
  1328. open_normal_and_derived_tables() call because we don't allow
1329. prepared EXPLAIN yet, so derived tables will clean up after
1330. themselves.
  1331. */
  1332. if (open_normal_and_derived_tables(stmt->thd, tables, 0))
  1333. DBUG_RETURN(TRUE);
  1334. DBUG_RETURN(select_like_stmt_test(stmt, specific_prepare,
  1335. setup_tables_done_option));
  1336. }
  1337. /**
  1338. Validate and prepare for execution CREATE TABLE statement.
  1339. @param stmt prepared statement
  1340. @param tables list of tables used in this query
  1341. @retval
  1342. FALSE success
  1343. @retval
  1344. TRUE error, error message is set in THD
  1345. */
  1346. static bool mysql_test_create_table(Prepared_statement *stmt)
  1347. {
  1348. DBUG_ENTER("mysql_test_create_table");
  1349. THD *thd= stmt->thd;
  1350. LEX *lex= stmt->lex;
  1351. SELECT_LEX *select_lex= &lex->select_lex;
  1352. bool res= FALSE;
  1353. /* Skip first table, which is the table we are creating */
  1354. bool link_to_local;
  1355. TABLE_LIST *create_table= lex->unlink_first_table(&link_to_local);
  1356. TABLE_LIST *tables= lex->query_tables;
  1357. if (create_table_precheck(thd, tables, create_table))
  1358. DBUG_RETURN(TRUE);
  1359. if (select_lex->item_list.elements)
  1360. {
  1361. if (!(lex->create_info.options & HA_LEX_CREATE_TMP_TABLE))
  1362. {
  1363. lex->link_first_table_back(create_table, link_to_local);
  1364. create_table->create= TRUE;
  1365. }
  1366. if (open_normal_and_derived_tables(stmt->thd, lex->query_tables, 0))
  1367. DBUG_RETURN(TRUE);
  1368. if (!(lex->create_info.options & HA_LEX_CREATE_TMP_TABLE))
  1369. create_table= lex->unlink_first_table(&link_to_local);
  1370. select_lex->context.resolve_in_select_list= TRUE;
  1371. res= select_like_stmt_test(stmt, 0, 0);
  1372. }
  1373. else if (lex->create_info.options & HA_LEX_CREATE_TABLE_LIKE)
  1374. {
  1375. /*
1376. Check that the source table exists, and also record
  1377. its metadata version. Even though not strictly necessary,
  1378. we validate metadata of all CREATE TABLE statements,
  1379. which keeps metadata validation code simple.
  1380. */
  1381. if (open_normal_and_derived_tables(stmt->thd, lex->query_tables, 0))
  1382. DBUG_RETURN(TRUE);
  1383. }
1384. /* put tables back for PS re-execution */
  1385. lex->link_first_table_back(create_table, link_to_local);
  1386. DBUG_RETURN(res);
  1387. }
  1388. /**
  1389. @brief Validate and prepare for execution CREATE VIEW statement
  1390. @param stmt prepared statement
  1391. @note This function handles create view commands.
  1392. @retval FALSE Operation was a success.
1393. @retval TRUE An error occurred.
  1394. */
  1395. static bool mysql_test_create_view(Prepared_statement *stmt)
  1396. {
  1397. DBUG_ENTER("mysql_test_create_view");
  1398. THD *thd= stmt->thd;
  1399. LEX *lex= stmt->lex;
  1400. bool res= TRUE;
  1401. /* Skip first table, which is the view we are creating */
  1402. bool link_to_local;
  1403. TABLE_LIST *view= lex->unlink_first_table(&link_to_local);
  1404. TABLE_LIST *tables= lex->query_tables;
  1405. if (create_view_precheck(thd, tables, view, lex->create_view_mode))
  1406. goto err;
  1407. if (open_normal_and_derived_tables(thd, tables, 0))
  1408. goto err;
  1409. lex->view_prepare_mode= 1;
  1410. res= select_like_stmt_test(stmt, 0, 0);
  1411. err:
1412. /* put view back for PS re-execution */
  1413. lex->link_first_table_back(view, link_to_local);
  1414. DBUG_RETURN(res);
  1415. }
1416. /**
  1417. Validate and prepare for execution a multi update statement.
  1418. @param stmt prepared statement
  1419. @param tables list of tables used in this query
  1420. @param converted converted to multi-update from usual update
  1421. @retval
  1422. FALSE success
  1423. @retval
  1424. TRUE error, error message is set in THD
  1425. */
  1426. static bool mysql_test_multiupdate(Prepared_statement *stmt,
  1427. TABLE_LIST *tables,
  1428. bool converted)
  1429. {
1430. /* if we switched from a normal update, privileges have already been checked */
  1431. if (!converted && multi_update_precheck(stmt->thd, tables))
  1432. return TRUE;
  1433. return select_like_stmt_test(stmt, &mysql_multi_update_prepare,
  1434. OPTION_SETUP_TABLES_DONE);
  1435. }
  1436. /**
  1437. Validate and prepare for execution a multi delete statement.
  1438. @param stmt prepared statement
  1439. @param tables list of tables used in this query
  1440. @retval
  1441. FALSE success
  1442. @retval
  1443. TRUE error, error message in THD is set.
  1444. */
  1445. static bool mysql_test_multidelete(Prepared_statement *stmt,
  1446. TABLE_LIST *tables)
  1447. {
  1448. stmt->thd->lex->current_select= &stmt->thd->lex->select_lex;
  1449. if (add_item_to_list(stmt->thd, new Item_null()))
  1450. {
  1451. my_error(ER_OUTOFMEMORY, MYF(0), 0);
  1452. goto error;
  1453. }
  1454. if (multi_delete_precheck(stmt->thd, tables) ||
  1455. select_like_stmt_test_with_open(stmt, tables,
  1456. &mysql_multi_delete_prepare,
  1457. OPTION_SETUP_TABLES_DONE))
  1458. goto error;
  1459. if (!tables->table)
  1460. {
  1461. my_error(ER_VIEW_DELETE_MERGE_VIEW, MYF(0),
  1462. tables->view_db.str, tables->view_name.str);
  1463. goto error;
  1464. }
  1465. return FALSE;
  1466. error:
  1467. return TRUE;
  1468. }
  1469. /**
  1470. Wrapper for mysql_insert_select_prepare, to make change of local tables
  1471. after open_normal_and_derived_tables() call.
  1472. @param thd thread handle
  1473. @note
  1474. We need to remove the first local table after
  1475. open_normal_and_derived_tables(), because mysql_handle_derived
  1476. uses local tables lists.
  1477. */
  1478. static int mysql_insert_select_prepare_tester(THD *thd)
  1479. {
  1480. SELECT_LEX *first_select= &thd->lex->select_lex;
  1481. TABLE_LIST *second_table= ((TABLE_LIST*)first_select->table_list.first)->
  1482. next_local;
  1483. /* Skip first table, which is the table we are inserting in */
  1484. first_select->table_list.first= (uchar *) second_table;
  1485. thd->lex->select_lex.context.table_list=
  1486. thd->lex->select_lex.context.first_name_resolution_table= second_table;
  1487. return mysql_insert_select_prepare(thd);
  1488. }
  1489. /**
  1490. Validate and prepare for execution INSERT ... SELECT statement.
  1491. @param stmt prepared statement
  1492. @param tables list of tables used in this query
  1493. @retval
  1494. FALSE success
  1495. @retval
  1496. TRUE error, error message is set in THD
  1497. */
  1498. static bool mysql_test_insert_select(Prepared_statement *stmt,
  1499. TABLE_LIST *tables)
  1500. {
  1501. int res;
  1502. LEX *lex= stmt->lex;
  1503. TABLE_LIST *first_local_table;
  1504. if (tables->table)
  1505. {
  1506. // don't allocate insert_values
  1507. tables->table->insert_values=(uchar *)1;
  1508. }
  1509. if (insert_precheck(stmt->thd, tables))
  1510. return 1;
1511. /* store it, because mysql_insert_select_prepare_tester changes it */
  1512. first_local_table= (TABLE_LIST *)lex->select_lex.table_list.first;
  1513. DBUG_ASSERT(first_local_table != 0);
  1514. res=
  1515. select_like_stmt_test_with_open(stmt, tables,
  1516. &mysql_insert_select_prepare_tester,
  1517. OPTION_SETUP_TABLES_DONE);
  1518. /* revert changes made by mysql_insert_select_prepare_tester */
  1519. lex->select_lex.table_list.first= (uchar*) first_local_table;
  1520. return res;
  1521. }
  1522. /**
  1523. Perform semantic analysis of the parsed tree and send a response packet
  1524. to the client.
  1525. This function
  1526. - opens all tables and checks access rights
  1527. - validates semantics of statement columns and SQL functions
  1528. by calling fix_fields.
  1529. @param stmt prepared statement
  1530. @retval
  1531. FALSE success, statement metadata is sent to client
  1532. @retval
  1533. TRUE error, error message is set in THD (but not sent)
  1534. */
  1535. static bool check_prepared_statement(Prepared_statement *stmt)
  1536. {
  1537. THD *thd= stmt->thd;
  1538. LEX *lex= stmt->lex;
  1539. SELECT_LEX *select_lex= &lex->select_lex;
  1540. TABLE_LIST *tables;
  1541. enum enum_sql_command sql_command= lex->sql_command;
  1542. int res= 0;
  1543. DBUG_ENTER("check_prepared_statement");
  1544. DBUG_PRINT("enter",("command: %d param_count: %u",
  1545. sql_command, stmt->param_count));
  1546. lex->first_lists_tables_same();
  1547. tables= lex->query_tables;
  1548. /* set context for commands which do not use setup_tables */
  1549. lex->select_lex.context.resolve_in_table_list_only(select_lex->
  1550. get_table_list());
  1551. /* Reset warning count for each query that uses tables */
  1552. if (tables)
  1553. thd->warning_info->opt_clear_warning_info(thd->query_id);
  1554. switch (sql_command) {
  1555. case SQLCOM_REPLACE:
  1556. case SQLCOM_INSERT:
  1557. res= mysql_test_insert(stmt, tables, lex->field_list,
  1558. lex->many_values,
  1559. lex->update_list, lex->value_list,
  1560. lex->duplicates);
  1561. break;
  1562. case SQLCOM_UPDATE:
  1563. res= mysql_test_update(stmt, tables);
  1564. /* mysql_test_update returns 2 if we need to switch to multi-update */
  1565. if (res != 2)
  1566. break;
  1567. case SQLCOM_UPDATE_MULTI:
  1568. res= mysql_test_multiupdate(stmt, tables, res == 2);
  1569. break;
  1570. case SQLCOM_DELETE:
  1571. res= mysql_test_delete(stmt, tables);
  1572. break;
1573. /* The following allow a WHERE clause, so they must be tested like SELECT */
  1574. case SQLCOM_SHOW_DATABASES:
  1575. case SQLCOM_SHOW_TABLES:
  1576. case SQLCOM_SHOW_TRIGGERS:
  1577. case SQLCOM_SHOW_EVENTS:
  1578. case SQLCOM_SHOW_OPEN_TABLES:
  1579. case SQLCOM_SHOW_FIELDS:
  1580. case SQLCOM_SHOW_KEYS:
  1581. case SQLCOM_SHOW_COLLATIONS:
  1582. case SQLCOM_SHOW_CHARSETS:
  1583. case SQLCOM_SHOW_VARIABLES:
  1584. case SQLCOM_SHOW_STATUS:
  1585. case SQLCOM_SHOW_TABLE_STATUS:
  1586. case SQLCOM_SHOW_STATUS_PROC:
  1587. case SQLCOM_SHOW_STATUS_FUNC:
  1588. case SQLCOM_SELECT:
  1589. res= mysql_test_select(stmt, tables);
  1590. if (res == 2)
  1591. {
  1592. /* Statement and field info has already been sent */
  1593. DBUG_RETURN(FALSE);
  1594. }
  1595. break;
  1596. case SQLCOM_CREATE_TABLE:
  1597. res= mysql_test_create_table(stmt);
  1598. break;
  1599. case SQLCOM_CREATE_VIEW:
  1600. if (lex->create_view_mode == VIEW_ALTER)
  1601. {
  1602. my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
  1603. goto error;
  1604. }
  1605. res= mysql_test_create_view(stmt);
  1606. break;
  1607. case SQLCOM_DO:
  1608. res= mysql_test_do_fields(stmt, tables, lex->insert_list);
  1609. break;
  1610. case SQLCOM_CALL:
  1611. res= mysql_test_call_fields(stmt, tables, &lex->value_list);
  1612. break;
  1613. case SQLCOM_SET_OPTION:
  1614. res= mysql_test_set_fields(stmt, tables, &lex->var_list);
  1615. break;
  1616. case SQLCOM_DELETE_MULTI:
  1617. res= mysql_test_multidelete(stmt, tables);
  1618. break;
  1619. case SQLCOM_INSERT_SELECT:
  1620. case SQLCOM_REPLACE_SELECT:
  1621. res= mysql_test_insert_select(stmt, tables);
  1622. break;
  1623. /*
  1624. Note that we don't need to have cases in this list if they are
  1625. marked with CF_STATUS_COMMAND in sql_command_flags
  1626. */
  1627. case SQLCOM_SHOW_PROCESSLIST:
  1628. case SQLCOM_SHOW_STORAGE_ENGINES:
  1629. case SQLCOM_SHOW_PRIVILEGES:
  1630. case SQLCOM_SHOW_COLUMN_TYPES:
  1631. case SQLCOM_SHOW_ENGINE_LOGS:
  1632. case SQLCOM_SHOW_ENGINE_STATUS:
  1633. case SQLCOM_SHOW_ENGINE_MUTEX:
  1634. case SQLCOM_SHOW_CREATE_DB:
  1635. case SQLCOM_SHOW_GRANTS:
  1636. case SQLCOM_SHOW_BINLOG_EVENTS:
  1637. case SQLCOM_SHOW_MASTER_STAT:
  1638. case SQLCOM_SHOW_SLAVE_STAT:
  1639. case SQLCOM_SHOW_CREATE_PROC:
  1640. case SQLCOM_SHOW_CREATE_FUNC:
  1641. case SQLCOM_SHOW_CREATE_EVENT:
  1642. case SQLCOM_SHOW_CREATE_TRIGGER:
  1643. case SQLCOM_SHOW_CREATE:
  1644. case SQLCOM_SHOW_PROC_CODE:
  1645. case SQLCOM_SHOW_FUNC_CODE:
  1646. case SQLCOM_SHOW_AUTHORS:
  1647. case SQLCOM_SHOW_CONTRIBUTORS:
  1648. case SQLCOM_SHOW_WARNS:
  1649. case SQLCOM_SHOW_ERRORS:
  1650. case SQLCOM_SHOW_BINLOGS:
  1651. case SQLCOM_DROP_TABLE:
  1652. case SQLCOM_RENAME_TABLE:
  1653. case SQLCOM_ALTER_TABLE:
  1654. case SQLCOM_COMMIT:
  1655. case SQLCOM_CREATE_INDEX:
  1656. case SQLCOM_DROP_INDEX:
  1657. case SQLCOM_ROLLBACK:
  1658. case SQLCOM_TRUNCATE:
  1659. case SQLCOM_DROP_VIEW:
  1660. case SQLCOM_REPAIR:
  1661. case SQLCOM_ANALYZE:
  1662. case SQLCOM_OPTIMIZE:
  1663. case SQLCOM_CHANGE_MASTER:
  1664. case SQLCOM_RESET:
  1665. case SQLCOM_FLUSH:
  1666. case SQLCOM_SLAVE_START:
  1667. case SQLCOM_SLAVE_STOP:
  1668. case SQLCOM_INSTALL_PLUGIN:
  1669. case SQLCOM_UNINSTALL_PLUGIN:
  1670. case SQLCOM_CREATE_DB:
  1671. case SQLCOM_DROP_DB:
  1672. case SQLCOM_ALTER_DB_UPGRADE:
  1673. case SQLCOM_CHECKSUM:
  1674. case SQLCOM_CREATE_USER:
  1675. case SQLCOM_RENAME_USER:
  1676. case SQLCOM_DROP_USER:
  1677. case SQLCOM_ASSIGN_TO_KEYCACHE:
  1678. case SQLCOM_PRELOAD_KEYS:
  1679. case SQLCOM_GRANT:
  1680. case SQLCOM_REVOKE:
  1681. case SQLCOM_KILL:
  1682. break;
  1683. case SQLCOM_PREPARE:
  1684. case SQLCOM_EXECUTE:
  1685. case SQLCOM_DEALLOCATE_PREPARE:
  1686. default:
  1687. /*
  1688. Trivial check of all status commands. This is easier than having
1689. them in the above case list, as there is less chance for mistakes.
  1690. */
  1691. if (!(sql_command_flags[sql_command] & CF_STATUS_COMMAND))
  1692. {
  1693. /* All other statements are not supported yet. */
  1694. my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
  1695. goto error;
  1696. }
  1697. break;
  1698. }
  1699. if (res == 0)
  1700. DBUG_RETURN(stmt->is_sql_prepare() ?
  1701. FALSE : (send_prep_stmt(stmt, 0) || thd->protocol->flush()));
  1702. error:
  1703. DBUG_RETURN(TRUE);
  1704. }
  1705. /**
  1706. Initialize array of parameters in statement from LEX.
  1707. (We need to have quick access to items by number in mysql_stmt_get_longdata).
  1708. This is to avoid using malloc/realloc in the parser.
  1709. */
  1710. static bool init_param_array(Prepared_statement *stmt)
  1711. {
  1712. LEX *lex= stmt->lex;
  1713. if ((stmt->param_count= lex->param_list.elements))
  1714. {
  1715. if (stmt->param_count > (uint) UINT_MAX16)
  1716. {
  1717. /* Error code to be defined in 5.0 */
  1718. my_message(ER_PS_MANY_PARAM, ER(ER_PS_MANY_PARAM), MYF(0));
  1719. return TRUE;
  1720. }
  1721. Item_param **to;
  1722. List_iterator<Item_param> param_iterator(lex->param_list);
  1723. /* Use thd->mem_root as it points at statement mem_root */
  1724. stmt->param_array= (Item_param **)
  1725. alloc_root(stmt->thd->mem_root,
  1726. sizeof(Item_param*) * stmt->param_count);
  1727. if (!stmt->param_array)
  1728. return TRUE;
  1729. for (to= stmt->param_array;
  1730. to < stmt->param_array + stmt->param_count;
  1731. ++to)
  1732. {
  1733. *to= param_iterator++;
  1734. }
  1735. }
  1736. return FALSE;
  1737. }
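/*
  Illustrative note (not part of the original source): for a statement such as

    PREPARE s FROM 'SELECT * FROM t1 WHERE a = ? AND b = ?';

  the parser creates one Item_param per '?' marker and links it into
  lex->param_list, so after init_param_array() stmt->param_count == 2 and
  stmt->param_array[0] / stmt->param_array[1] point at the two markers in
  their order of appearance. mysql_stmt_get_longdata() can then address a
  marker directly by its ordinal number.
*/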
  1738. /**
  1739. COM_STMT_PREPARE handler.
  1740. Given a query string with parameter markers, create a prepared
  1741. statement from it and send PS info back to the client.
  1742. If parameter markers are found in the query, then store the information
  1743. using Item_param along with maintaining a list in lex->param_array, so
  1744. that a fast and direct retrieval can be made without going through all
  1745. field items.
  1746. @param packet query to be prepared
  1747. @param packet_length query string length, including ignored
  1748. trailing NULL or quote char.
  1749. @note
  1750. This function parses the query and sends the total number of parameters
1751. and result set metadata information back to the client (if any), without
1752. executing the query, i.e. without any log/disk writes. This allows the
1753. query to be re-executed without re-parsing during execute.
1754. @return
1755. none: in case of success a new statement id and metadata are sent
  1756. to the client, otherwise an error message is set in THD.
  1757. */
  1758. void mysqld_stmt_prepare(THD *thd, const char *packet, uint packet_length)
  1759. {
  1760. Protocol *save_protocol= thd->protocol;
  1761. Prepared_statement *stmt;
  1762. bool error;
  1763. DBUG_ENTER("mysqld_stmt_prepare");
  1764. DBUG_PRINT("prep_query", ("%s", packet));
  1765. /* First of all clear possible warnings from the previous command */
  1766. mysql_reset_thd_for_next_command(thd);
  1767. if (! (stmt= new Prepared_statement(thd)))
  1768. DBUG_VOID_RETURN; /* out of memory: error is set in Sql_alloc */
  1769. if (thd->stmt_map.insert(thd, stmt))
  1770. {
  1771. /*
  1772. The error is set in the insert. The statement itself
  1773. will be also deleted there (this is how the hash works).
  1774. */
  1775. DBUG_VOID_RETURN;
  1776. }
  1777. sp_cache_flush_obsolete(&thd->sp_proc_cache);
  1778. sp_cache_flush_obsolete(&thd->sp_func_cache);
  1779. thd->protocol= &thd->protocol_binary;
  1780. if (!(specialflag & SPECIAL_NO_PRIOR))
  1781. my_pthread_setprio(pthread_self(),QUERY_PRIOR);
  1782. error= stmt->prepare(packet, packet_length);
  1783. if (!(specialflag & SPECIAL_NO_PRIOR))
  1784. my_pthread_setprio(pthread_self(),WAIT_PRIOR);
  1785. if (error)
  1786. {
  1787. /* Statement map deletes statement on erase */
  1788. thd->stmt_map.erase(stmt);
  1789. }
  1790. thd->protocol= save_protocol;
1791. /* check_prepared_statement sends the metadata packet in case of success */
  1792. DBUG_VOID_RETURN;
  1793. }
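/*
  Client-side sketch (illustrative only, not part of this file):
  COM_STMT_PREPARE is what the C API sends from mysql_stmt_prepare().
  Assuming an already connected MYSQL handle 'mysql':

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    if (stmt != NULL)
    {
      if (mysql_stmt_prepare(stmt, "SELECT a FROM t1 WHERE b = ?", 28) == 0)
      {
        // Server replied with the statement id, parameter count and the
        // result set metadata produced by check_prepared_statement().
        unsigned long params= mysql_stmt_param_count(stmt);  // 1
        (void) params;
      }
      mysql_stmt_close(stmt);            // sends COM_STMT_CLOSE
    }
*/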
  1794. /**
  1795. Get an SQL statement text from a user variable or from plain text.
  1796. If the statement is plain text, just assign the
  1797. pointers, otherwise allocate memory in thd->mem_root and copy
  1798. the contents of the variable, possibly with character
  1799. set conversion.
  1800. @param[in] lex main lex
  1801. @param[out] query_len length of the SQL statement (is set only
  1802. in case of success)
  1803. @retval
  1804. non-zero success
  1805. @retval
  1806. 0 in case of error (out of memory)
  1807. */
  1808. static const char *get_dynamic_sql_string(LEX *lex, uint *query_len)
  1809. {
  1810. THD *thd= lex->thd;
  1811. char *query_str= 0;
  1812. if (lex->prepared_stmt_code_is_varref)
  1813. {
  1814. /* This is PREPARE stmt FROM or EXECUTE IMMEDIATE @var. */
  1815. String str;
  1816. CHARSET_INFO *to_cs= thd->variables.collation_connection;
  1817. bool needs_conversion;
  1818. user_var_entry *entry;
  1819. String *var_value= &str;
  1820. uint32 unused, len;
  1821. /*
  1822. Convert @var contents to string in connection character set. Although
  1823. it is known that int/real/NULL value cannot be a valid query we still
  1824. convert it for error messages to be uniform.
  1825. */
  1826. if ((entry=
  1827. (user_var_entry*)hash_search(&thd->user_vars,
  1828. (uchar*)lex->prepared_stmt_code.str,
  1829. lex->prepared_stmt_code.length))
  1830. && entry->value)
  1831. {
  1832. my_bool is_var_null;
  1833. var_value= entry->val_str(&is_var_null, &str, NOT_FIXED_DEC);
  1834. /*
1835. The NULL value of the variable was checked earlier (entry->value), so here
1836. we can't get NULL under normal conditions
  1837. */
  1838. DBUG_ASSERT(!is_var_null);
  1839. if (!var_value)
  1840. goto end;
  1841. }
  1842. else
  1843. {
  1844. /*
1845. The variable is absent or equal to NULL, so we need to substitute
1846. something reasonable to get a readable error message during parsing
  1847. */
  1848. str.set(STRING_WITH_LEN("NULL"), &my_charset_latin1);
  1849. }
  1850. needs_conversion= String::needs_conversion(var_value->length(),
  1851. var_value->charset(), to_cs,
  1852. &unused);
  1853. len= (needs_conversion ? var_value->length() * to_cs->mbmaxlen :
  1854. var_value->length());
  1855. if (!(query_str= (char*) alloc_root(thd->mem_root, len+1)))
  1856. goto end;
  1857. if (needs_conversion)
  1858. {
  1859. uint dummy_errors;
  1860. len= copy_and_convert(query_str, len, to_cs, var_value->ptr(),
  1861. var_value->length(), var_value->charset(),
  1862. &dummy_errors);
  1863. }
  1864. else
  1865. memcpy(query_str, var_value->ptr(), var_value->length());
  1866. query_str[len]= '\0'; // Safety (mostly for debug)
  1867. *query_len= len;
  1868. }
  1869. else
  1870. {
  1871. query_str= lex->prepared_stmt_code.str;
  1872. *query_len= lex->prepared_stmt_code.length;
  1873. }
  1874. end:
  1875. return query_str;
  1876. }
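/*
  Illustration (not part of the original file): the
  lex->prepared_stmt_code_is_varref branch above serves

    SET @s= 'SELECT 1';
    PREPARE stmt FROM @s;

  where the statement text is taken from the user variable and may need
  conversion to the connection character set, while

    PREPARE stmt FROM 'SELECT 1';

  takes the plain-text branch and simply reuses lex->prepared_stmt_code.
*/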
  1877. /** Init PS/SP specific parse tree members. */
  1878. static void init_stmt_after_parse(LEX *lex)
  1879. {
  1880. SELECT_LEX *sl= lex->all_selects_list;
  1881. /*
  1882. Switch off a temporary flag that prevents evaluation of
  1883. subqueries in statement prepare.
  1884. */
  1885. for (; sl; sl= sl->next_select_in_list())
  1886. sl->uncacheable&= ~UNCACHEABLE_PREPARE;
  1887. }
  1888. /**
  1889. SQLCOM_PREPARE implementation.
  1890. Prepare an SQL prepared statement. This is called from
  1891. mysql_execute_command and should therefore behave like an
  1892. ordinary query (e.g. should not reset any global THD data).
  1893. @param thd thread handle
  1894. @return
  1895. none: in case of success, OK packet is sent to the client,
  1896. otherwise an error message is set in THD
  1897. */
  1898. void mysql_sql_stmt_prepare(THD *thd)
  1899. {
  1900. LEX *lex= thd->lex;
  1901. LEX_STRING *name= &lex->prepared_stmt_name;
  1902. Prepared_statement *stmt;
  1903. const char *query;
  1904. uint query_len= 0;
  1905. DBUG_ENTER("mysql_sql_stmt_prepare");
  1906. if ((stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
  1907. {
  1908. /*
  1909. If there is a statement with the same name, remove it. It is ok to
1910. remove the old one and fail to insert a new one at the same time.
  1911. */
  1912. if (stmt->is_in_use())
  1913. {
  1914. my_error(ER_PS_NO_RECURSION, MYF(0));
  1915. DBUG_VOID_RETURN;
  1916. }
  1917. stmt->deallocate();
  1918. }
  1919. if (! (query= get_dynamic_sql_string(lex, &query_len)) ||
  1920. ! (stmt= new Prepared_statement(thd)))
  1921. {
  1922. DBUG_VOID_RETURN; /* out of memory */
  1923. }
  1924. stmt->set_sql_prepare();
  1925. /* Set the name first, insert should know that this statement has a name */
  1926. if (stmt->set_name(name))
  1927. {
  1928. delete stmt;
  1929. DBUG_VOID_RETURN;
  1930. }
  1931. if (thd->stmt_map.insert(thd, stmt))
  1932. {
  1933. /* The statement is deleted and an error is set if insert fails */
  1934. DBUG_VOID_RETURN;
  1935. }
  1936. if (stmt->prepare(query, query_len))
  1937. {
  1938. /* Statement map deletes the statement on erase */
  1939. thd->stmt_map.erase(stmt);
  1940. }
  1941. else
  1942. my_ok(thd, 0L, 0L, "Statement prepared");
  1943. DBUG_VOID_RETURN;
  1944. }
  1945. /**
  1946. Reinit prepared statement/stored procedure before execution.
  1947. @todo
  1948. When the new table structure is ready, then have a status bit
  1949. to indicate the table is altered, and re-do the setup_*
  1950. and open the tables back.
  1951. */
  1952. void reinit_stmt_before_use(THD *thd, LEX *lex)
  1953. {
  1954. SELECT_LEX *sl= lex->all_selects_list;
  1955. DBUG_ENTER("reinit_stmt_before_use");
  1956. /*
1957. We have to update the "thd" pointer in LEX, all its units and in LEX::result,
1958. since statements which belong to a trigger body are associated with the TABLE
1959. object and because of this can be used in different threads.
  1960. */
  1961. lex->thd= thd;
  1962. if (lex->empty_field_list_on_rset)
  1963. {
  1964. lex->empty_field_list_on_rset= 0;
  1965. lex->field_list.empty();
  1966. }
  1967. for (; sl; sl= sl->next_select_in_list())
  1968. {
  1969. if (!sl->first_execution)
  1970. {
  1971. /* remove option which was put by mysql_explain_union() */
  1972. sl->options&= ~SELECT_DESCRIBE;
  1973. /* see unique_table() */
  1974. sl->exclude_from_table_unique_test= FALSE;
  1975. /*
  1976. Copy WHERE, HAVING clause pointers to avoid damaging them
  1977. by optimisation
  1978. */
  1979. if (sl->prep_where)
  1980. {
  1981. sl->where= sl->prep_where->copy_andor_structure(thd);
  1982. sl->where->cleanup();
  1983. }
  1984. if (sl->prep_having)
  1985. {
  1986. sl->having= sl->prep_having->copy_andor_structure(thd);
  1987. sl->having->cleanup();
  1988. }
  1989. DBUG_ASSERT(sl->join == 0);
  1990. ORDER *order;
  1991. /* Fix GROUP list */
  1992. for (order= (ORDER *)sl->group_list.first; order; order= order->next)
  1993. order->item= &order->item_ptr;
  1994. /* Fix ORDER list */
  1995. for (order= (ORDER *)sl->order_list.first; order; order= order->next)
  1996. order->item= &order->item_ptr;
  1997. }
  1998. {
  1999. SELECT_LEX_UNIT *unit= sl->master_unit();
  2000. unit->unclean();
  2001. unit->types.empty();
  2002. /* for derived tables & PS (which can't be reset by Item_subquery) */
  2003. unit->reinit_exec_mechanism();
  2004. unit->set_thd(thd);
  2005. }
  2006. }
  2007. /*
  2008. TODO: When the new table structure is ready, then have a status bit
  2009. to indicate the table is altered, and re-do the setup_*
  2010. and open the tables back.
  2011. */
  2012. /*
  2013. NOTE: We should reset whole table list here including all tables added
  2014. by prelocking algorithm (it is not a problem for substatements since
  2015. they have their own table list).
  2016. */
  2017. for (TABLE_LIST *tables= lex->query_tables;
  2018. tables;
  2019. tables= tables->next_global)
  2020. {
  2021. tables->reinit_before_use(thd);
  2022. }
  2023. /*
  2024. Cleanup of the special case of DELETE t1, t2 FROM t1, t2, t3 ...
  2025. (multi-delete). We do a full clean up, although at the moment all we
  2026. need to clean in the tables of MULTI-DELETE list is 'table' member.
  2027. */
  2028. for (TABLE_LIST *tables= (TABLE_LIST*) lex->auxiliary_table_list.first;
  2029. tables;
  2030. tables= tables->next_global)
  2031. {
  2032. tables->reinit_before_use(thd);
  2033. }
  2034. lex->current_select= &lex->select_lex;
  2035. /* restore original list used in INSERT ... SELECT */
  2036. if (lex->leaf_tables_insert)
  2037. lex->select_lex.leaf_tables= lex->leaf_tables_insert;
  2038. if (lex->result)
  2039. {
  2040. lex->result->cleanup();
  2041. lex->result->set_thd(thd);
  2042. }
  2043. lex->allow_sum_func= 0;
  2044. lex->in_sum_func= NULL;
  2045. DBUG_VOID_RETURN;
  2046. }
  2047. /**
  2048. Clears parameters from data left from previous execution or long data.
  2049. @param stmt prepared statement for which parameters should
  2050. be reset
  2051. */
  2052. static void reset_stmt_params(Prepared_statement *stmt)
  2053. {
  2054. Item_param **item= stmt->param_array;
  2055. Item_param **end= item + stmt->param_count;
  2056. for (;item < end ; ++item)
  2057. (**item).reset();
  2058. }
  2059. /**
  2060. COM_STMT_EXECUTE handler: execute a previously prepared statement.
  2061. If there are any parameters, then replace parameter markers with the
  2062. data supplied from the client, and then execute the statement.
  2063. This function uses binary protocol to send a possible result set
  2064. to the client.
  2065. @param thd current thread
  2066. @param packet_arg parameter types and data, if any
  2067. @param packet_length packet length, including the terminator character.
  2068. @return
  2069. none: in case of success OK packet or a result set is sent to the
  2070. client, otherwise an error message is set in THD.
  2071. */
  2072. void mysqld_stmt_execute(THD *thd, char *packet_arg, uint packet_length)
  2073. {
  2074. uchar *packet= (uchar*)packet_arg; // GCC 4.0.1 workaround
  2075. ulong stmt_id= uint4korr(packet);
  2076. ulong flags= (ulong) packet[4];
  2077. /* Query text for binary, general or slow log, if any of them is open */
  2078. String expanded_query;
  2079. uchar *packet_end= packet + packet_length;
  2080. Prepared_statement *stmt;
  2081. Protocol *save_protocol= thd->protocol;
  2082. bool open_cursor;
  2083. DBUG_ENTER("mysqld_stmt_execute");
2084. packet+= 9; /* stmt_id (4 bytes), flags (1 byte), iteration count (4 bytes) */
  2085. /* First of all clear possible warnings from the previous command */
  2086. mysql_reset_thd_for_next_command(thd);
  2087. if (!(stmt= find_prepared_statement(thd, stmt_id)))
  2088. {
  2089. char llbuf[22];
  2090. my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), sizeof(llbuf),
  2091. llstr(stmt_id, llbuf), "mysqld_stmt_execute");
  2092. DBUG_VOID_RETURN;
  2093. }
  2094. #if defined(ENABLED_PROFILING) && defined(COMMUNITY_SERVER)
  2095. thd->profiling.set_query_source(stmt->query, stmt->query_length);
  2096. #endif
  2097. DBUG_PRINT("exec_query", ("%s", stmt->query));
  2098. DBUG_PRINT("info",("stmt: 0x%lx", (long) stmt));
  2099. sp_cache_flush_obsolete(&thd->sp_proc_cache);
  2100. sp_cache_flush_obsolete(&thd->sp_func_cache);
  2101. open_cursor= test(flags & (ulong) CURSOR_TYPE_READ_ONLY);
  2102. thd->protocol= &thd->protocol_binary;
  2103. stmt->execute_loop(&expanded_query, open_cursor, packet, packet_end);
  2104. thd->protocol= save_protocol;
  2105. /* Close connection socket; for use with client testing (Bug#43560). */
  2106. DBUG_EXECUTE_IF("close_conn_after_stmt_execute", vio_close(thd->net.vio););
  2107. DBUG_VOID_RETURN;
  2108. }
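/*
  Client-side sketch (illustrative only, not part of this file): the packet
  handled above is produced by mysql_stmt_execute() once parameters have
  been bound. Assuming 'stmt' is a MYSQL_STMT prepared with a single '?'
  marker:

    MYSQL_BIND bind[1];
    int value= 42;
    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type= MYSQL_TYPE_LONG;
    bind[0].buffer= (void*) &value;
    if (mysql_stmt_bind_param(stmt, bind) == 0 &&
        mysql_stmt_execute(stmt) == 0)
    {
      // The server ran Prepared_statement::execute_loop() and returned
      // either an OK packet or a binary-protocol result set.
    }
*/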
  2109. /**
  2110. SQLCOM_EXECUTE implementation.
  2111. Execute prepared statement using parameter values from
  2112. lex->prepared_stmt_params and send result to the client using
  2113. text protocol. This is called from mysql_execute_command and
  2114. therefore should behave like an ordinary query (e.g. not change
  2115. global THD data, such as warning count, server status, etc).
  2116. This function uses text protocol to send a possible result set.
  2117. @param thd thread handle
  2118. @return
  2119. none: in case of success, OK (or result set) packet is sent to the
  2120. client, otherwise an error is set in THD
  2121. */
  2122. void mysql_sql_stmt_execute(THD *thd)
  2123. {
  2124. LEX *lex= thd->lex;
  2125. Prepared_statement *stmt;
  2126. LEX_STRING *name= &lex->prepared_stmt_name;
  2127. /* Query text for binary, general or slow log, if any of them is open */
  2128. String expanded_query;
  2129. DBUG_ENTER("mysql_sql_stmt_execute");
  2130. DBUG_PRINT("info", ("EXECUTE: %.*s\n", (int) name->length, name->str));
  2131. if (!(stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
  2132. {
  2133. my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0),
  2134. name->length, name->str, "EXECUTE");
  2135. DBUG_VOID_RETURN;
  2136. }
  2137. if (stmt->param_count != lex->prepared_stmt_params.elements)
  2138. {
  2139. my_error(ER_WRONG_ARGUMENTS, MYF(0), "EXECUTE");
  2140. DBUG_VOID_RETURN;
  2141. }
  2142. DBUG_PRINT("info",("stmt: 0x%lx", (long) stmt));
  2143. (void) stmt->execute_loop(&expanded_query, FALSE, NULL, NULL);
  2144. DBUG_VOID_RETURN;
  2145. }
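/*
  Illustration (not part of the original file), the SQL counterpart of the
  binary protocol path above:

    PREPARE stmt FROM 'SELECT a FROM t1 WHERE b = ?';
    SET @b= 10;
    EXECUTE stmt USING @b;
    DEALLOCATE PREPARE stmt;

  In this path parameter values can only come from user variables, which is
  exactly what lex->prepared_stmt_params holds.
*/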
  2146. /**
  2147. COM_STMT_FETCH handler: fetches requested amount of rows from cursor.
  2148. @param thd Thread handle
  2149. @param packet Packet from client (with stmt_id & num_rows)
  2150. @param packet_length Length of packet
  2151. */
  2152. void mysqld_stmt_fetch(THD *thd, char *packet, uint packet_length)
  2153. {
2154. /* assume there is always room for 8-16 bytes */
  2155. ulong stmt_id= uint4korr(packet);
  2156. ulong num_rows= uint4korr(packet+4);
  2157. Prepared_statement *stmt;
  2158. Statement stmt_backup;
  2159. Server_side_cursor *cursor;
  2160. DBUG_ENTER("mysqld_stmt_fetch");
  2161. /* First of all clear possible warnings from the previous command */
  2162. mysql_reset_thd_for_next_command(thd);
  2163. status_var_increment(thd->status_var.com_stmt_fetch);
  2164. if (!(stmt= find_prepared_statement(thd, stmt_id)))
  2165. {
  2166. char llbuf[22];
  2167. my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), sizeof(llbuf),
  2168. llstr(stmt_id, llbuf), "mysqld_stmt_fetch");
  2169. DBUG_VOID_RETURN;
  2170. }
  2171. cursor= stmt->cursor;
  2172. if (!cursor)
  2173. {
  2174. my_error(ER_STMT_HAS_NO_OPEN_CURSOR, MYF(0), stmt_id);
  2175. DBUG_VOID_RETURN;
  2176. }
  2177. thd->stmt_arena= stmt;
  2178. thd->set_n_backup_statement(stmt, &stmt_backup);
  2179. if (!(specialflag & SPECIAL_NO_PRIOR))
  2180. my_pthread_setprio(pthread_self(), QUERY_PRIOR);
  2181. cursor->fetch(num_rows);
  2182. if (!(specialflag & SPECIAL_NO_PRIOR))
  2183. my_pthread_setprio(pthread_self(), WAIT_PRIOR);
  2184. if (!cursor->is_open())
  2185. {
  2186. stmt->close_cursor();
  2187. thd->cursor= 0;
  2188. reset_stmt_params(stmt);
  2189. }
  2190. thd->restore_backup_statement(stmt, &stmt_backup);
  2191. thd->stmt_arena= thd;
  2192. DBUG_VOID_RETURN;
  2193. }
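/*
  Client-side sketch (illustrative only, not part of this file):
  COM_STMT_FETCH is only sent when the statement was executed with a
  read-only cursor. Assuming 'stmt' is a prepared MYSQL_STMT handle:

    unsigned long type= (unsigned long) CURSOR_TYPE_READ_ONLY;
    mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, (void*) &type);
    mysql_stmt_execute(stmt);
    while (mysql_stmt_fetch(stmt) == 0)
    {
      // Each batch of rows may trigger another COM_STMT_FETCH round trip,
      // which ends up in mysqld_stmt_fetch() above.
    }
*/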
  2194. /**
  2195. Reset a prepared statement in case there was a recoverable error.
  2196. This function resets statement to the state it was right after prepare.
  2197. It can be used to:
  2198. - clear an error happened during mysqld_stmt_send_long_data
  2199. - cancel long data stream for all placeholders without
  2200. having to call mysqld_stmt_execute.
  2201. - close an open cursor
  2202. Sends 'OK' packet in case of success (statement was reset)
  2203. or 'ERROR' packet (unrecoverable error/statement not found/etc).
  2204. @param thd Thread handle
  2205. @param packet Packet with stmt id
  2206. */
  2207. void mysqld_stmt_reset(THD *thd, char *packet)
  2208. {
  2209. /* There is always space for 4 bytes in buffer */
  2210. ulong stmt_id= uint4korr(packet);
  2211. Prepared_statement *stmt;
  2212. DBUG_ENTER("mysqld_stmt_reset");
  2213. /* First of all clear possible warnings from the previous command */
  2214. mysql_reset_thd_for_next_command(thd);
  2215. status_var_increment(thd->status_var.com_stmt_reset);
  2216. if (!(stmt= find_prepared_statement(thd, stmt_id)))
  2217. {
  2218. char llbuf[22];
  2219. my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), sizeof(llbuf),
  2220. llstr(stmt_id, llbuf), "mysqld_stmt_reset");
  2221. DBUG_VOID_RETURN;
  2222. }
  2223. stmt->close_cursor();
  2224. /*
  2225. Clear parameters from data which could be set by
  2226. mysqld_stmt_send_long_data() call.
  2227. */
  2228. reset_stmt_params(stmt);
  2229. stmt->state= Query_arena::PREPARED;
  2230. general_log_print(thd, thd->command, NullS);
  2231. my_ok(thd);
  2232. DBUG_VOID_RETURN;
  2233. }
  2234. /**
  2235. Delete a prepared statement from memory.
  2236. @note
2237. We don't send any reply to this command.
  2238. */
  2239. void mysqld_stmt_close(THD *thd, char *packet)
  2240. {
  2241. /* There is always space for 4 bytes in packet buffer */
  2242. ulong stmt_id= uint4korr(packet);
  2243. Prepared_statement *stmt;
  2244. DBUG_ENTER("mysqld_stmt_close");
  2245. thd->stmt_da->disable_status();
  2246. if (!(stmt= find_prepared_statement(thd, stmt_id)))
  2247. DBUG_VOID_RETURN;
  2248. /*
  2249. The only way currently a statement can be deallocated when it's
  2250. in use is from within Dynamic SQL.
  2251. */
  2252. DBUG_ASSERT(! stmt->is_in_use());
  2253. stmt->deallocate();
  2254. general_log_print(thd, thd->command, NullS);
  2255. DBUG_VOID_RETURN;
  2256. }
  2257. /**
  2258. SQLCOM_DEALLOCATE implementation.
  2259. Close an SQL prepared statement. As this can be called from Dynamic
  2260. SQL, we should be careful to not close a statement that is currently
  2261. being executed.
  2262. @return
  2263. none: OK packet is sent in case of success, otherwise an error
  2264. message is set in THD
  2265. */
  2266. void mysql_sql_stmt_close(THD *thd)
  2267. {
  2268. Prepared_statement* stmt;
  2269. LEX_STRING *name= &thd->lex->prepared_stmt_name;
  2270. DBUG_PRINT("info", ("DEALLOCATE PREPARE: %.*s\n", (int) name->length,
  2271. name->str));
  2272. if (! (stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
  2273. my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0),
  2274. name->length, name->str, "DEALLOCATE PREPARE");
  2275. else if (stmt->is_in_use())
  2276. my_error(ER_PS_NO_RECURSION, MYF(0));
  2277. else
  2278. {
  2279. stmt->deallocate();
  2280. my_ok(thd);
  2281. }
  2282. }
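/*
  Illustration (not part of the original file):

    DEALLOCATE PREPARE stmt;    -- or the synonym DROP PREPARE stmt;

  As the code above shows, deallocating a statement that is currently being
  executed (possible only through Dynamic SQL) fails with ER_PS_NO_RECURSION.
*/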
  2283. /**
  2284. Handle long data in pieces from client.
2285. Get a part of a long data item. To keep the protocol efficient, we do
2286. not send any return packets here. If something goes wrong, the error
2287. will be sent on 'execute'. We assume that the client takes
2288. care of checking that all parts are sent to the server. (No check
2289. that we get an 'end of column' is performed on the server.)
  2290. @param thd Thread handle
  2291. @param packet String to append
  2292. @param packet_length Length of string (including end \\0)
  2293. */
  2294. void mysql_stmt_get_longdata(THD *thd, char *packet, ulong packet_length)
  2295. {
  2296. ulong stmt_id;
  2297. uint param_number;
  2298. Prepared_statement *stmt;
  2299. Item_param *param;
  2300. #ifndef EMBEDDED_LIBRARY
  2301. char *packet_end= packet + packet_length;
  2302. #endif
  2303. DBUG_ENTER("mysql_stmt_get_longdata");
  2304. status_var_increment(thd->status_var.com_stmt_send_long_data);
  2305. thd->stmt_da->disable_status();
  2306. #ifndef EMBEDDED_LIBRARY
  2307. /* Minimal size of long data packet is 6 bytes */
  2308. if (packet_length < MYSQL_LONG_DATA_HEADER)
  2309. DBUG_VOID_RETURN;
  2310. #endif
  2311. stmt_id= uint4korr(packet);
  2312. packet+= 4;
  2313. if (!(stmt=find_prepared_statement(thd, stmt_id)))
  2314. DBUG_VOID_RETURN;
  2315. param_number= uint2korr(packet);
  2316. packet+= 2;
  2317. #ifndef EMBEDDED_LIBRARY
  2318. if (param_number >= stmt->param_count)
  2319. {
  2320. /* Error will be sent in execute call */
  2321. stmt->state= Query_arena::ERROR;
  2322. stmt->last_errno= ER_WRONG_ARGUMENTS;
  2323. sprintf(stmt->last_error, ER(ER_WRONG_ARGUMENTS),
  2324. "mysqld_stmt_send_long_data");
  2325. DBUG_VOID_RETURN;
  2326. }
  2327. #endif
  2328. param= stmt->param_array[param_number];
  2329. #ifndef EMBEDDED_LIBRARY
  2330. if (param->set_longdata(packet, (ulong) (packet_end - packet)))
  2331. #else
  2332. if (param->set_longdata(thd->extra_data, thd->extra_length))
  2333. #endif
  2334. {
  2335. stmt->state= Query_arena::ERROR;
  2336. stmt->last_errno= ER_OUTOFMEMORY;
  2337. sprintf(stmt->last_error, ER(ER_OUTOFMEMORY), 0);
  2338. }
  2339. general_log_print(thd, thd->command, NullS);
  2340. DBUG_VOID_RETURN;
  2341. }
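/*
  Client-side sketch (illustrative only, not part of this file): long data
  arrives through COM_STMT_SEND_LONG_DATA, i.e. mysql_stmt_send_long_data(),
  possibly in several chunks per parameter, before mysql_stmt_execute().
  Assuming 'stmt' is a prepared MYSQL_STMT handle and parameter 0 is a blob:

    const char *part1= "BLOB part one, ";
    const char *part2= "and part two";
    mysql_stmt_send_long_data(stmt, 0, part1, (unsigned long) strlen(part1));
    mysql_stmt_send_long_data(stmt, 0, part2, (unsigned long) strlen(part2));
    mysql_stmt_execute(stmt);   // errors recorded above are reported here
*/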
  2342. /***************************************************************************
  2343. Select_fetch_protocol_binary
  2344. ****************************************************************************/
  2345. Select_fetch_protocol_binary::Select_fetch_protocol_binary(THD *thd_arg)
  2346. :protocol(thd_arg)
  2347. {}
  2348. bool Select_fetch_protocol_binary::send_fields(List<Item> &list, uint flags)
  2349. {
  2350. bool rc;
  2351. Protocol *save_protocol= thd->protocol;
  2352. /*
  2353. Protocol::send_fields caches the information about column types:
  2354. this information is later used to send data. Therefore, the same
  2355. dedicated Protocol object must be used for all operations with
  2356. a cursor.
  2357. */
  2358. thd->protocol= &protocol;
  2359. rc= select_send::send_fields(list, flags);
  2360. thd->protocol= save_protocol;
  2361. return rc;
  2362. }
  2363. bool Select_fetch_protocol_binary::send_eof()
  2364. {
  2365. ::my_eof(thd);
  2366. return FALSE;
  2367. }
  2368. bool
  2369. Select_fetch_protocol_binary::send_data(List<Item> &fields)
  2370. {
  2371. Protocol *save_protocol= thd->protocol;
  2372. bool rc;
  2373. thd->protocol= &protocol;
  2374. rc= select_send::send_data(fields);
  2375. thd->protocol= save_protocol;
  2376. return rc;
  2377. }
  2378. /*******************************************************************
  2379. * Reprepare_observer
  2380. *******************************************************************/
  2381. /** Push an error to the error stack and return TRUE for now. */
  2382. bool
  2383. Reprepare_observer::report_error(THD *thd)
  2384. {
  2385. /*
  2386. This 'error' is purely internal to the server:
  2387. - No exception handler is invoked,
  2388. - No condition is added in the condition area (warn_list).
  2389. The diagnostics area is set to an error status to enforce
  2390. that this thread execution stops and returns to the caller,
  2391. backtracking all the way to Prepared_statement::execute_loop().
  2392. */
  2393. thd->stmt_da->set_error_status(thd, ER_NEED_REPREPARE,
  2394. ER(ER_NEED_REPREPARE), "HY000");
  2395. m_invalidated= TRUE;
  2396. return TRUE;
  2397. }
  2398. /***************************************************************************
  2399. Prepared_statement
  2400. ****************************************************************************/
  2401. Prepared_statement::Prepared_statement(THD *thd_arg)
  2402. :Statement(NULL, &main_mem_root,
  2403. INITIALIZED, ++thd_arg->statement_id_counter),
  2404. thd(thd_arg),
  2405. result(thd_arg),
  2406. param_array(0),
  2407. param_count(0),
  2408. last_errno(0),
  2409. flags((uint) IS_IN_USE),
  2410. m_sp_cache_version(0)
  2411. {
  2412. init_sql_alloc(&main_mem_root, thd_arg->variables.query_alloc_block_size,
  2413. thd_arg->variables.query_prealloc_size);
  2414. *last_error= '\0';
  2415. }
  2416. void Prepared_statement::setup_set_params()
  2417. {
  2418. /*
  2419. Note: BUG#25843 applies here too (query cache lookup uses thd->db, not
  2420. db from "prepare" time).
  2421. */
  2422. if (query_cache_maybe_disabled(thd)) // we won't expand the query
2423. lex->safe_to_cache_query= FALSE; // so don't cache it at execution
  2424. /*
  2425. Decide if we have to expand the query (because we must write it to logs or
  2426. because we want to look it up in the query cache) or not.
  2427. */
  2428. if ((mysql_bin_log.is_open() && is_update_query(lex->sql_command)) ||
  2429. opt_log || opt_slow_log ||
  2430. query_cache_is_cacheable_query(lex))
  2431. {
  2432. set_params_from_vars= insert_params_from_vars_with_log;
  2433. #ifndef EMBEDDED_LIBRARY
  2434. set_params= insert_params_with_log;
  2435. #else
  2436. set_params_data= emb_insert_params_with_log;
  2437. #endif
  2438. }
  2439. else
  2440. {
  2441. set_params_from_vars= insert_params_from_vars;
  2442. #ifndef EMBEDDED_LIBRARY
  2443. set_params= insert_params;
  2444. #else
  2445. set_params_data= emb_insert_params;
  2446. #endif
  2447. }
  2448. }
  2449. /**
  2450. Destroy this prepared statement, cleaning up all used memory
  2451. and resources.
  2452. This is called from ::deallocate() to handle COM_STMT_CLOSE and
  2453. DEALLOCATE PREPARE or when THD ends and all prepared statements are freed.
  2454. */
  2455. Prepared_statement::~Prepared_statement()
  2456. {
  2457. DBUG_ENTER("Prepared_statement::~Prepared_statement");
  2458. DBUG_PRINT("enter",("stmt: 0x%lx cursor: 0x%lx",
  2459. (long) this, (long) cursor));
  2460. delete cursor;
  2461. /*
  2462. We have to call free on the items even if cleanup is called as some items,
  2463. like Item_param, don't free everything until free_items()
  2464. */
  2465. free_items();
  2466. if (lex)
  2467. {
  2468. delete lex->result;
  2469. delete (st_lex_local *) lex;
  2470. }
  2471. free_root(&main_mem_root, MYF(0));
  2472. DBUG_VOID_RETURN;
  2473. }
  2474. Query_arena::Type Prepared_statement::type() const
  2475. {
  2476. return PREPARED_STATEMENT;
  2477. }
  2478. void Prepared_statement::cleanup_stmt()
  2479. {
  2480. DBUG_ENTER("Prepared_statement::cleanup_stmt");
  2481. DBUG_PRINT("enter",("stmt: 0x%lx", (long) this));
  2482. DBUG_ASSERT(lex->sphead == 0);
  2483. /* The order is important */
  2484. lex->unit.cleanup();
  2485. cleanup_items(free_list);
  2486. thd->cleanup_after_query();
  2487. close_thread_tables(thd);
  2488. thd->rollback_item_tree_changes();
  2489. DBUG_VOID_RETURN;
  2490. }
  2491. bool Prepared_statement::set_name(LEX_STRING *name_arg)
  2492. {
  2493. name.length= name_arg->length;
  2494. name.str= (char*) memdup_root(mem_root, name_arg->str, name_arg->length);
  2495. return name.str == 0;
  2496. }
  2497. /**
  2498. Remember the current database.
  2499. We must reset/restore the current database during execution of
  2500. a prepared statement since it affects execution environment:
  2501. privileges, @@character_set_database, and other.
  2502. @return Returns an error if out of memory.
  2503. */
  2504. bool
  2505. Prepared_statement::set_db(const char *db_arg, uint db_length_arg)
  2506. {
  2507. /* Remember the current database. */
  2508. if (db_arg && db_length_arg)
  2509. {
  2510. db= this->strmake(db_arg, db_length_arg);
  2511. db_length= db_length_arg;
  2512. }
  2513. else
  2514. {
  2515. db= NULL;
  2516. db_length= 0;
  2517. }
  2518. return db_arg != NULL && db == NULL;
  2519. }
  2520. /**************************************************************************
  2521. Common parts of mysql_[sql]_stmt_prepare, mysql_[sql]_stmt_execute.
  2522. Essentially, these functions do all the magic of preparing/executing
  2523. a statement, leaving network communication, input data handling and
  2524. global THD state management to the caller.
  2525. ***************************************************************************/
  2526. /**
  2527. Parse statement text, validate the statement, and prepare it for execution.
  2528. You should not change global THD state in this function, if at all
  2529. possible: it may be called from any context, e.g. when executing
  2530. a COM_* command, and SQLCOM_* command, or a stored procedure.
  2531. @param packet statement text
  2532. @param packet_len
  2533. @note
  2534. Precondition:
2535. The caller must ensure that thd->change_list and thd->free_list
2536. are empty: this function will not back them up but will free them
2537. at the end of its execution.
  2538. @note
  2539. Postcondition:
  2540. thd->mem_root contains unused memory allocated during validation.
  2541. */
  2542. bool Prepared_statement::prepare(const char *packet, uint packet_len)
  2543. {
  2544. bool error;
  2545. Statement stmt_backup;
  2546. Query_arena *old_stmt_arena;
  2547. DBUG_ENTER("Prepared_statement::prepare");
  2548. /*
  2549. If this is an SQLCOM_PREPARE, we also increase Com_prepare_sql.
  2550. However, it seems handy if com_stmt_prepare is increased always,
  2551. no matter what kind of prepare is processed.
  2552. */
  2553. status_var_increment(thd->status_var.com_stmt_prepare);
  2554. if (! (lex= new (mem_root) st_lex_local))
  2555. DBUG_RETURN(TRUE);
  2556. if (set_db(thd->db, thd->db_length))
  2557. DBUG_RETURN(TRUE);
  2558. /*
2559. alloc_query() uses thd->mem_root and thd->query, so we should call
2560. both backup_statement() and backup_query_arena() here.
  2561. */
  2562. thd->set_n_backup_statement(this, &stmt_backup);
  2563. thd->set_n_backup_active_arena(this, &stmt_backup);
  2564. if (alloc_query(thd, packet, packet_len))
  2565. {
  2566. thd->restore_backup_statement(this, &stmt_backup);
  2567. thd->restore_active_arena(this, &stmt_backup);
  2568. DBUG_RETURN(TRUE);
  2569. }
  2570. old_stmt_arena= thd->stmt_arena;
  2571. thd->stmt_arena= this;
  2572. Parser_state parser_state(thd, thd->query, thd->query_length);
  2573. parser_state.m_lip.stmt_prepare_mode= TRUE;
  2574. lex_start(thd);
  2575. error= parse_sql(thd, & parser_state, NULL) ||
  2576. thd->is_error() ||
  2577. init_param_array(this);
  2578. lex->set_trg_event_type_for_tables();
  2579. /*
  2580. While doing context analysis of the query (in check_prepared_statement)
  2581. we allocate a lot of additional memory: for open tables, JOINs, derived
  2582. tables, etc. Let's save a snapshot of current parse tree to the
  2583. statement and restore original THD. In cases when some tree
  2584. transformation can be reused on execute, we set again thd->mem_root from
  2585. stmt->mem_root (see setup_wild for one place where we do that).
  2586. */
  2587. thd->restore_active_arena(this, &stmt_backup);
  2588. /*
  2589. If called from a stored procedure, ensure that we won't rollback
  2590. external changes when cleaning up after validation.
  2591. */
  2592. DBUG_ASSERT(thd->change_list.is_empty());
  2593. /*
  2594. The only case where we should have items in the thd->free_list is
  2595. after stmt->set_params_from_vars(), which may in some cases create
  2596. Item_null objects.
  2597. */
  2598. if (error == 0)
  2599. error= check_prepared_statement(this);
  2600. /*
  2601. Currently CREATE PROCEDURE/TRIGGER/EVENT are prohibited in prepared
2602. statements: ensure we have no memory leak here if someone tries
  2603. to PREPARE stmt FROM "CREATE PROCEDURE ..."
  2604. */
  2605. DBUG_ASSERT(lex->sphead == NULL || error != 0);
  2606. if (lex->sphead)
  2607. {
  2608. delete lex->sphead;
  2609. lex->sphead= NULL;
  2610. }
  2611. lex_end(lex);
  2612. cleanup_stmt();
  2613. thd->restore_backup_statement(this, &stmt_backup);
  2614. thd->stmt_arena= old_stmt_arena;
  2615. if (error == 0)
  2616. {
  2617. setup_set_params();
  2618. init_stmt_after_parse(lex);
  2619. state= Query_arena::PREPARED;
  2620. flags&= ~ (uint) IS_IN_USE;
  2621. /*
  2622. This is for prepared statement validation purposes.
  2623. A statement looks up and pre-loads all its stored functions
  2624. at prepare. Later on, if a function is gone from the cache,
  2625. execute may fail.
  2626. Remember the cache version to be able to invalidate the prepared
  2627. statement at execute if it changes.
  2628. We only need to care about version of the stored functions cache:
  2629. if a prepared statement uses a stored procedure, it's indirect,
  2630. via a stored function. The only exception is SQLCOM_CALL,
  2631. but the latter one looks up the stored procedure each time
  2632. it's invoked, rather than once at prepare.
  2633. */
  2634. m_sp_cache_version= sp_cache_version(&thd->sp_func_cache);
  2635. /*
  2636. Log COM_EXECUTE to the general log. Note, that in case of SQL
  2637. prepared statements this causes two records to be output:
  2638. Query PREPARE stmt from @user_variable
  2639. Prepare <statement SQL text>
  2640. This is considered user-friendly, since in the
  2641. second log entry we output the actual statement text.
  2642. Do not print anything if this is an SQL prepared statement and
  2643. we're inside a stored procedure (also called Dynamic SQL) --
  2644. sub-statements inside stored procedures are not logged into
  2645. the general log.
  2646. */
  2647. if (thd->spcont == NULL)
  2648. general_log_write(thd, COM_STMT_PREPARE, query, query_length);
  2649. }
  2650. DBUG_RETURN(error);
  2651. }
  2652. /**
  2653. Assign parameter values either from variables, in case of SQL PS
  2654. or from the execute packet.
  2655. @param expanded_query a container with the original SQL statement.
  2656. '?' placeholders will be replaced with
  2657. their values in case of success.
  2658. The result is used for logging and replication
  2659. @param packet pointer to execute packet.
  2660. NULL in case of SQL PS
  2661. @param packet_end end of the packet. NULL in case of SQL PS
2662. @todo Use a parameter source class family instead of 'if's, and
  2663. support stored procedure variables.
  2664. @retval TRUE an error occurred when assigning a parameter (likely
  2665. a conversion error or out of memory, or malformed packet)
  2666. @retval FALSE success
  2667. */
  2668. bool
  2669. Prepared_statement::set_parameters(String *expanded_query,
  2670. uchar *packet, uchar *packet_end)
  2671. {
  2672. bool is_sql_ps= packet == NULL;
  2673. bool res= FALSE;
  2674. if (is_sql_ps)
  2675. {
  2676. /* SQL prepared statement */
  2677. res= set_params_from_vars(this, thd->lex->prepared_stmt_params,
  2678. expanded_query);
  2679. }
  2680. else if (param_count)
  2681. {
  2682. #ifndef EMBEDDED_LIBRARY
  2683. uchar *null_array= packet;
  2684. res= (setup_conversion_functions(this, &packet, packet_end) ||
  2685. set_params(this, null_array, packet, packet_end, expanded_query));
  2686. #else
  2687. /*
2688. In the embedded library we re-install conversion routines each time
2689. we set parameters, and we also don't need to parse the packet.
  2690. So we do it in one function.
  2691. */
  2692. res= set_params_data(this, expanded_query);
  2693. #endif
  2694. }
  2695. if (res)
  2696. {
  2697. my_error(ER_WRONG_ARGUMENTS, MYF(0),
  2698. is_sql_ps ? "EXECUTE" : "mysqld_stmt_execute");
  2699. reset_stmt_params(this);
  2700. }
  2701. return res;
  2702. }
  2703. /**
  2704. Execute a prepared statement. Re-prepare it a limited number
  2705. of times if necessary.
  2706. Try to execute a prepared statement. If there is a metadata
  2707. validation error, prepare a new copy of the prepared statement,
  2708. swap the old and the new statements, and try again.
  2709. If there is a validation error again, repeat the above, but
  2710. perform no more than MAX_REPREPARE_ATTEMPTS.
  2711. @note We have to try several times in a loop since we
  2712. release metadata locks on tables after prepared statement
  2713. prepare. Therefore, a DDL statement may sneak in between prepare
  2714. and execute of a new statement. If this happens repeatedly
  2715. more than MAX_REPREPARE_ATTEMPTS times, we give up.
  2716. In future we need to be able to keep the metadata locks between
  2717. prepare and execute, but right now open_and_lock_tables(), as
  2718. well as close_thread_tables() are buried deep inside
  2719. execution code (mysql_execute_command()).
  2720. @return TRUE if an error, FALSE if success
  2721. @retval TRUE either MAX_REPREPARE_ATTEMPTS has been reached,
  2722. or some general error
  2723. @retval FALSE successfully executed the statement, perhaps
  2724. after having reprepared it a few times.
  2725. */
  2726. bool
  2727. Prepared_statement::execute_loop(String *expanded_query,
  2728. bool open_cursor,
  2729. uchar *packet,
  2730. uchar *packet_end)
  2731. {
  2732. const int MAX_REPREPARE_ATTEMPTS= 3;
  2733. Reprepare_observer reprepare_observer;
  2734. bool error;
  2735. int reprepare_attempt= 0;
  2736. if (set_parameters(expanded_query, packet, packet_end))
  2737. return TRUE;
  2738. reexecute:
  2739. reprepare_observer.reset_reprepare_observer();
  2740. /*
  2741. If the free_list is not empty, we'll wrongly free some externally
  2742. allocated items when cleaning up after validation of the prepared
  2743. statement.
  2744. */
  2745. DBUG_ASSERT(thd->free_list == NULL);
  2746. /*
  2747. Install the metadata observer. If some metadata version is
  2748. different from prepare time and an observer is installed,
  2749. the observer method will be invoked to push an error into
  2750. the error stack.
  2751. */
  2752. if (sql_command_flags[lex->sql_command] &
  2753. CF_REEXECUTION_FRAGILE)
  2754. {
  2755. DBUG_ASSERT(thd->m_reprepare_observer == NULL);
  2756. thd->m_reprepare_observer = &reprepare_observer;
  2757. }
  2758. if (!(specialflag & SPECIAL_NO_PRIOR))
  2759. my_pthread_setprio(pthread_self(),QUERY_PRIOR);
  2760. error= execute(expanded_query, open_cursor) || thd->is_error();
  2761. if (!(specialflag & SPECIAL_NO_PRIOR))
  2762. my_pthread_setprio(pthread_self(), WAIT_PRIOR);
  2763. thd->m_reprepare_observer= NULL;
  2764. if (error && !thd->is_fatal_error && !thd->killed &&
  2765. reprepare_observer.is_invalidated() &&
  2766. reprepare_attempt++ < MAX_REPREPARE_ATTEMPTS)
  2767. {
  2768. DBUG_ASSERT(thd->stmt_da->sql_errno() == ER_NEED_REPREPARE);
  2769. thd->clear_error();
  2770. error= reprepare();
  2771. if (! error) /* Success */
  2772. goto reexecute;
  2773. }
  2774. reset_stmt_params(this);
  2775. return error;
  2776. }
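/*
  A sketch of the situation the retry loop above handles (example only,
  object names are invented):

    conn1> PREPARE s FROM 'SELECT a FROM t1';
    conn2> ALTER TABLE t1 ADD COLUMN b INT;    -- metadata changes after prepare
    conn1> EXECUTE s;

  During EXECUTE the installed Reprepare_observer reports ER_NEED_REPREPARE,
  execute() fails, and the loop calls reprepare() and jumps back to
  'reexecute'. Only if the statement keeps getting invalidated more than
  MAX_REPREPARE_ATTEMPTS times in a row is the error returned to the user.
*/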
  2777. /**
  2778. Reprepare this prepared statement.
  2779. Currently this is implemented by creating a new prepared
  2780. statement, preparing it with the original query and then
  2781. swapping the new statement and the original one.
  2782. @retval TRUE an error occurred. Possible errors include
  2783. incompatibility of new and old result set
  2784. metadata
  2785. @retval FALSE success, the statement has been reprepared
  2786. */
  2787. bool
  2788. Prepared_statement::reprepare()
  2789. {
  2790. char saved_cur_db_name_buf[NAME_LEN+1];
  2791. LEX_STRING saved_cur_db_name=
  2792. { saved_cur_db_name_buf, sizeof(saved_cur_db_name_buf) };
  2793. LEX_STRING stmt_db_name= { db, db_length };
  2794. bool cur_db_changed;
  2795. bool error;
  2796. Prepared_statement copy(thd);
  2797. copy.set_sql_prepare(); /* To suppress sending metadata to the client. */
  2798. status_var_increment(thd->status_var.com_stmt_reprepare);
  2799. if (mysql_opt_change_db(thd, &stmt_db_name, &saved_cur_db_name, TRUE,
  2800. &cur_db_changed))
  2801. return TRUE;
  2802. error= ((name.str && copy.set_name(&name)) ||
  2803. copy.prepare(query, query_length) ||
  2804. validate_metadata(&copy));
  2805. if (cur_db_changed)
  2806. mysql_change_db(thd, &saved_cur_db_name, TRUE);
  2807. if (! error)
  2808. {
  2809. swap_prepared_statement(&copy);
  2810. swap_parameter_array(param_array, copy.param_array, param_count);
  2811. #ifndef DBUG_OFF
  2812. is_reprepared= TRUE;
  2813. #endif
  2814. /*
  2815. Clear possible warnings during reprepare, it has to be completely
  2816. transparent to the user. We use clear_warning_info() since
2817. no separate query id was issued for the re-prepare.
  2818. Sic: we can't simply silence warnings during reprepare, because if
2819. it fails, we need to return all the warnings to the user.
  2820. */
  2821. thd->warning_info->clear_warning_info(thd->query_id);
  2822. }
  2823. return error;
  2824. }
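/*
  Note: preparing into a local copy and swapping its guts in only on success
  means that a failed re-prepare leaves the original statement intact, so
  its diagnostics and accumulated warnings can still be returned to the user.
*/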
  2825. /**
  2826. Validate statement result set metadata (if the statement returns
  2827. a result set).
  2828. Currently we only check that the number of columns of the result
  2829. set did not change.
  2830. This is a helper method used during re-prepare.
  2831. @param[in] copy the re-prepared prepared statement to verify
  2832. the metadata of
  2833. @retval TRUE error, ER_PS_REBIND is reported
2834. @retval FALSE the statement returns no, or compatible, metadata
  2835. */
  2836. bool Prepared_statement::validate_metadata(Prepared_statement *copy)
  2837. {
  2838. /**
  2839. If this is an SQL prepared statement or EXPLAIN,
  2840. return FALSE -- the metadata of the original SELECT,
  2841. if any, has not been sent to the client.
  2842. */
  2843. if (is_sql_prepare() || lex->describe)
  2844. return FALSE;
  2845. if (lex->select_lex.item_list.elements !=
  2846. copy->lex->select_lex.item_list.elements)
  2847. {
  2848. /** Column counts mismatch, update the client */
  2849. thd->server_status|= SERVER_STATUS_METADATA_CHANGED;
  2850. }
  2851. return FALSE;
  2852. }
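/*
  Illustrative example (names invented): a statement prepared as
  'SELECT * FROM t1' while t1 had two columns produces three columns after
  'ALTER TABLE t1 ADD COLUMN c INT' and a re-prepare. The column counts
  differ, so SERVER_STATUS_METADATA_CHANGED is set to tell the client that
  the metadata sent at prepare time is stale; the mismatch itself is not
  treated as an error here, which is why FALSE is still returned.
*/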
  2853. /**
  2854. Replace the original prepared statement with a prepared copy.
  2855. This is a private helper that is used as part of statement
2856. reprepare.
  2857. @return This function does not return any errors.
  2858. */
  2859. void
  2860. Prepared_statement::swap_prepared_statement(Prepared_statement *copy)
  2861. {
  2862. Statement tmp_stmt;
  2863. /* Swap memory roots. */
  2864. swap_variables(MEM_ROOT, main_mem_root, copy->main_mem_root);
  2865. /* Swap the arenas */
  2866. tmp_stmt.set_query_arena(this);
  2867. set_query_arena(copy);
  2868. copy->set_query_arena(&tmp_stmt);
  2869. /* Swap the statement parent classes */
  2870. tmp_stmt.set_statement(this);
  2871. set_statement(copy);
  2872. copy->set_statement(&tmp_stmt);
  2873. /* Swap ids back, we need the original id */
  2874. swap_variables(ulong, id, copy->id);
  2875. /* Swap mem_roots back, they must continue pointing at the main_mem_roots */
  2876. swap_variables(MEM_ROOT *, mem_root, copy->mem_root);
  2877. /*
  2878. Swap the old and the new parameters array. The old array
  2879. is allocated in the old arena.
  2880. */
  2881. swap_variables(Item_param **, param_array, copy->param_array);
  2882. /* Swap flags: this is perhaps unnecessary */
  2883. swap_variables(uint, flags, copy->flags);
  2884. /* Swap names, the old name is allocated in the wrong memory root */
  2885. swap_variables(LEX_STRING, name, copy->name);
  2886. /* Ditto */
  2887. swap_variables(char *, db, copy->db);
  2888. swap_variables(ulong, m_sp_cache_version, copy->m_sp_cache_version);
  2889. DBUG_ASSERT(db_length == copy->db_length);
  2890. DBUG_ASSERT(param_count == copy->param_count);
  2891. DBUG_ASSERT(thd == copy->thd);
  2892. last_error[0]= '\0';
  2893. last_errno= 0;
  2894. }
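/*
  Note on the swap above: main_mem_root is a member of Prepared_statement
  itself, so after the roots are exchanged the inherited Query_arena::mem_root
  pointers are swapped back to keep each object pointing at its own
  main_mem_root. db_length and param_count are not swapped: both statements
  were prepared from the same query text against the same database, so they
  are asserted to be equal instead.
*/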
  2895. /**
  2896. Execute a prepared statement.
  2897. You should not change global THD state in this function, if at all
  2898. possible: it may be called from any context, e.g. when executing
2899. a COM_* command, an SQLCOM_* command, or a stored procedure.
  2900. @param expanded_query A query for binlogging which has all parameter
  2901. markers ('?') replaced with their actual values.
  2902. @param open_cursor True if an attempt to open a cursor should be made.
2903. Currently used only in the binary protocol.
  2904. @note
  2905. Preconditions, postconditions.
  2906. - See the comment for Prepared_statement::prepare().
  2907. @retval
  2908. FALSE ok
  2909. @retval
  2910. TRUE Error
  2911. */
  2912. bool Prepared_statement::execute(String *expanded_query, bool open_cursor)
  2913. {
  2914. Statement stmt_backup;
  2915. Query_arena *old_stmt_arena;
  2916. bool error= TRUE;
  2917. char saved_cur_db_name_buf[NAME_LEN+1];
  2918. LEX_STRING saved_cur_db_name=
  2919. { saved_cur_db_name_buf, sizeof(saved_cur_db_name_buf) };
  2920. bool cur_db_changed;
  2921. LEX_STRING stmt_db_name= { db, db_length };
  2922. status_var_increment(thd->status_var.com_stmt_execute);
  2923. /* Check if we got an error when sending long data */
  2924. if (state == Query_arena::ERROR)
  2925. {
  2926. my_message(last_errno, last_error, MYF(0));
  2927. return TRUE;
  2928. }
  2929. if (flags & (uint) IS_IN_USE)
  2930. {
  2931. my_error(ER_PS_NO_RECURSION, MYF(0));
  2932. return TRUE;
  2933. }
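/*
  Illustrative example of the recursion rejected above (names invented):

    PREPARE s FROM 'CALL p()';   -- where procedure p() itself does EXECUTE s
    EXECUTE s;                   -- the nested EXECUTE finds IS_IN_USE set
                                 -- and fails with ER_PS_NO_RECURSION
*/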
  2934. /*
  2935. Reprepare the statement if we're using stored functions
  2936. and the version of the stored routines cache has changed.
  2937. */
  2938. if (lex->uses_stored_routines() &&
  2939. m_sp_cache_version != sp_cache_version(&thd->sp_func_cache) &&
  2940. thd->m_reprepare_observer &&
  2941. thd->m_reprepare_observer->report_error(thd))
  2942. {
  2943. return TRUE;
  2944. }
  2945. /*
  2946. For SHOW VARIABLES lex->result is NULL, as it's a non-SELECT
  2947. command. For such queries we don't return an error and don't
  2948. open a cursor -- the client library will recognize this case and
  2949. materialize the result set.
  2950. For SELECT statements lex->result is created in
  2951. check_prepared_statement. lex->result->simple_select() is FALSE
  2952. in INSERT ... SELECT and similar commands.
  2953. */
  2954. if (open_cursor && lex->result && lex->result->check_simple_select())
  2955. {
  2956. DBUG_PRINT("info",("Cursor asked for not SELECT stmt"));
  2957. return TRUE;
  2958. }
2959. /* In case the command contains a call to an SP which re-uses this statement name */
  2960. flags|= IS_IN_USE;
  2961. close_cursor();
  2962. /*
  2963. If the free_list is not empty, we'll wrongly free some externally
  2964. allocated items when cleaning up after execution of this statement.
  2965. */
  2966. DBUG_ASSERT(thd->change_list.is_empty());
  2967. /*
  2968. The only case where we should have items in the thd->free_list is
  2969. after stmt->set_params_from_vars(), which may in some cases create
  2970. Item_null objects.
  2971. */
  2972. thd->set_n_backup_statement(this, &stmt_backup);
  2973. /*
  2974. Change the current database (if needed).
  2975. Force switching, because the database of the prepared statement may be
  2976. NULL (prepared statements can be created while no current database
  2977. selected).
  2978. */
  2979. if (mysql_opt_change_db(thd, &stmt_db_name, &saved_cur_db_name, TRUE,
  2980. &cur_db_changed))
  2981. goto error;
  2982. /* Allocate query. */
  2983. if (expanded_query->length() &&
  2984. alloc_query(thd, (char*) expanded_query->ptr(),
  2985. expanded_query->length()))
  2986. {
  2987. my_error(ER_OUTOFMEMORY, 0, expanded_query->length());
  2988. goto error;
  2989. }
  2990. /*
  2991. Expanded query is needed for slow logging, so we want thd->query
  2992. to point at it even after we restore from backup. This is ok, as
  2993. expanded query was allocated in thd->mem_root.
  2994. */
  2995. stmt_backup.query= thd->query;
  2996. stmt_backup.query_length= thd->query_length;
  2997. /*
  2998. At first execution of prepared statement we may perform logical
  2999. transformations of the query tree. Such changes should be performed
  3000. on the parse tree of current prepared statement and new items should
  3001. be allocated in its memory root. Set the appropriate pointer in THD
  3002. to the arena of the statement.
  3003. */
  3004. old_stmt_arena= thd->stmt_arena;
  3005. thd->stmt_arena= this;
  3006. reinit_stmt_before_use(thd, lex);
  3007. /* Go! */
  3008. if (open_cursor)
  3009. error= mysql_open_cursor(thd, (uint) ALWAYS_MATERIALIZED_CURSOR,
  3010. &result, &cursor);
  3011. else
  3012. {
  3013. /*
  3014. Try to find it in the query cache, if not, execute it.
  3015. Note that multi-statements cannot exist here (they are not supported in
  3016. prepared statements).
  3017. */
  3018. if (query_cache_send_result_to_client(thd, thd->query,
  3019. thd->query_length) <= 0)
  3020. {
  3021. MYSQL_QUERY_EXEC_START(thd->query,
  3022. thd->thread_id,
  3023. (char *) (thd->db ? thd->db : ""),
  3024. thd->security_ctx->priv_user,
  3025. (char *) thd->security_ctx->host_or_ip,
  3026. 1);
  3027. error= mysql_execute_command(thd);
  3028. MYSQL_QUERY_EXEC_DONE(error);
  3029. }
  3030. }
  3031. /*
  3032. Restore the current database (if changed).
  3033. Force switching back to the saved current database (if changed),
  3034. because it may be NULL. In this case, mysql_change_db() would generate
  3035. an error.
  3036. */
  3037. if (cur_db_changed)
  3038. mysql_change_db(thd, &saved_cur_db_name, TRUE);
  3039. /* Assert that if an error, no cursor is open */
  3040. DBUG_ASSERT(! (error && cursor));
  3041. if (! cursor)
  3042. cleanup_stmt();
  3043. thd->set_statement(&stmt_backup);
  3044. thd->stmt_arena= old_stmt_arena;
  3045. if (state == Query_arena::PREPARED)
  3046. state= Query_arena::EXECUTED;
  3047. /*
  3048. Log COM_EXECUTE to the general log. Note, that in case of SQL
  3049. prepared statements this causes two records to be output:
  3050. Query EXECUTE <statement name>
  3051. Execute <statement SQL text>
  3052. This is considered user-friendly, since in the
  3053. second log entry we output values of parameter markers.
  3054. Do not print anything if this is an SQL prepared statement and
  3055. we're inside a stored procedure (also called Dynamic SQL) --
  3056. sub-statements inside stored procedures are not logged into
  3057. the general log.
  3058. */
  3059. if (error == 0 && thd->spcont == NULL)
  3060. general_log_write(thd, COM_STMT_EXECUTE, thd->query, thd->query_length);
  3061. error:
  3062. flags&= ~ (uint) IS_IN_USE;
  3063. return error;
  3064. }
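/*
  Note: when a cursor was successfully opened, cleanup_stmt() is skipped
  above and freeing of the statement's runtime state is deferred until the
  cursor is closed. The IS_IN_USE flag, however, is always cleared on the
  way out, whether we get here through success or through the 'error' label.
*/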
  3065. /** Common part of DEALLOCATE PREPARE and mysqld_stmt_close. */
  3066. void Prepared_statement::deallocate()
  3067. {
  3068. /* We account deallocate in the same manner as mysqld_stmt_close */
  3069. status_var_increment(thd->status_var.com_stmt_close);
  3070. /* Statement map calls delete stmt on erase */
  3071. thd->stmt_map.erase(this);
  3072. }