/*****************************************************************************

Copyright (c) 1997, 2009, Innobase Oy. All Rights Reserved.
Copyright (c) 2008, Google Inc.

Portions of this file contain modifications contributed and copyrighted by
Google, Inc. Those modifications are gratefully acknowledged and are described
briefly in the InnoDB documentation. The contributions by Google are
incorporated with their permission, and subject to the conditions contained in
the file COPYING.Google.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place, Suite 330, Boston, MA 02111-1307 USA

*****************************************************************************/

/***************************************************//**
@file row/row0sel.c
Select

Created 12/19/1997 Heikki Tuuri
*******************************************************/

#include "row0sel.h"

#ifdef UNIV_NONINL
#include "row0sel.ic"
#endif

#include "dict0dict.h"
#include "dict0boot.h"
#include "trx0undo.h"
#include "trx0trx.h"
#include "btr0btr.h"
#include "btr0cur.h"
#include "btr0sea.h"
#include "mach0data.h"
#include "que0que.h"
#include "row0upd.h"
#include "row0row.h"
#include "row0vers.h"
#include "rem0cmp.h"
#include "lock0lock.h"
#include "eval0eval.h"
#include "pars0sym.h"
#include "pars0pars.h"
#include "row0mysql.h"
#include "read0read.h"
#include "buf0lru.h"
#include "ha_prototypes.h"

/* Maximum number of rows to prefetch; MySQL interface has another parameter */
#define SEL_MAX_N_PREFETCH	16

/* Number of rows fetched, after which to start prefetching; MySQL interface
has another parameter */
#define SEL_PREFETCH_LIMIT	1

/* When a select has accessed about this many pages, it returns control back
to que_run_threads: this is to allow canceling runaway queries */
#define SEL_COST_LIMIT	100

/* Flags for search shortcut */
#define SEL_FOUND	0
#define SEL_EXHAUSTED	1
#define SEL_RETRY	2
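
/* The three flags above are the possible return values of
row_sel_try_search_shortcut() further down in this file: SEL_FOUND means the
unique search found a visible matching record and its columns were fetched,
SEL_EXHAUSTED means no qualifying record can be returned, and SEL_RETRY means
the shortcut could not give a definite answer and the search must be redone
without it. */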

/********************************************************************//**
Returns TRUE if the user-defined column in a secondary index record
is alphabetically the same as the corresponding BLOB column in the clustered
index record.
NOTE: the comparison is NOT done as a binary comparison, but character
fields are compared with collation!
@return TRUE if the columns are equal */
static
ibool
row_sel_sec_rec_is_for_blob(
/*========================*/
	ulint		mtype,		/*!< in: main type */
	ulint		prtype,		/*!< in: precise type */
	ulint		mbminlen,	/*!< in: minimum length of a
					multi-byte character */
	ulint		mbmaxlen,	/*!< in: maximum length of a
					multi-byte character */
	const byte*	clust_field,	/*!< in: the locally stored part of
					the clustered index column, including
					the BLOB pointer; the clustered
					index record must be covered by
					a lock or a page latch to protect it
					against deletion (rollback or purge) */
	ulint		clust_len,	/*!< in: length of clust_field */
	const byte*	sec_field,	/*!< in: column in secondary index */
	ulint		sec_len,	/*!< in: length of sec_field */
	ulint		zip_size)	/*!< in: compressed page size, or 0 */
{
	ulint	len;
	byte	buf[DICT_MAX_INDEX_COL_LEN];

	len = btr_copy_externally_stored_field_prefix(buf, sizeof buf,
						      zip_size,
						      clust_field, clust_len);

	if (UNIV_UNLIKELY(len == 0)) {
		/* The BLOB was being deleted as the server crashed.
		There should not be any secondary index records
		referring to this clustered index record, because
		btr_free_externally_stored_field() is called after all
		secondary index entries of the row have been purged. */
		return(FALSE);
	}

	len = dtype_get_at_most_n_mbchars(prtype, mbminlen, mbmaxlen,
					  sec_len, len, (const char*) buf);

	return(!cmp_data_data(mtype, prtype, buf, len, sec_field, sec_len));
}

/********************************************************************//**
Returns TRUE if the user-defined column values in a secondary index record
are alphabetically the same as the corresponding columns in the clustered
index record.
NOTE: the comparison is NOT done as a binary comparison, but character
fields are compared with collation!
@return TRUE if the secondary record is equal to the corresponding
fields in the clustered record, when compared with collation */
static
ibool
row_sel_sec_rec_is_for_clust_rec(
/*=============================*/
	const rec_t*	sec_rec,	/*!< in: secondary index record */
	dict_index_t*	sec_index,	/*!< in: secondary index */
	const rec_t*	clust_rec,	/*!< in: clustered index record;
					must be protected by a lock or
					a page latch against deletion
					in rollback or purge */
	dict_index_t*	clust_index)	/*!< in: clustered index */
{
	const byte*	sec_field;
	ulint		sec_len;
	const byte*	clust_field;
	ulint		n;
	ulint		i;
	mem_heap_t*	heap		= NULL;
	ulint		clust_offsets_[REC_OFFS_NORMAL_SIZE];
	ulint		sec_offsets_[REC_OFFS_SMALL_SIZE];
	ulint*		clust_offs	= clust_offsets_;
	ulint*		sec_offs	= sec_offsets_;
	ibool		is_equal	= TRUE;

	rec_offs_init(clust_offsets_);
	rec_offs_init(sec_offsets_);

	if (rec_get_deleted_flag(clust_rec,
				 dict_table_is_comp(clust_index->table))) {

		/* The clustered index record is delete-marked;
		it is not visible in the read view. Besides,
		if there are any externally stored columns,
		some of them may have already been purged. */
		return(FALSE);
	}

	clust_offs = rec_get_offsets(clust_rec, clust_index, clust_offs,
				     ULINT_UNDEFINED, &heap);
	sec_offs = rec_get_offsets(sec_rec, sec_index, sec_offs,
				   ULINT_UNDEFINED, &heap);

	n = dict_index_get_n_ordering_defined_by_user(sec_index);

	for (i = 0; i < n; i++) {
		const dict_field_t*	ifield;
		const dict_col_t*	col;
		ulint			clust_pos;
		ulint			clust_len;
		ulint			len;

		ifield = dict_index_get_nth_field(sec_index, i);
		col = dict_field_get_col(ifield);
		clust_pos = dict_col_get_clust_pos(col, clust_index);

		clust_field = rec_get_nth_field(
			clust_rec, clust_offs, clust_pos, &clust_len);
		sec_field = rec_get_nth_field(sec_rec, sec_offs, i, &sec_len);

		len = clust_len;

		if (ifield->prefix_len > 0 && len != UNIV_SQL_NULL) {

			if (rec_offs_nth_extern(clust_offs, clust_pos)) {
				len -= BTR_EXTERN_FIELD_REF_SIZE;
			}

			len = dtype_get_at_most_n_mbchars(
				col->prtype, col->mbminlen, col->mbmaxlen,
				ifield->prefix_len, len, (char*) clust_field);

			if (rec_offs_nth_extern(clust_offs, clust_pos)
			    && len < sec_len) {
				if (!row_sel_sec_rec_is_for_blob(
					    col->mtype, col->prtype,
					    col->mbminlen, col->mbmaxlen,
					    clust_field, clust_len,
					    sec_field, sec_len,
					    dict_table_zip_size(
						    clust_index->table))) {
					goto inequal;
				}

				continue;
			}
		}

		if (0 != cmp_data_data(col->mtype, col->prtype,
				       clust_field, len,
				       sec_field, sec_len)) {
inequal:
			is_equal = FALSE;
			goto func_exit;
		}
	}

func_exit:
	if (UNIV_LIKELY_NULL(heap)) {
		mem_heap_free(heap);
	}
	return(is_equal);
}

/*********************************************************************//**
Creates a select node struct.
@return own: select node struct */
UNIV_INTERN
sel_node_t*
sel_node_create(
/*============*/
	mem_heap_t*	heap)	/*!< in: memory heap where created */
{
	sel_node_t*	node;

	node = mem_heap_alloc(heap, sizeof(sel_node_t));
	node->common.type = QUE_NODE_SELECT;
	node->state = SEL_NODE_OPEN;

	node->plans = NULL;

	return(node);
}

/*********************************************************************//**
Frees the memory private to a select node when a query graph is freed,
does not free the heap where the node was originally created. */
UNIV_INTERN
void
sel_node_free_private(
/*==================*/
	sel_node_t*	node)	/*!< in: select node struct */
{
	ulint	i;
	plan_t*	plan;

	if (node->plans != NULL) {
		for (i = 0; i < node->n_tables; i++) {
			plan = sel_node_get_nth_plan(node, i);

			btr_pcur_close(&(plan->pcur));
			btr_pcur_close(&(plan->clust_pcur));

			if (plan->old_vers_heap) {
				mem_heap_free(plan->old_vers_heap);
			}
		}
	}
}

/*********************************************************************//**
Evaluates the values in a select list. If there are aggregate functions,
their argument value is added to the aggregate total. */
UNIV_INLINE
void
sel_eval_select_list(
/*=================*/
	sel_node_t*	node)	/*!< in: select node */
{
	que_node_t*	exp;

	exp = node->select_list;

	while (exp) {
		eval_exp(exp);

		exp = que_node_get_next(exp);
	}
}

/*********************************************************************//**
Assigns the values in the select list to the possible into-variables in
SELECT ... INTO ... */
UNIV_INLINE
void
sel_assign_into_var_values(
/*=======================*/
	sym_node_t*	var,	/*!< in: first variable in a list of variables */
	sel_node_t*	node)	/*!< in: select node */
{
	que_node_t*	exp;

	if (var == NULL) {

		return;
	}

	exp = node->select_list;

	while (var) {
		ut_ad(exp);

		eval_node_copy_val(var->alias, exp);

		exp = que_node_get_next(exp);
		var = que_node_get_next(var);
	}
}

/*********************************************************************//**
Resets the aggregate value totals in the select list of an aggregate type
query. */
UNIV_INLINE
void
sel_reset_aggregate_vals(
/*=====================*/
	sel_node_t*	node)	/*!< in: select node */
{
	func_node_t*	func_node;

	ut_ad(node->is_aggregate);

	func_node = node->select_list;

	while (func_node) {
		eval_node_set_int_val(func_node, 0);

		func_node = que_node_get_next(func_node);
	}

	node->aggregate_already_fetched = FALSE;
}

/*********************************************************************//**
Copies the input variable values when an explicit cursor is opened. */
UNIV_INLINE
void
row_sel_copy_input_variable_vals(
/*=============================*/
	sel_node_t*	node)	/*!< in: select node */
{
	sym_node_t*	var;

	var = UT_LIST_GET_FIRST(node->copy_variables);

	while (var) {
		eval_node_copy_val(var, var->alias);

		var->indirection = NULL;

		var = UT_LIST_GET_NEXT(col_var_list, var);
	}
}

/*********************************************************************//**
Fetches the column values from a record. */
static
void
row_sel_fetch_columns(
/*==================*/
	dict_index_t*	index,	/*!< in: record index */
	const rec_t*	rec,	/*!< in: record in a clustered or non-clustered
				index; must be protected by a page latch */
	const ulint*	offsets,/*!< in: rec_get_offsets(rec, index) */
	sym_node_t*	column)	/*!< in: first column in a column list, or
				NULL */
{
	dfield_t*	val;
	ulint		index_type;
	ulint		field_no;
	const byte*	data;
	ulint		len;

	ut_ad(rec_offs_validate(rec, index, offsets));

	if (dict_index_is_clust(index)) {
		index_type = SYM_CLUST_FIELD_NO;
	} else {
		index_type = SYM_SEC_FIELD_NO;
	}

	while (column) {
		mem_heap_t*	heap = NULL;
		ibool		needs_copy;

		field_no = column->field_nos[index_type];

		if (field_no != ULINT_UNDEFINED) {

			if (UNIV_UNLIKELY(rec_offs_nth_extern(offsets,
							      field_no))) {

				/* Copy an externally stored field to the
				temporary heap */

				heap = mem_heap_create(1);

				data = btr_rec_copy_externally_stored_field(
					rec, offsets,
					dict_table_zip_size(index->table),
					field_no, &len, heap);

				ut_a(len != UNIV_SQL_NULL);

				needs_copy = TRUE;
			} else {
				data = rec_get_nth_field(rec, offsets,
							 field_no, &len);

				if (len == UNIV_SQL_NULL) {
					len = UNIV_SQL_NULL;
				}

				needs_copy = column->copy_val;
			}

			if (needs_copy) {
				eval_node_copy_and_alloc_val(column, data,
							     len);
			} else {
				val = que_node_get_val(column);
				dfield_set_data(val, data, len);
			}

			if (UNIV_LIKELY_NULL(heap)) {
				mem_heap_free(heap);
			}
		}

		column = UT_LIST_GET_NEXT(col_var_list, column);
	}
}

/*********************************************************************//**
Allocates a prefetch buffer for a column when prefetch is first time done. */
static
void
sel_col_prefetch_buf_alloc(
/*=======================*/
	sym_node_t*	column)	/*!< in: symbol table node for a column */
{
	sel_buf_t*	sel_buf;
	ulint		i;

	ut_ad(que_node_get_type(column) == QUE_NODE_SYMBOL);

	column->prefetch_buf = mem_alloc(SEL_MAX_N_PREFETCH
					 * sizeof(sel_buf_t));
	for (i = 0; i < SEL_MAX_N_PREFETCH; i++) {
		sel_buf = column->prefetch_buf + i;

		sel_buf->data = NULL;

		sel_buf->val_buf_size = 0;
	}
}

/*********************************************************************//**
Frees a prefetch buffer for a column, including the dynamically allocated
memory for data stored there. */
UNIV_INTERN
void
sel_col_prefetch_buf_free(
/*======================*/
	sel_buf_t*	prefetch_buf)	/*!< in, own: prefetch buffer */
{
	sel_buf_t*	sel_buf;
	ulint		i;

	for (i = 0; i < SEL_MAX_N_PREFETCH; i++) {
		sel_buf = prefetch_buf + i;

		if (sel_buf->val_buf_size > 0) {

			mem_free(sel_buf->data);
		}
	}
}

/*********************************************************************//**
Pops the column values for a prefetched, cached row from the column prefetch
buffers and places them to the val fields in the column nodes. */
static
void
sel_pop_prefetched_row(
/*===================*/
	plan_t*	plan)	/*!< in: plan node for a table */
{
	sym_node_t*	column;
	sel_buf_t*	sel_buf;
	dfield_t*	val;
	byte*		data;
	ulint		len;
	ulint		val_buf_size;

	ut_ad(plan->n_rows_prefetched > 0);

	column = UT_LIST_GET_FIRST(plan->columns);

	while (column) {
		val = que_node_get_val(column);

		if (!column->copy_val) {
			/* We did not really push any value for the
			column */

			ut_ad(!column->prefetch_buf);
			ut_ad(que_node_get_val_buf_size(column) == 0);
			ut_d(dfield_set_null(val));

			goto next_col;
		}

		ut_ad(column->prefetch_buf);
		ut_ad(!dfield_is_ext(val));

		sel_buf = column->prefetch_buf + plan->first_prefetched;

		data = sel_buf->data;
		len = sel_buf->len;
		val_buf_size = sel_buf->val_buf_size;

		/* We must keep track of the allocated memory for
		column values to be able to free it later: therefore
		we swap the values for sel_buf and val */

		sel_buf->data = dfield_get_data(val);
		sel_buf->len = dfield_get_len(val);
		sel_buf->val_buf_size = que_node_get_val_buf_size(column);

		dfield_set_data(val, data, len);
		que_node_set_val_buf_size(column, val_buf_size);
next_col:
		column = UT_LIST_GET_NEXT(col_var_list, column);
	}

	plan->n_rows_prefetched--;

	plan->first_prefetched++;
}

/*********************************************************************//**
Pushes the column values for a prefetched, cached row to the column prefetch
buffers from the val fields in the column nodes. */
UNIV_INLINE
void
sel_push_prefetched_row(
/*====================*/
	plan_t*	plan)	/*!< in: plan node for a table */
{
	sym_node_t*	column;
	sel_buf_t*	sel_buf;
	dfield_t*	val;
	byte*		data;
	ulint		len;
	ulint		pos;
	ulint		val_buf_size;

	if (plan->n_rows_prefetched == 0) {
		pos = 0;
		plan->first_prefetched = 0;
	} else {
		pos = plan->n_rows_prefetched;

		/* We have the convention that pushing new rows starts only
		after the prefetch stack has been emptied: */

		ut_ad(plan->first_prefetched == 0);
	}

	plan->n_rows_prefetched++;

	ut_ad(pos < SEL_MAX_N_PREFETCH);

	column = UT_LIST_GET_FIRST(plan->columns);

	while (column) {
		if (!column->copy_val) {
			/* There is no sense to push pointers to database
			page fields when we do not keep latch on the page! */

			goto next_col;
		}

		if (!column->prefetch_buf) {
			/* Allocate a new prefetch buffer */

			sel_col_prefetch_buf_alloc(column);
		}

		sel_buf = column->prefetch_buf + pos;

		val = que_node_get_val(column);

		data = dfield_get_data(val);
		len = dfield_get_len(val);
		val_buf_size = que_node_get_val_buf_size(column);

		/* We must keep track of the allocated memory for
		column values to be able to free it later: therefore
		we swap the values for sel_buf and val */

		dfield_set_data(val, sel_buf->data, sel_buf->len);
		que_node_set_val_buf_size(column, sel_buf->val_buf_size);

		sel_buf->data = data;
		sel_buf->len = len;
		sel_buf->val_buf_size = val_buf_size;
next_col:
		column = UT_LIST_GET_NEXT(col_var_list, column);
	}
}
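
/* Note: sel_push_prefetched_row() and sel_pop_prefetched_row() together
maintain a per-column cache of at most SEL_MAX_N_PREFETCH rows. Neither
function copies column values; both swap the data pointer, length and buffer
size between the column node and the prefetch slot, so the dynamically
allocated value buffers are reused and can later be freed with
sel_col_prefetch_buf_free(). */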

/*********************************************************************//**
Builds a previous version of a clustered index record for a consistent read
@return DB_SUCCESS or error code */
static
ulint
row_sel_build_prev_vers(
/*====================*/
	read_view_t*	read_view,	/*!< in: read view */
	dict_index_t*	index,		/*!< in: plan node for table */
	rec_t*		rec,		/*!< in: record in a clustered index */
	ulint**		offsets,	/*!< in/out: offsets returned by
					rec_get_offsets(rec, plan->index) */
	mem_heap_t**	offset_heap,	/*!< in/out: memory heap from which
					the offsets are allocated */
	mem_heap_t**	old_vers_heap,	/*!< out: old version heap to use */
	rec_t**		old_vers,	/*!< out: old version, or NULL if the
					record does not exist in the view:
					i.e., it was freshly inserted
					afterwards */
	mtr_t*		mtr)		/*!< in: mtr */
{
	ulint	err;

	if (*old_vers_heap) {
		mem_heap_empty(*old_vers_heap);
	} else {
		*old_vers_heap = mem_heap_create(512);
	}

	err = row_vers_build_for_consistent_read(
		rec, mtr, index, offsets, read_view, offset_heap,
		*old_vers_heap, old_vers);
	return(err);
}

/*********************************************************************//**
Builds the last committed version of a clustered index record for a
semi-consistent read.
@return DB_SUCCESS or error code */
static
ulint
row_sel_build_committed_vers_for_mysql(
/*===================================*/
	dict_index_t*	clust_index,	/*!< in: clustered index */
	row_prebuilt_t*	prebuilt,	/*!< in: prebuilt struct */
	const rec_t*	rec,		/*!< in: record in a clustered index */
	ulint**		offsets,	/*!< in/out: offsets returned by
					rec_get_offsets(rec, clust_index) */
	mem_heap_t**	offset_heap,	/*!< in/out: memory heap from which
					the offsets are allocated */
	const rec_t**	old_vers,	/*!< out: old version, or NULL if the
					record does not exist in the view:
					i.e., it was freshly inserted
					afterwards */
	mtr_t*		mtr)		/*!< in: mtr */
{
	ulint	err;

	if (prebuilt->old_vers_heap) {
		mem_heap_empty(prebuilt->old_vers_heap);
	} else {
		prebuilt->old_vers_heap = mem_heap_create(200);
	}

	err = row_vers_build_for_semi_consistent_read(
		rec, mtr, clust_index, offsets, offset_heap,
		prebuilt->old_vers_heap, old_vers);
	return(err);
}
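
/* The two helpers above are close parallels: row_sel_build_prev_vers()
rebuilds the version of the clustered index record that is visible in the
given consistent-read view, whereas row_sel_build_committed_vers_for_mysql()
rebuilds the last committed version for a semi-consistent read done on
behalf of MySQL. Both reuse a cached old-version heap (passed in via
old_vers_heap, or prebuilt->old_vers_heap) instead of allocating a fresh
heap on every call. */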

/*********************************************************************//**
Tests the conditions which determine when the index segment we are searching
through has been exhausted.
@return TRUE if row passed the tests */
UNIV_INLINE
ibool
row_sel_test_end_conds(
/*===================*/
	plan_t*	plan)	/*!< in: plan for the table; the column values must
			already have been retrieved and the right sides of
			comparisons evaluated */
{
	func_node_t*	cond;

	/* All conditions in end_conds are comparisons of a column to an
	expression */

	cond = UT_LIST_GET_FIRST(plan->end_conds);

	while (cond) {
		/* Evaluate the left side of the comparison, i.e., get the
		column value if there is an indirection */

		eval_sym(cond->args);

		/* Do the comparison */

		if (!eval_cmp(cond)) {

			return(FALSE);
		}

		cond = UT_LIST_GET_NEXT(cond_list, cond);
	}

	return(TRUE);
}

/*********************************************************************//**
Tests the other conditions.
@return TRUE if row passed the tests */
UNIV_INLINE
ibool
row_sel_test_other_conds(
/*=====================*/
	plan_t*	plan)	/*!< in: plan for the table; the column values must
			already have been retrieved */
{
	func_node_t*	cond;

	cond = UT_LIST_GET_FIRST(plan->other_conds);

	while (cond) {
		eval_exp(cond);

		if (!eval_node_get_ibool_val(cond)) {

			return(FALSE);
		}

		cond = UT_LIST_GET_NEXT(cond_list, cond);
	}

	return(TRUE);
}

/*********************************************************************//**
Retrieves the clustered index record corresponding to a record in a
non-clustered index. Does the necessary locking.
@return DB_SUCCESS or error code */
static
ulint
row_sel_get_clust_rec(
/*==================*/
	sel_node_t*	node,	/*!< in: select_node */
	plan_t*		plan,	/*!< in: plan node for table */
	rec_t*		rec,	/*!< in: record in a non-clustered index */
	que_thr_t*	thr,	/*!< in: query thread */
	rec_t**		out_rec,/*!< out: clustered record or an old version of
				it, NULL if the old version did not exist
				in the read view, i.e., it was a fresh
				inserted version */
	mtr_t*		mtr)	/*!< in: mtr used to get access to the
				non-clustered record; the same mtr is used to
				access the clustered index */
{
	dict_index_t*	index;
	rec_t*		clust_rec;
	rec_t*		old_vers;
	ulint		err;
	mem_heap_t*	heap		= NULL;
	ulint		offsets_[REC_OFFS_NORMAL_SIZE];
	ulint*		offsets		= offsets_;
	rec_offs_init(offsets_);

	*out_rec = NULL;

	offsets = rec_get_offsets(rec,
				  btr_pcur_get_btr_cur(&plan->pcur)->index,
				  offsets, ULINT_UNDEFINED, &heap);

	row_build_row_ref_fast(plan->clust_ref, plan->clust_map, rec, offsets);

	index = dict_table_get_first_index(plan->table);

	btr_pcur_open_with_no_init(index, plan->clust_ref, PAGE_CUR_LE,
				   BTR_SEARCH_LEAF, &plan->clust_pcur,
				   0, mtr);

	clust_rec = btr_pcur_get_rec(&(plan->clust_pcur));

	/* Note: only if the search ends up on a non-infimum record is the
	low_match value the real match to the search tuple */

	if (!page_rec_is_user_rec(clust_rec)
	    || btr_pcur_get_low_match(&(plan->clust_pcur))
	    < dict_index_get_n_unique(index)) {

		ut_a(rec_get_deleted_flag(rec,
					  dict_table_is_comp(plan->table)));
		ut_a(node->read_view);

		/* In a rare case it is possible that no clust rec is found
		for a delete-marked secondary index record: if in row0umod.c
		in row_undo_mod_remove_clust_low() we have already removed
		the clust rec, while purge is still cleaning and removing
		secondary index records associated with earlier versions of
		the clustered index record. In that case we know that the
		clustered index record did not exist in the read view of
		trx. */

		goto func_exit;
	}

	offsets = rec_get_offsets(clust_rec, index, offsets,
				  ULINT_UNDEFINED, &heap);

	if (!node->read_view) {
		/* Try to place a lock on the index record */

		/* If innodb_locks_unsafe_for_binlog option is used
		or this session is using READ COMMITTED isolation level
		we lock only the record, i.e., next-key locking is
		not used. */
		ulint	lock_type;
		trx_t*	trx;

		trx = thr_get_trx(thr);

		if (srv_locks_unsafe_for_binlog
		    || trx->isolation_level == TRX_ISO_READ_COMMITTED) {
			lock_type = LOCK_REC_NOT_GAP;
		} else {
			lock_type = LOCK_ORDINARY;
		}

		err = lock_clust_rec_read_check_and_lock(
			0, btr_pcur_get_block(&plan->clust_pcur),
			clust_rec, index, offsets,
			node->row_lock_mode, lock_type, thr);

		if (err != DB_SUCCESS) {

			goto err_exit;
		}
	} else {
		/* This is a non-locking consistent read: if necessary, fetch
		a previous version of the record */

		old_vers = NULL;

		if (!lock_clust_rec_cons_read_sees(clust_rec, index, offsets,
						   node->read_view)) {

			err = row_sel_build_prev_vers(
				node->read_view, index, clust_rec,
				&offsets, &heap, &plan->old_vers_heap,
				&old_vers, mtr);

			if (err != DB_SUCCESS) {

				goto err_exit;
			}

			clust_rec = old_vers;

			if (clust_rec == NULL) {
				goto func_exit;
			}
		}

		/* If we had to go to an earlier version of row or the
		secondary index record is delete marked, then it may be that
		the secondary index record corresponding to clust_rec
		(or old_vers) is not rec; in that case we must ignore
		such row because in our snapshot rec would not have existed.
		Remember that from rec we cannot see directly which transaction
		id corresponds to it: we have to go to the clustered index
		record. A query where we want to fetch all rows where
		the secondary index value is in some interval would return
		a wrong result if we would not drop rows which we come to
		visit through secondary index records that would not really
		exist in our snapshot. */

		if ((old_vers
		     || rec_get_deleted_flag(rec, dict_table_is_comp(
						     plan->table)))
		    && !row_sel_sec_rec_is_for_clust_rec(rec, plan->index,
							 clust_rec, index)) {
			goto func_exit;
		}
	}

	/* Fetch the columns needed in test conditions. The clustered
	index record is protected by a page latch that was acquired
	when plan->clust_pcur was positioned. The latch will not be
	released until mtr_commit(mtr). */

	row_sel_fetch_columns(index, clust_rec, offsets,
			      UT_LIST_GET_FIRST(plan->columns));
	*out_rec = clust_rec;

func_exit:
	err = DB_SUCCESS;
err_exit:
	if (UNIV_LIKELY_NULL(heap)) {
		mem_heap_free(heap);
	}
	return(err);
}

/*********************************************************************//**
Sets a lock on a record.
@return DB_SUCCESS or error code */
UNIV_INLINE
ulint
sel_set_rec_lock(
/*=============*/
	const buf_block_t*	block,	/*!< in: buffer block of rec */
	const rec_t*		rec,	/*!< in: record */
	dict_index_t*		index,	/*!< in: index */
	const ulint*		offsets,/*!< in: rec_get_offsets(rec, index) */
	ulint			mode,	/*!< in: lock mode */
	ulint			type,	/*!< in: LOCK_ORDINARY, LOCK_GAP, or
					LOCK_REC_NOT_GAP */
	que_thr_t*		thr)	/*!< in: query thread */
{
	trx_t*	trx;
	ulint	err;

	trx = thr_get_trx(thr);

	if (UT_LIST_GET_LEN(trx->trx_locks) > 10000) {
		if (buf_LRU_buf_pool_running_out()) {

			return(DB_LOCK_TABLE_FULL);
		}
	}

	if (dict_index_is_clust(index)) {
		err = lock_clust_rec_read_check_and_lock(
			0, block, rec, index, offsets, mode, type, thr);
	} else {
		err = lock_sec_rec_read_check_and_lock(
			0, block, rec, index, offsets, mode, type, thr);
	}

	return(err);
}
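
/* The check on trx->trx_locks above acts as a safety valve: once a single
transaction already holds more than 10000 record locks and the buffer pool
is close to running out (buf_LRU_buf_pool_running_out()), the lock request
is refused with DB_LOCK_TABLE_FULL rather than letting the lock heap grow
any further. */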
  790. /*********************************************************************//**
  791. Opens a pcur to a table index. */
  792. static
  793. void
  794. row_sel_open_pcur(
  795. /*==============*/
  796. plan_t* plan, /*!< in: table plan */
  797. ibool search_latch_locked,
  798. /*!< in: TRUE if the thread currently
  799. has the search latch locked in
  800. s-mode */
  801. mtr_t* mtr) /*!< in: mtr */
  802. {
  803. dict_index_t* index;
  804. func_node_t* cond;
  805. que_node_t* exp;
  806. ulint n_fields;
  807. ulint has_search_latch = 0; /* RW_S_LATCH or 0 */
  808. ulint i;
  809. if (search_latch_locked) {
  810. has_search_latch = RW_S_LATCH;
  811. }
  812. index = plan->index;
  813. /* Calculate the value of the search tuple: the exact match columns
  814. get their expressions evaluated when we evaluate the right sides of
  815. end_conds */
  816. cond = UT_LIST_GET_FIRST(plan->end_conds);
  817. while (cond) {
  818. eval_exp(que_node_get_next(cond->args));
  819. cond = UT_LIST_GET_NEXT(cond_list, cond);
  820. }
  821. if (plan->tuple) {
  822. n_fields = dtuple_get_n_fields(plan->tuple);
  823. if (plan->n_exact_match < n_fields) {
  824. /* There is a non-exact match field which must be
  825. evaluated separately */
  826. eval_exp(plan->tuple_exps[n_fields - 1]);
  827. }
  828. for (i = 0; i < n_fields; i++) {
  829. exp = plan->tuple_exps[i];
  830. dfield_copy_data(dtuple_get_nth_field(plan->tuple, i),
  831. que_node_get_val(exp));
  832. }
  833. /* Open pcur to the index */
  834. btr_pcur_open_with_no_init(index, plan->tuple, plan->mode,
  835. BTR_SEARCH_LEAF, &plan->pcur,
  836. has_search_latch, mtr);
  837. } else {
  838. /* Open the cursor to the start or the end of the index
  839. (FALSE: no init) */
  840. btr_pcur_open_at_index_side(plan->asc, index, BTR_SEARCH_LEAF,
  841. &(plan->pcur), FALSE, mtr);
  842. }
  843. ut_ad(plan->n_rows_prefetched == 0);
  844. ut_ad(plan->n_rows_fetched == 0);
  845. ut_ad(plan->cursor_at_end == FALSE);
  846. plan->pcur_is_open = TRUE;
  847. }
  848. /*********************************************************************//**
  849. Restores a stored pcur position to a table index.
  850. @return TRUE if the cursor should be moved to the next record after we
  851. return from this function (moved to the previous, in the case of a
  852. descending cursor) without processing again the current cursor
  853. record */
  854. static
  855. ibool
  856. row_sel_restore_pcur_pos(
  857. /*=====================*/
  858. plan_t* plan, /*!< in: table plan */
  859. mtr_t* mtr) /*!< in: mtr */
  860. {
  861. ibool equal_position;
  862. ulint relative_position;
  863. ut_ad(!plan->cursor_at_end);
  864. relative_position = btr_pcur_get_rel_pos(&(plan->pcur));
  865. equal_position = btr_pcur_restore_position(BTR_SEARCH_LEAF,
  866. &(plan->pcur), mtr);
  867. /* If the cursor is traveling upwards, and relative_position is
  868. (1) BTR_PCUR_BEFORE: this is not allowed, as we did not have a lock
  869. yet on the successor of the page infimum;
  870. (2) BTR_PCUR_AFTER: btr_pcur_restore_position placed the cursor on the
  871. first record GREATER than the predecessor of a page supremum; we have
  872. not yet processed the cursor record: no need to move the cursor to the
  873. next record;
  874. (3) BTR_PCUR_ON: btr_pcur_restore_position placed the cursor on the
  875. last record LESS or EQUAL to the old stored user record; (a) if
  876. equal_position is FALSE, this means that the cursor is now on a record
  877. less than the old user record, and we must move to the next record;
  878. (b) if equal_position is TRUE, then if
  879. plan->stored_cursor_rec_processed is TRUE, we must move to the next
  880. record, else there is no need to move the cursor. */
  881. if (plan->asc) {
  882. if (relative_position == BTR_PCUR_ON) {
  883. if (equal_position) {
  884. return(plan->stored_cursor_rec_processed);
  885. }
  886. return(TRUE);
  887. }
  888. ut_ad(relative_position == BTR_PCUR_AFTER
  889. || relative_position == BTR_PCUR_AFTER_LAST_IN_TREE);
  890. return(FALSE);
  891. }
  892. /* If the cursor is traveling downwards, and relative_position is
  893. (1) BTR_PCUR_BEFORE: btr_pcur_restore_position placed the cursor on
  894. the last record LESS than the successor of a page infimum; we have not
  895. processed the cursor record: no need to move the cursor;
  896. (2) BTR_PCUR_AFTER: btr_pcur_restore_position placed the cursor on the
  897. first record GREATER than the predecessor of a page supremum; we have
  898. processed the cursor record: we should move the cursor to the previous
  899. record;
  900. (3) BTR_PCUR_ON: btr_pcur_restore_position placed the cursor on the
  901. last record LESS or EQUAL to the old stored user record; (a) if
  902. equal_position is FALSE, this means that the cursor is now on a record
  903. less than the old user record, and we need not move to the previous
  904. record; (b) if equal_position is TRUE, then if
  905. plan->stored_cursor_rec_processed is TRUE, we must move to the previous
  906. record, else there is no need to move the cursor. */
  907. if (relative_position == BTR_PCUR_BEFORE
  908. || relative_position == BTR_PCUR_BEFORE_FIRST_IN_TREE) {
  909. return(FALSE);
  910. }
  911. if (relative_position == BTR_PCUR_ON) {
  912. if (equal_position) {
  913. return(plan->stored_cursor_rec_processed);
  914. }
  915. return(FALSE);
  916. }
  917. ut_ad(relative_position == BTR_PCUR_AFTER
  918. || relative_position == BTR_PCUR_AFTER_LAST_IN_TREE);
  919. return(TRUE);
  920. }
  921. /*********************************************************************//**
  922. Resets a plan cursor to a closed state. */
  923. UNIV_INLINE
  924. void
  925. plan_reset_cursor(
  926. /*==============*/
  927. plan_t* plan) /*!< in: plan */
  928. {
  929. plan->pcur_is_open = FALSE;
  930. plan->cursor_at_end = FALSE;
  931. plan->n_rows_fetched = 0;
  932. plan->n_rows_prefetched = 0;
  933. }
  934. /*********************************************************************//**
  935. Tries to do a shortcut to fetch a clustered index record with a unique key,
  936. using the hash index if possible (not always).
  937. @return SEL_FOUND, SEL_EXHAUSTED, SEL_RETRY */
  938. static
  939. ulint
  940. row_sel_try_search_shortcut(
  941. /*========================*/
  942. sel_node_t* node, /*!< in: select node for a consistent read */
  943. plan_t* plan, /*!< in: plan for a unique search in clustered
  944. index */
  945. mtr_t* mtr) /*!< in: mtr */
  946. {
  947. dict_index_t* index;
  948. rec_t* rec;
  949. mem_heap_t* heap = NULL;
  950. ulint offsets_[REC_OFFS_NORMAL_SIZE];
  951. ulint* offsets = offsets_;
  952. ulint ret;
  953. rec_offs_init(offsets_);
  954. index = plan->index;
  955. ut_ad(node->read_view);
  956. ut_ad(plan->unique_search);
  957. ut_ad(!plan->must_get_clust);
  958. #ifdef UNIV_SYNC_DEBUG
  959. ut_ad(rw_lock_own(&btr_search_latch, RW_LOCK_SHARED));
  960. #endif /* UNIV_SYNC_DEBUG */
  961. row_sel_open_pcur(plan, TRUE, mtr);
  962. rec = btr_pcur_get_rec(&(plan->pcur));
  963. if (!page_rec_is_user_rec(rec)) {
  964. return(SEL_RETRY);
  965. }
  966. ut_ad(plan->mode == PAGE_CUR_GE);
  967. /* As the cursor is now placed on a user record after a search with
  968. the mode PAGE_CUR_GE, the up_match field in the cursor tells how many
  969. fields in the user record matched to the search tuple */
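/* Illustrative note: for a unique search, plan->n_exact_match is the
number of key fields that must match exactly. For example, with a
two-column unique key, n_exact_match is 2; if up_match is below that,
no record with the requested key exists and the search is exhausted. */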
  970. if (btr_pcur_get_up_match(&(plan->pcur)) < plan->n_exact_match) {
  971. return(SEL_EXHAUSTED);
  972. }
  973. /* This is a non-locking consistent read: if necessary, fetch
  974. a previous version of the record */
  975. offsets = rec_get_offsets(rec, index, offsets, ULINT_UNDEFINED, &heap);
  976. if (dict_index_is_clust(index)) {
  977. if (!lock_clust_rec_cons_read_sees(rec, index, offsets,
  978. node->read_view)) {
  979. ret = SEL_RETRY;
  980. goto func_exit;
  981. }
  982. } else if (!lock_sec_rec_cons_read_sees(rec, node->read_view)) {
  983. ret = SEL_RETRY;
  984. goto func_exit;
  985. }
  986. /* Test the deleted flag. */
  987. if (rec_get_deleted_flag(rec, dict_table_is_comp(plan->table))) {
  988. ret = SEL_EXHAUSTED;
  989. goto func_exit;
  990. }
  991. /* Fetch the columns needed in test conditions. The index
  992. record is protected by a page latch that was acquired when
  993. plan->pcur was positioned. The latch will not be released
  994. until mtr_commit(mtr). */
  995. row_sel_fetch_columns(index, rec, offsets,
  996. UT_LIST_GET_FIRST(plan->columns));
  997. /* Test the rest of search conditions */
  998. if (!row_sel_test_other_conds(plan)) {
  999. ret = SEL_EXHAUSTED;
  1000. goto func_exit;
  1001. }
  1002. ut_ad(plan->pcur.latch_mode == BTR_SEARCH_LEAF);
  1003. plan->n_rows_fetched++;
  1004. ret = SEL_FOUND;
  1005. func_exit:
  1006. if (UNIV_LIKELY_NULL(heap)) {
  1007. mem_heap_free(heap);
  1008. }
  1009. return(ret);
  1010. }
  1011. /*********************************************************************//**
  1012. Performs a select step.
  1013. @return DB_SUCCESS or error code */
  1014. static
  1015. ulint
  1016. row_sel(
  1017. /*====*/
  1018. sel_node_t* node, /*!< in: select node */
  1019. que_thr_t* thr) /*!< in: query thread */
  1020. {
  1021. dict_index_t* index;
  1022. plan_t* plan;
  1023. mtr_t mtr;
  1024. ibool moved;
  1025. rec_t* rec;
  1026. rec_t* old_vers;
  1027. rec_t* clust_rec;
  1028. ibool search_latch_locked;
  1029. ibool consistent_read;
  1030. /* The following flag becomes TRUE when we are doing a
  1031. consistent read from a non-clustered index and we must look
  1032. at the clustered index to find out the previous delete mark
  1033. state of the non-clustered record: */
  1034. ibool cons_read_requires_clust_rec = FALSE;
  1035. ulint cost_counter = 0;
  1036. ibool cursor_just_opened;
  1037. ibool must_go_to_next;
  1038. ibool mtr_has_extra_clust_latch = FALSE;
  1039. /* TRUE if the search was made using
  1040. a non-clustered index, and we had to
  1041. access the clustered record: now &mtr
  1042. contains a clustered index latch, and
  1043. &mtr must be committed before we move
  1044. to the next non-clustered record */
  1045. ulint found_flag;
  1046. ulint err;
  1047. mem_heap_t* heap = NULL;
  1048. ulint offsets_[REC_OFFS_NORMAL_SIZE];
  1049. ulint* offsets = offsets_;
  1050. rec_offs_init(offsets_);
  1051. ut_ad(thr->run_node == node);
  1052. search_latch_locked = FALSE;
  1053. if (node->read_view) {
1054. /* In consistent reads, we try to make do with the hash index and
1055. avoid the buffer page get. This is to reduce memory bus
  1056. load resulting from semaphore operations. The search latch
  1057. will be s-locked when we access an index with a unique search
  1058. condition, but not locked when we access an index with a
  1059. less selective search condition. */
  1060. consistent_read = TRUE;
  1061. } else {
  1062. consistent_read = FALSE;
  1063. }
  1064. table_loop:
  1065. /* TABLE LOOP
  1066. ----------
  1067. This is the outer major loop in calculating a join. We come here when
  1068. node->fetch_table changes, and after adding a row to aggregate totals
  1069. and, of course, when this function is called. */
  1070. ut_ad(mtr_has_extra_clust_latch == FALSE);
  1071. plan = sel_node_get_nth_plan(node, node->fetch_table);
  1072. index = plan->index;
  1073. if (plan->n_rows_prefetched > 0) {
  1074. sel_pop_prefetched_row(plan);
  1075. goto next_table_no_mtr;
  1076. }
  1077. if (plan->cursor_at_end) {
1078. /* The cursor has already reached the result set end: no more
1079. rows to process for this table cursor, and the prefetch stack
1080. was empty as well */
  1081. ut_ad(plan->pcur_is_open);
  1082. goto table_exhausted_no_mtr;
  1083. }
  1084. /* Open a cursor to index, or restore an open cursor position */
  1085. mtr_start(&mtr);
  1086. if (consistent_read && plan->unique_search && !plan->pcur_is_open
  1087. && !plan->must_get_clust
  1088. && !plan->table->big_rows) {
  1089. if (!search_latch_locked) {
  1090. rw_lock_s_lock(&btr_search_latch);
  1091. search_latch_locked = TRUE;
  1092. } else if (rw_lock_get_writer(&btr_search_latch) == RW_LOCK_WAIT_EX) {
  1093. /* There is an x-latch request waiting: release the
  1094. s-latch for a moment; as an s-latch here is often
  1095. kept for some 10 searches before being released,
  1096. a waiting x-latch request would block other threads
  1097. from acquiring an s-latch for a long time, lowering
  1098. performance significantly in multiprocessors. */
  1099. rw_lock_s_unlock(&btr_search_latch);
  1100. rw_lock_s_lock(&btr_search_latch);
  1101. }
  1102. found_flag = row_sel_try_search_shortcut(node, plan, &mtr);
  1103. if (found_flag == SEL_FOUND) {
  1104. goto next_table;
  1105. } else if (found_flag == SEL_EXHAUSTED) {
  1106. goto table_exhausted;
  1107. }
  1108. ut_ad(found_flag == SEL_RETRY);
  1109. plan_reset_cursor(plan);
  1110. mtr_commit(&mtr);
  1111. mtr_start(&mtr);
  1112. }
  1113. if (search_latch_locked) {
  1114. rw_lock_s_unlock(&btr_search_latch);
  1115. search_latch_locked = FALSE;
  1116. }
  1117. if (!plan->pcur_is_open) {
  1118. /* Evaluate the expressions to build the search tuple and
  1119. open the cursor */
  1120. row_sel_open_pcur(plan, search_latch_locked, &mtr);
  1121. cursor_just_opened = TRUE;
  1122. /* A new search was made: increment the cost counter */
  1123. cost_counter++;
  1124. } else {
  1125. /* Restore pcur position to the index */
  1126. must_go_to_next = row_sel_restore_pcur_pos(plan, &mtr);
  1127. cursor_just_opened = FALSE;
  1128. if (must_go_to_next) {
  1129. /* We have already processed the cursor record: move
  1130. to the next */
  1131. goto next_rec;
  1132. }
  1133. }
  1134. rec_loop:
  1135. /* RECORD LOOP
  1136. -----------
  1137. In this loop we use pcur and try to fetch a qualifying row, and
  1138. also fill the prefetch buffer for this table if n_rows_fetched has
  1139. exceeded a threshold. While we are inside this loop, the following
  1140. holds:
  1141. (1) &mtr is started,
  1142. (2) pcur is positioned and open.
  1143. NOTE that if cursor_just_opened is TRUE here, it means that we came
  1144. to this point right after row_sel_open_pcur. */
  1145. ut_ad(mtr_has_extra_clust_latch == FALSE);
  1146. rec = btr_pcur_get_rec(&(plan->pcur));
  1147. /* PHASE 1: Set a lock if specified */
  1148. if (!node->asc && cursor_just_opened
  1149. && !page_rec_is_supremum(rec)) {
  1150. /* When we open a cursor for a descending search, we must set
  1151. a next-key lock on the successor record: otherwise it would
  1152. be possible to insert new records next to the cursor position,
  1153. and it might be that these new records should appear in the
  1154. search result set, resulting in the phantom problem. */
  1155. if (!consistent_read) {
  1156. /* If innodb_locks_unsafe_for_binlog option is used
  1157. or this session is using READ COMMITTED isolation
  1158. level, we lock only the record, i.e., next-key
  1159. locking is not used. */
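/* Here LOCK_ORDINARY denotes a next-key lock (the record together with
the gap before it), while LOCK_REC_NOT_GAP locks only the record
itself. */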
  1160. rec_t* next_rec = page_rec_get_next(rec);
  1161. ulint lock_type;
  1162. trx_t* trx;
  1163. trx = thr_get_trx(thr);
  1164. offsets = rec_get_offsets(next_rec, index, offsets,
  1165. ULINT_UNDEFINED, &heap);
  1166. if (srv_locks_unsafe_for_binlog
  1167. || trx->isolation_level
  1168. == TRX_ISO_READ_COMMITTED) {
  1169. if (page_rec_is_supremum(next_rec)) {
  1170. goto skip_lock;
  1171. }
  1172. lock_type = LOCK_REC_NOT_GAP;
  1173. } else {
  1174. lock_type = LOCK_ORDINARY;
  1175. }
  1176. err = sel_set_rec_lock(btr_pcur_get_block(&plan->pcur),
  1177. next_rec, index, offsets,
  1178. node->row_lock_mode,
  1179. lock_type, thr);
  1180. if (err != DB_SUCCESS) {
  1181. /* Note that in this case we will store in pcur
  1182. the PREDECESSOR of the record we are waiting
  1183. the lock for */
  1184. goto lock_wait_or_error;
  1185. }
  1186. }
  1187. }
  1188. skip_lock:
  1189. if (page_rec_is_infimum(rec)) {
  1190. /* The infimum record on a page cannot be in the result set,
  1191. and neither can a record lock be placed on it: we skip such
  1192. a record. We also increment the cost counter as we may have
  1193. processed yet another page of index. */
  1194. cost_counter++;
  1195. goto next_rec;
  1196. }
  1197. if (!consistent_read) {
  1198. /* Try to place a lock on the index record */
  1199. /* If innodb_locks_unsafe_for_binlog option is used
  1200. or this session is using READ COMMITTED isolation level,
  1201. we lock only the record, i.e., next-key locking is
  1202. not used. */
  1203. ulint lock_type;
  1204. trx_t* trx;
  1205. offsets = rec_get_offsets(rec, index, offsets,
  1206. ULINT_UNDEFINED, &heap);
  1207. trx = thr_get_trx(thr);
  1208. if (srv_locks_unsafe_for_binlog
  1209. || trx->isolation_level == TRX_ISO_READ_COMMITTED) {
  1210. if (page_rec_is_supremum(rec)) {
  1211. goto next_rec;
  1212. }
  1213. lock_type = LOCK_REC_NOT_GAP;
  1214. } else {
  1215. lock_type = LOCK_ORDINARY;
  1216. }
  1217. err = sel_set_rec_lock(btr_pcur_get_block(&plan->pcur),
  1218. rec, index, offsets,
  1219. node->row_lock_mode, lock_type, thr);
  1220. if (err != DB_SUCCESS) {
  1221. goto lock_wait_or_error;
  1222. }
  1223. }
  1224. if (page_rec_is_supremum(rec)) {
  1225. /* A page supremum record cannot be in the result set: skip
  1226. it now when we have placed a possible lock on it */
  1227. goto next_rec;
  1228. }
  1229. ut_ad(page_rec_is_user_rec(rec));
  1230. if (cost_counter > SEL_COST_LIMIT) {
  1231. /* Now that we have placed the necessary locks, we can stop
  1232. for a while and store the cursor position; NOTE that if we
  1233. would store the cursor position BEFORE placing a record lock,
  1234. it might happen that the cursor would jump over some records
  1235. that another transaction could meanwhile insert adjacent to
  1236. the cursor: this would result in the phantom problem. */
  1237. goto stop_for_a_while;
  1238. }
  1239. /* PHASE 2: Check a mixed index mix id if needed */
  1240. if (plan->unique_search && cursor_just_opened) {
  1241. ut_ad(plan->mode == PAGE_CUR_GE);
  1242. /* As the cursor is now placed on a user record after a search
  1243. with the mode PAGE_CUR_GE, the up_match field in the cursor
  1244. tells how many fields in the user record matched to the search
  1245. tuple */
  1246. if (btr_pcur_get_up_match(&(plan->pcur))
  1247. < plan->n_exact_match) {
  1248. goto table_exhausted;
  1249. }
  1250. /* Ok, no need to test end_conds or mix id */
  1251. }
  1252. /* We are ready to look at a possible new index entry in the result
  1253. set: the cursor is now placed on a user record */
  1254. /* PHASE 3: Get previous version in a consistent read */
  1255. cons_read_requires_clust_rec = FALSE;
  1256. offsets = rec_get_offsets(rec, index, offsets, ULINT_UNDEFINED, &heap);
  1257. if (consistent_read) {
  1258. /* This is a non-locking consistent read: if necessary, fetch
  1259. a previous version of the record */
  1260. if (dict_index_is_clust(index)) {
  1261. if (!lock_clust_rec_cons_read_sees(rec, index, offsets,
  1262. node->read_view)) {
  1263. err = row_sel_build_prev_vers(
  1264. node->read_view, index, rec,
  1265. &offsets, &heap, &plan->old_vers_heap,
  1266. &old_vers, &mtr);
  1267. if (err != DB_SUCCESS) {
  1268. goto lock_wait_or_error;
  1269. }
  1270. if (old_vers == NULL) {
  1271. offsets = rec_get_offsets(
  1272. rec, index, offsets,
  1273. ULINT_UNDEFINED, &heap);
  1274. /* Fetch the columns needed in
  1275. test conditions. The clustered
  1276. index record is protected by a
  1277. page latch that was acquired
  1278. by row_sel_open_pcur() or
  1279. row_sel_restore_pcur_pos().
  1280. The latch will not be released
  1281. until mtr_commit(mtr). */
  1282. row_sel_fetch_columns(
  1283. index, rec, offsets,
  1284. UT_LIST_GET_FIRST(
  1285. plan->columns));
  1286. if (!row_sel_test_end_conds(plan)) {
  1287. goto table_exhausted;
  1288. }
  1289. goto next_rec;
  1290. }
  1291. rec = old_vers;
  1292. }
  1293. } else if (!lock_sec_rec_cons_read_sees(rec,
  1294. node->read_view)) {
  1295. cons_read_requires_clust_rec = TRUE;
  1296. }
  1297. }
  1298. /* PHASE 4: Test search end conditions and deleted flag */
  1299. /* Fetch the columns needed in test conditions. The record is
  1300. protected by a page latch that was acquired by
  1301. row_sel_open_pcur() or row_sel_restore_pcur_pos(). The latch
  1302. will not be released until mtr_commit(mtr). */
  1303. row_sel_fetch_columns(index, rec, offsets,
  1304. UT_LIST_GET_FIRST(plan->columns));
  1305. /* Test the selection end conditions: these can only contain columns
  1306. which already are found in the index, even though the index might be
  1307. non-clustered */
  1308. if (plan->unique_search && cursor_just_opened) {
  1309. /* No test necessary: the test was already made above */
  1310. } else if (!row_sel_test_end_conds(plan)) {
  1311. goto table_exhausted;
  1312. }
  1313. if (rec_get_deleted_flag(rec, dict_table_is_comp(plan->table))
  1314. && !cons_read_requires_clust_rec) {
  1315. /* The record is delete marked: we can skip it if this is
  1316. not a consistent read which might see an earlier version
  1317. of a non-clustered index record */
  1318. if (plan->unique_search) {
  1319. goto table_exhausted;
  1320. }
  1321. goto next_rec;
  1322. }
  1323. /* PHASE 5: Get the clustered index record, if needed and if we did
  1324. not do the search using the clustered index */
  1325. if (plan->must_get_clust || cons_read_requires_clust_rec) {
  1326. /* It was a non-clustered index and we must fetch also the
  1327. clustered index record */
  1328. err = row_sel_get_clust_rec(node, plan, rec, thr, &clust_rec,
  1329. &mtr);
  1330. mtr_has_extra_clust_latch = TRUE;
  1331. if (err != DB_SUCCESS) {
  1332. goto lock_wait_or_error;
  1333. }
  1334. /* Retrieving the clustered record required a search:
  1335. increment the cost counter */
  1336. cost_counter++;
  1337. if (clust_rec == NULL) {
  1338. /* The record did not exist in the read view */
  1339. ut_ad(consistent_read);
  1340. goto next_rec;
  1341. }
  1342. if (rec_get_deleted_flag(clust_rec,
  1343. dict_table_is_comp(plan->table))) {
  1344. /* The record is delete marked: we can skip it */
  1345. goto next_rec;
  1346. }
  1347. if (node->can_get_updated) {
  1348. btr_pcur_store_position(&(plan->clust_pcur), &mtr);
  1349. }
  1350. }
  1351. /* PHASE 6: Test the rest of search conditions */
  1352. if (!row_sel_test_other_conds(plan)) {
  1353. if (plan->unique_search) {
  1354. goto table_exhausted;
  1355. }
  1356. goto next_rec;
  1357. }
  1358. /* PHASE 7: We found a new qualifying row for the current table; push
  1359. the row if prefetch is on, or move to the next table in the join */
  1360. plan->n_rows_fetched++;
  1361. ut_ad(plan->pcur.latch_mode == BTR_SEARCH_LEAF);
  1362. if ((plan->n_rows_fetched <= SEL_PREFETCH_LIMIT)
  1363. || plan->unique_search || plan->no_prefetch
  1364. || plan->table->big_rows) {
  1365. /* No prefetch in operation: go to the next table */
  1366. goto next_table;
  1367. }
  1368. sel_push_prefetched_row(plan);
  1369. if (plan->n_rows_prefetched == SEL_MAX_N_PREFETCH) {
  1370. /* The prefetch buffer is now full */
  1371. sel_pop_prefetched_row(plan);
  1372. goto next_table;
  1373. }
  1374. next_rec:
  1375. ut_ad(!search_latch_locked);
  1376. if (mtr_has_extra_clust_latch) {
  1377. /* We must commit &mtr if we are moving to the next
  1378. non-clustered index record, because we could break the
  1379. latching order if we would access a different clustered
  1380. index page right away without releasing the previous. */
  1381. goto commit_mtr_for_a_while;
  1382. }
  1383. if (node->asc) {
  1384. moved = btr_pcur_move_to_next(&(plan->pcur), &mtr);
  1385. } else {
  1386. moved = btr_pcur_move_to_prev(&(plan->pcur), &mtr);
  1387. }
  1388. if (!moved) {
  1389. goto table_exhausted;
  1390. }
  1391. cursor_just_opened = FALSE;
  1392. /* END OF RECORD LOOP
  1393. ------------------ */
  1394. goto rec_loop;
  1395. next_table:
  1396. /* We found a record which satisfies the conditions: we can move to
  1397. the next table or return a row in the result set */
  1398. ut_ad(btr_pcur_is_on_user_rec(&plan->pcur));
  1399. if (plan->unique_search && !node->can_get_updated) {
  1400. plan->cursor_at_end = TRUE;
  1401. } else {
  1402. ut_ad(!search_latch_locked);
  1403. plan->stored_cursor_rec_processed = TRUE;
  1404. btr_pcur_store_position(&(plan->pcur), &mtr);
  1405. }
  1406. mtr_commit(&mtr);
  1407. mtr_has_extra_clust_latch = FALSE;
  1408. next_table_no_mtr:
  1409. /* If we use 'goto' to this label, it means that the row was popped
  1410. from the prefetched rows stack, and &mtr is already committed */
  1411. if (node->fetch_table + 1 == node->n_tables) {
  1412. sel_eval_select_list(node);
  1413. if (node->is_aggregate) {
  1414. goto table_loop;
  1415. }
  1416. sel_assign_into_var_values(node->into_list, node);
  1417. thr->run_node = que_node_get_parent(node);
  1418. err = DB_SUCCESS;
  1419. goto func_exit;
  1420. }
  1421. node->fetch_table++;
  1422. /* When we move to the next table, we first reset the plan cursor:
  1423. we do not care about resetting it when we backtrack from a table */
  1424. plan_reset_cursor(sel_node_get_nth_plan(node, node->fetch_table));
  1425. goto table_loop;
  1426. table_exhausted:
  1427. /* The table cursor pcur reached the result set end: backtrack to the
  1428. previous table in the join if we do not have cached prefetched rows */
  1429. plan->cursor_at_end = TRUE;
  1430. mtr_commit(&mtr);
  1431. mtr_has_extra_clust_latch = FALSE;
  1432. if (plan->n_rows_prefetched > 0) {
  1433. /* The table became exhausted during a prefetch */
  1434. sel_pop_prefetched_row(plan);
  1435. goto next_table_no_mtr;
  1436. }
  1437. table_exhausted_no_mtr:
  1438. if (node->fetch_table == 0) {
  1439. err = DB_SUCCESS;
  1440. if (node->is_aggregate && !node->aggregate_already_fetched) {
  1441. node->aggregate_already_fetched = TRUE;
  1442. sel_assign_into_var_values(node->into_list, node);
  1443. thr->run_node = que_node_get_parent(node);
  1444. } else {
  1445. node->state = SEL_NODE_NO_MORE_ROWS;
  1446. thr->run_node = que_node_get_parent(node);
  1447. }
  1448. goto func_exit;
  1449. }
  1450. node->fetch_table--;
  1451. goto table_loop;
  1452. stop_for_a_while:
  1453. /* Return control for a while to que_run_threads, so that runaway
  1454. queries can be canceled. NOTE that when we come here, we must, in a
  1455. locking read, have placed the necessary (possibly waiting request)
  1456. record lock on the cursor record or its successor: when we reposition
  1457. the cursor, this record lock guarantees that nobody can meanwhile have
  1458. inserted new records which should have appeared in the result set,
  1459. which would result in the phantom problem. */
  1460. ut_ad(!search_latch_locked);
  1461. plan->stored_cursor_rec_processed = FALSE;
  1462. btr_pcur_store_position(&(plan->pcur), &mtr);
  1463. mtr_commit(&mtr);
  1464. #ifdef UNIV_SYNC_DEBUG
  1465. ut_ad(sync_thread_levels_empty_gen(TRUE));
  1466. #endif /* UNIV_SYNC_DEBUG */
  1467. err = DB_SUCCESS;
  1468. goto func_exit;
  1469. commit_mtr_for_a_while:
  1470. /* Stores the cursor position and commits &mtr; this is used if
  1471. &mtr may contain latches which would break the latching order if
  1472. &mtr would not be committed and the latches released. */
  1473. plan->stored_cursor_rec_processed = TRUE;
  1474. ut_ad(!search_latch_locked);
  1475. btr_pcur_store_position(&(plan->pcur), &mtr);
  1476. mtr_commit(&mtr);
  1477. mtr_has_extra_clust_latch = FALSE;
  1478. #ifdef UNIV_SYNC_DEBUG
  1479. ut_ad(sync_thread_levels_empty_gen(TRUE));
  1480. #endif /* UNIV_SYNC_DEBUG */
  1481. goto table_loop;
  1482. lock_wait_or_error:
  1483. /* See the note at stop_for_a_while: the same holds for this case */
  1484. ut_ad(!btr_pcur_is_before_first_on_page(&plan->pcur) || !node->asc);
  1485. ut_ad(!search_latch_locked);
  1486. plan->stored_cursor_rec_processed = FALSE;
  1487. btr_pcur_store_position(&(plan->pcur), &mtr);
  1488. mtr_commit(&mtr);
  1489. #ifdef UNIV_SYNC_DEBUG
  1490. ut_ad(sync_thread_levels_empty_gen(TRUE));
  1491. #endif /* UNIV_SYNC_DEBUG */
  1492. func_exit:
  1493. if (search_latch_locked) {
  1494. rw_lock_s_unlock(&btr_search_latch);
  1495. }
  1496. if (UNIV_LIKELY_NULL(heap)) {
  1497. mem_heap_free(heap);
  1498. }
  1499. return(err);
  1500. }
  1501. /**********************************************************************//**
  1502. Performs a select step. This is a high-level function used in SQL execution
  1503. graphs.
  1504. @return query thread to run next or NULL */
  1505. UNIV_INTERN
  1506. que_thr_t*
  1507. row_sel_step(
  1508. /*=========*/
  1509. que_thr_t* thr) /*!< in: query thread */
  1510. {
  1511. ulint i_lock_mode;
  1512. sym_node_t* table_node;
  1513. sel_node_t* node;
  1514. ulint err;
  1515. ut_ad(thr);
  1516. node = thr->run_node;
  1517. ut_ad(que_node_get_type(node) == QUE_NODE_SELECT);
1518. /* If this is the first time this node is executed (or when execution
1519. resumes after a wait for a table intention lock), set intention locks
1520. on the tables, or assign a read view */
  1521. if (node->into_list && (thr->prev_node == que_node_get_parent(node))) {
  1522. node->state = SEL_NODE_OPEN;
  1523. }
  1524. if (node->state == SEL_NODE_OPEN) {
  1525. /* It may be that the current session has not yet started
  1526. its transaction, or it has been committed: */
  1527. trx_start_if_not_started(thr_get_trx(thr));
  1528. plan_reset_cursor(sel_node_get_nth_plan(node, 0));
  1529. if (node->consistent_read) {
  1530. /* Assign a read view for the query */
  1531. node->read_view = trx_assign_read_view(
  1532. thr_get_trx(thr));
  1533. } else {
  1534. if (node->set_x_locks) {
  1535. i_lock_mode = LOCK_IX;
  1536. } else {
  1537. i_lock_mode = LOCK_IS;
  1538. }
  1539. table_node = node->table_list;
  1540. while (table_node) {
  1541. err = lock_table(0, table_node->table,
  1542. i_lock_mode, thr);
  1543. if (err != DB_SUCCESS) {
  1544. thr_get_trx(thr)->error_state = err;
  1545. return(NULL);
  1546. }
  1547. table_node = que_node_get_next(table_node);
  1548. }
  1549. }
  1550. /* If this is an explicit cursor, copy stored procedure
  1551. variable values, so that the values cannot change between
  1552. fetches (currently, we copy them also for non-explicit
  1553. cursors) */
  1554. if (node->explicit_cursor
  1555. && UT_LIST_GET_FIRST(node->copy_variables)) {
  1556. row_sel_copy_input_variable_vals(node);
  1557. }
  1558. node->state = SEL_NODE_FETCH;
  1559. node->fetch_table = 0;
  1560. if (node->is_aggregate) {
  1561. /* Reset the aggregate total values */
  1562. sel_reset_aggregate_vals(node);
  1563. }
  1564. }
  1565. err = row_sel(node, thr);
  1566. /* NOTE! if queries are parallelized, the following assignment may
  1567. have problems; the assignment should be made only if thr is the
  1568. only top-level thr in the graph: */
  1569. thr->graph->last_sel_node = node;
  1570. if (err != DB_SUCCESS) {
  1571. thr_get_trx(thr)->error_state = err;
  1572. return(NULL);
  1573. }
  1574. return(thr);
  1575. }
  1576. /**********************************************************************//**
  1577. Performs a fetch for a cursor.
  1578. @return query thread to run next or NULL */
  1579. UNIV_INTERN
  1580. que_thr_t*
  1581. fetch_step(
  1582. /*=======*/
  1583. que_thr_t* thr) /*!< in: query thread */
  1584. {
  1585. sel_node_t* sel_node;
  1586. fetch_node_t* node;
  1587. ut_ad(thr);
  1588. node = thr->run_node;
  1589. sel_node = node->cursor_def;
  1590. ut_ad(que_node_get_type(node) == QUE_NODE_FETCH);
  1591. if (thr->prev_node != que_node_get_parent(node)) {
  1592. if (sel_node->state != SEL_NODE_NO_MORE_ROWS) {
  1593. if (node->into_list) {
  1594. sel_assign_into_var_values(node->into_list,
  1595. sel_node);
  1596. } else {
  1597. void* ret = (*node->func->func)(
  1598. sel_node, node->func->arg);
  1599. if (!ret) {
  1600. sel_node->state
  1601. = SEL_NODE_NO_MORE_ROWS;
  1602. }
  1603. }
  1604. }
  1605. thr->run_node = que_node_get_parent(node);
  1606. return(thr);
  1607. }
  1608. /* Make the fetch node the parent of the cursor definition for
  1609. the time of the fetch, so that execution knows to return to this
  1610. fetch node after a row has been selected or we know that there is
  1611. no row left */
  1612. sel_node->common.parent = node;
  1613. if (sel_node->state == SEL_NODE_CLOSED) {
  1614. fprintf(stderr,
  1615. "InnoDB: Error: fetch called on a closed cursor\n");
  1616. thr_get_trx(thr)->error_state = DB_ERROR;
  1617. return(NULL);
  1618. }
  1619. thr->run_node = sel_node;
  1620. return(thr);
  1621. }
  1622. /****************************************************************//**
  1623. Sample callback function for fetch that prints each row.
  1624. @return always returns non-NULL */
  1625. UNIV_INTERN
  1626. void*
  1627. row_fetch_print(
  1628. /*============*/
  1629. void* row, /*!< in: sel_node_t* */
  1630. void* user_arg) /*!< in: not used */
  1631. {
  1632. sel_node_t* node = row;
  1633. que_node_t* exp;
  1634. ulint i = 0;
  1635. UT_NOT_USED(user_arg);
  1636. fprintf(stderr, "row_fetch_print: row %p\n", row);
  1637. exp = node->select_list;
  1638. while (exp) {
  1639. dfield_t* dfield = que_node_get_val(exp);
  1640. const dtype_t* type = dfield_get_type(dfield);
  1641. fprintf(stderr, " column %lu:\n", (ulong)i);
  1642. dtype_print(type);
  1643. putc('\n', stderr);
  1644. if (dfield_get_len(dfield) != UNIV_SQL_NULL) {
  1645. ut_print_buf(stderr, dfield_get_data(dfield),
  1646. dfield_get_len(dfield));
  1647. putc('\n', stderr);
  1648. } else {
  1649. fputs(" <NULL>;\n", stderr);
  1650. }
  1651. exp = que_node_get_next(exp);
  1652. i++;
  1653. }
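/* Returning an arbitrary non-NULL pointer: in fetch_step() above, a
NULL return value from the fetch callback switches the cursor to
SEL_NODE_NO_MORE_ROWS, so any non-NULL value simply means 'continue
fetching'. */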
  1654. return((void*)42);
  1655. }
  1656. /****************************************************************//**
1657. Callback function for fetch that stores an unsigned 4 byte integer to the
1658. location pointed to by user_arg. The column's type must be DATA_INT,
1659. DATA_UNSIGNED, with length 4.
  1660. @return always returns NULL */
  1661. UNIV_INTERN
  1662. void*
  1663. row_fetch_store_uint4(
  1664. /*==================*/
  1665. void* row, /*!< in: sel_node_t* */
  1666. void* user_arg) /*!< in: data pointer */
  1667. {
  1668. sel_node_t* node = row;
  1669. ib_uint32_t* val = user_arg;
  1670. ulint tmp;
  1671. dfield_t* dfield = que_node_get_val(node->select_list);
  1672. const dtype_t* type = dfield_get_type(dfield);
  1673. ulint len = dfield_get_len(dfield);
  1674. ut_a(dtype_get_mtype(type) == DATA_INT);
  1675. ut_a(dtype_get_prtype(type) & DATA_UNSIGNED);
  1676. ut_a(len == 4);
  1677. tmp = mach_read_from_4(dfield_get_data(dfield));
  1678. *val = (ib_uint32_t) tmp;
  1679. return(NULL);
  1680. }
  1681. /***********************************************************//**
  1682. Prints a row in a select result.
  1683. @return query thread to run next or NULL */
  1684. UNIV_INTERN
  1685. que_thr_t*
  1686. row_printf_step(
  1687. /*============*/
  1688. que_thr_t* thr) /*!< in: query thread */
  1689. {
  1690. row_printf_node_t* node;
  1691. sel_node_t* sel_node;
  1692. que_node_t* arg;
  1693. ut_ad(thr);
  1694. node = thr->run_node;
  1695. sel_node = node->sel_node;
  1696. ut_ad(que_node_get_type(node) == QUE_NODE_ROW_PRINTF);
  1697. if (thr->prev_node == que_node_get_parent(node)) {
  1698. /* Reset the cursor */
  1699. sel_node->state = SEL_NODE_OPEN;
  1700. /* Fetch next row to print */
  1701. thr->run_node = sel_node;
  1702. return(thr);
  1703. }
  1704. if (sel_node->state != SEL_NODE_FETCH) {
  1705. ut_ad(sel_node->state == SEL_NODE_NO_MORE_ROWS);
  1706. /* No more rows to print */
  1707. thr->run_node = que_node_get_parent(node);
  1708. return(thr);
  1709. }
  1710. arg = sel_node->select_list;
  1711. while (arg) {
  1712. dfield_print_also_hex(que_node_get_val(arg));
  1713. fputs(" ::: ", stderr);
  1714. arg = que_node_get_next(arg);
  1715. }
  1716. putc('\n', stderr);
  1717. /* Fetch next row to print */
  1718. thr->run_node = sel_node;
  1719. return(thr);
  1720. }
  1721. /****************************************************************//**
1722. Converts a key value stored in MySQL format to an Innobase dtuple. The last
1723. field of the key value may be just a prefix of a fixed length field: hence
1724. the parameter key_len. But currently we do not allow search keys where the
1725. last field is only a prefix of the full key field length, and we print a
1726. warning if such a key appears. A counterpart of this function is
  1727. ha_innobase::store_key_val_for_row() in ha_innodb.cc. */
  1728. UNIV_INTERN
  1729. void
  1730. row_sel_convert_mysql_key_to_innobase(
  1731. /*==================================*/
  1732. dtuple_t* tuple, /*!< in/out: tuple where to build;
  1733. NOTE: we assume that the type info
  1734. in the tuple is already according
  1735. to index! */
  1736. byte* buf, /*!< in: buffer to use in field
  1737. conversions */
  1738. ulint buf_len, /*!< in: buffer length */
  1739. dict_index_t* index, /*!< in: index of the key value */
  1740. const byte* key_ptr, /*!< in: MySQL key value */
  1741. ulint key_len, /*!< in: MySQL key value length */
  1742. trx_t* trx) /*!< in: transaction */
  1743. {
  1744. byte* original_buf = buf;
  1745. const byte* original_key_ptr = key_ptr;
  1746. dict_field_t* field;
  1747. dfield_t* dfield;
  1748. ulint data_offset;
  1749. ulint data_len;
  1750. ulint data_field_len;
  1751. ibool is_null;
  1752. const byte* key_end;
  1753. ulint n_fields = 0;
  1754. /* For documentation of the key value storage format in MySQL, see
  1755. ha_innobase::store_key_val_for_row() in ha_innodb.cc. */
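/* As a rough illustration (the authoritative description is in
ha_innobase::store_key_val_for_row()), each key field in the MySQL
key buffer consists of:
[1 SQL-NULL indicator byte, present only if the column is nullable]
[2-byte little-endian length, present only for column prefixes and
true VARCHAR fields]
[the column value, padded to the fixed key length of the field].
The loop below walks key_ptr over exactly this layout. */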
  1756. key_end = key_ptr + key_len;
  1757. /* Permit us to access any field in the tuple (ULINT_MAX): */
  1758. dtuple_set_n_fields(tuple, ULINT_MAX);
  1759. dfield = dtuple_get_nth_field(tuple, 0);
  1760. field = dict_index_get_nth_field(index, 0);
  1761. if (UNIV_UNLIKELY(dfield_get_type(dfield)->mtype == DATA_SYS)) {
  1762. /* A special case: we are looking for a position in the
  1763. generated clustered index which InnoDB automatically added
  1764. to a table with no primary key: the first and the only
  1765. ordering column is ROW_ID which InnoDB stored to the key_ptr
  1766. buffer. */
  1767. ut_a(key_len == DATA_ROW_ID_LEN);
  1768. dfield_set_data(dfield, key_ptr, DATA_ROW_ID_LEN);
  1769. dtuple_set_n_fields(tuple, 1);
  1770. return;
  1771. }
  1772. while (key_ptr < key_end) {
  1773. ulint type = dfield_get_type(dfield)->mtype;
  1774. ut_a(field->col->mtype == type);
  1775. data_offset = 0;
  1776. is_null = FALSE;
  1777. if (!(dfield_get_type(dfield)->prtype & DATA_NOT_NULL)) {
  1778. /* The first byte in the field tells if this is
  1779. an SQL NULL value */
  1780. data_offset = 1;
  1781. if (*key_ptr != 0) {
  1782. dfield_set_null(dfield);
  1783. is_null = TRUE;
  1784. }
  1785. }
  1786. /* Calculate data length and data field total length */
  1787. if (type == DATA_BLOB) {
  1788. /* The key field is a column prefix of a BLOB or
  1789. TEXT */
  1790. ut_a(field->prefix_len > 0);
  1791. /* MySQL stores the actual data length to the first 2
  1792. bytes after the optional SQL NULL marker byte. The
  1793. storage format is little-endian, that is, the most
  1794. significant byte at a higher address. In UTF-8, MySQL
  1795. seems to reserve field->prefix_len bytes for
  1796. storing this field in the key value buffer, even
  1797. though the actual value only takes data_len bytes
  1798. from the start. */
  1799. data_len = key_ptr[data_offset]
  1800. + 256 * key_ptr[data_offset + 1];
  1801. data_field_len = data_offset + 2 + field->prefix_len;
  1802. data_offset += 2;
  1803. /* Now that we know the length, we store the column
  1804. value like it would be a fixed char field */
  1805. } else if (field->prefix_len > 0) {
  1806. /* Looks like MySQL pads unused end bytes in the
  1807. prefix with space. Therefore, also in UTF-8, it is ok
  1808. to compare with a prefix containing full prefix_len
  1809. bytes, and no need to take at most prefix_len / 3
  1810. UTF-8 characters from the start.
  1811. If the prefix is used as the upper end of a LIKE
  1812. 'abc%' query, then MySQL pads the end with chars
1813. 0xff. TODO: in that case, does it do any harm to
1814. compare with the full prefix_len bytes? How do
1815. characters 0xff in UTF-8 behave? */
  1816. data_len = field->prefix_len;
  1817. data_field_len = data_offset + data_len;
  1818. } else {
  1819. data_len = dfield_get_type(dfield)->len;
  1820. data_field_len = data_offset + data_len;
  1821. }
  1822. if (UNIV_UNLIKELY
  1823. (dtype_get_mysql_type(dfield_get_type(dfield))
  1824. == DATA_MYSQL_TRUE_VARCHAR)
  1825. && UNIV_LIKELY(type != DATA_INT)) {
  1826. /* In a MySQL key value format, a true VARCHAR is
  1827. always preceded by 2 bytes of a length field.
  1828. dfield_get_type(dfield)->len returns the maximum
  1829. 'payload' len in bytes. That does not include the
  1830. 2 bytes that tell the actual data length.
  1831. We added the check != DATA_INT to make sure we do
  1832. not treat MySQL ENUM or SET as a true VARCHAR! */
  1833. data_len += 2;
  1834. data_field_len += 2;
  1835. }
  1836. /* Storing may use at most data_len bytes of buf */
  1837. if (UNIV_LIKELY(!is_null)) {
  1838. row_mysql_store_col_in_innobase_format(
  1839. dfield, buf,
  1840. FALSE, /* MySQL key value format col */
  1841. key_ptr + data_offset, data_len,
  1842. dict_table_is_comp(index->table));
  1843. buf += data_len;
  1844. }
  1845. key_ptr += data_field_len;
  1846. if (UNIV_UNLIKELY(key_ptr > key_end)) {
  1847. /* The last field in key was not a complete key field
  1848. but a prefix of it.
  1849. Print a warning about this! HA_READ_PREFIX_LAST does
  1850. not currently work in InnoDB with partial-field key
  1851. value prefixes. Since MySQL currently uses a padding
  1852. trick to calculate LIKE 'abc%' type queries there
  1853. should never be partial-field prefixes in searches. */
  1854. ut_print_timestamp(stderr);
  1855. fputs(" InnoDB: Warning: using a partial-field"
  1856. " key prefix in search.\n"
  1857. "InnoDB: ", stderr);
  1858. dict_index_name_print(stderr, trx, index);
  1859. fprintf(stderr, ". Last data field length %lu bytes,\n"
  1860. "InnoDB: key ptr now exceeds"
  1861. " key end by %lu bytes.\n"
  1862. "InnoDB: Key value in the MySQL format:\n",
  1863. (ulong) data_field_len,
  1864. (ulong) (key_ptr - key_end));
  1865. fflush(stderr);
  1866. ut_print_buf(stderr, original_key_ptr, key_len);
  1867. putc('\n', stderr);
  1868. if (!is_null) {
  1869. ulint len = dfield_get_len(dfield);
  1870. dfield_set_len(dfield, len
  1871. - (ulint) (key_ptr - key_end));
  1872. }
  1873. }
  1874. n_fields++;
  1875. field++;
  1876. dfield++;
  1877. }
  1878. ut_a(buf <= original_buf + buf_len);
  1879. /* We set the length of tuple to n_fields: we assume that the memory
  1880. area allocated for it is big enough (usually bigger than n_fields). */
  1881. dtuple_set_n_fields(tuple, n_fields);
  1882. }
  1883. /**************************************************************//**
  1884. Stores the row id to the prebuilt struct. */
  1885. static
  1886. void
  1887. row_sel_store_row_id_to_prebuilt(
  1888. /*=============================*/
  1889. row_prebuilt_t* prebuilt, /*!< in/out: prebuilt */
  1890. const rec_t* index_rec, /*!< in: record */
  1891. const dict_index_t* index, /*!< in: index of the record */
  1892. const ulint* offsets) /*!< in: rec_get_offsets
  1893. (index_rec, index) */
  1894. {
  1895. const byte* data;
  1896. ulint len;
  1897. ut_ad(rec_offs_validate(index_rec, index, offsets));
  1898. data = rec_get_nth_field(
  1899. index_rec, offsets,
  1900. dict_index_get_sys_col_pos(index, DATA_ROW_ID), &len);
  1901. if (UNIV_UNLIKELY(len != DATA_ROW_ID_LEN)) {
  1902. fprintf(stderr,
  1903. "InnoDB: Error: Row id field is"
  1904. " wrong length %lu in ", (ulong) len);
  1905. dict_index_name_print(stderr, prebuilt->trx, index);
  1906. fprintf(stderr, "\n"
  1907. "InnoDB: Field number %lu, record:\n",
  1908. (ulong) dict_index_get_sys_col_pos(index,
  1909. DATA_ROW_ID));
  1910. rec_print_new(stderr, index_rec, offsets);
  1911. putc('\n', stderr);
  1912. ut_error;
  1913. }
  1914. ut_memcpy(prebuilt->row_id, data, len);
  1915. }
  1916. /**************************************************************//**
  1917. Stores a non-SQL-NULL field in the MySQL format. The counterpart of this
  1918. function is row_mysql_store_col_in_innobase_format() in row0mysql.c. */
  1919. static
  1920. void
  1921. row_sel_field_store_in_mysql_format(
  1922. /*================================*/
  1923. byte* dest, /*!< in/out: buffer where to store; NOTE
  1924. that BLOBs are not in themselves
  1925. stored here: the caller must allocate
  1926. and copy the BLOB into buffer before,
  1927. and pass the pointer to the BLOB in
  1928. 'data' */
  1929. const mysql_row_templ_t* templ,
  1930. /*!< in: MySQL column template.
  1931. Its following fields are referenced:
  1932. type, is_unsigned, mysql_col_len,
  1933. mbminlen, mbmaxlen */
  1934. const byte* data, /*!< in: data to store */
  1935. ulint len) /*!< in: length of the data */
  1936. {
  1937. byte* ptr;
  1938. byte* field_end;
  1939. byte* pad_ptr;
  1940. ut_ad(len != UNIV_SQL_NULL);
  1941. switch (templ->type) {
  1942. case DATA_INT:
  1943. /* Convert integer data from Innobase to a little-endian
  1944. format, sign bit restored to normal */
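/* Worked example (assuming a hypothetical 4-byte signed column):
InnoDB stores the value 5 big-endian with the sign bit flipped, i.e.
as 80 00 00 05. The loop below reverses the bytes into dest, giving
05 00 00 80, and the XOR with 128 on the last (most significant)
byte restores the sign bit, yielding 05 00 00 00, the little-endian
value MySQL expects. */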
  1945. ptr = dest + len;
  1946. for (;;) {
  1947. ptr--;
  1948. *ptr = *data;
  1949. if (ptr == dest) {
  1950. break;
  1951. }
  1952. data++;
  1953. }
  1954. if (!templ->is_unsigned) {
  1955. dest[len - 1] = (byte) (dest[len - 1] ^ 128);
  1956. }
  1957. ut_ad(templ->mysql_col_len == len);
  1958. break;
  1959. case DATA_VARCHAR:
  1960. case DATA_VARMYSQL:
  1961. case DATA_BINARY:
  1962. field_end = dest + templ->mysql_col_len;
  1963. if (templ->mysql_type == DATA_MYSQL_TRUE_VARCHAR) {
  1964. /* This is a >= 5.0.3 type true VARCHAR. Store the
  1965. length of the data to the first byte or the first
  1966. two bytes of dest. */
  1967. dest = row_mysql_store_true_var_len(
  1968. dest, len, templ->mysql_length_bytes);
  1969. }
  1970. /* Copy the actual data */
  1971. ut_memcpy(dest, data, len);
  1972. /* Pad with trailing spaces. We pad with spaces also the
  1973. unused end of a >= 5.0.3 true VARCHAR column, just in case
  1974. MySQL expects its contents to be deterministic. */
  1975. pad_ptr = dest + len;
  1976. ut_ad(templ->mbminlen <= templ->mbmaxlen);
  1977. /* We handle UCS2 charset strings differently. */
  1978. if (templ->mbminlen == 2) {
  1979. /* A space char is two bytes, 0x0020 in UCS2 */
  1980. if (len & 1) {
  1981. /* A 0x20 has been stripped from the column.
  1982. Pad it back. */
  1983. if (pad_ptr < field_end) {
  1984. *pad_ptr = 0x20;
  1985. pad_ptr++;
  1986. }
  1987. }
  1988. /* Pad the rest of the string with 0x0020 */
  1989. while (pad_ptr < field_end) {
  1990. *pad_ptr = 0x00;
  1991. pad_ptr++;
  1992. *pad_ptr = 0x20;
  1993. pad_ptr++;
  1994. }
  1995. } else {
  1996. ut_ad(templ->mbminlen == 1);
  1997. /* space=0x20 */
  1998. memset(pad_ptr, 0x20, field_end - pad_ptr);
  1999. }
  2000. break;
  2001. case DATA_BLOB:
  2002. /* Store a pointer to the BLOB buffer to dest: the BLOB was
  2003. already copied to the buffer in row_sel_store_mysql_rec */
  2004. row_mysql_store_blob_ref(dest, templ->mysql_col_len, data,
  2005. len);
  2006. break;
  2007. case DATA_MYSQL:
  2008. memcpy(dest, data, len);
  2009. ut_ad(templ->mysql_col_len >= len);
  2010. ut_ad(templ->mbmaxlen >= templ->mbminlen);
  2011. ut_ad(templ->mbmaxlen > templ->mbminlen
  2012. || templ->mysql_col_len == len);
  2013. /* The following assertion would fail for old tables
  2014. containing UTF-8 ENUM columns due to Bug #9526. */
  2015. ut_ad(!templ->mbmaxlen
  2016. || !(templ->mysql_col_len % templ->mbmaxlen));
  2017. ut_ad(len * templ->mbmaxlen >= templ->mysql_col_len);
  2018. if (templ->mbminlen != templ->mbmaxlen) {
  2019. /* Pad with spaces. This undoes the stripping
  2020. done in row0mysql.ic, function
  2021. row_mysql_store_col_in_innobase_format(). */
  2022. memset(dest + len, 0x20, templ->mysql_col_len - len);
  2023. }
  2024. break;
  2025. default:
  2026. #ifdef UNIV_DEBUG
  2027. case DATA_SYS_CHILD:
  2028. case DATA_SYS:
  2029. /* These column types should never be shipped to MySQL. */
  2030. ut_ad(0);
  2031. case DATA_CHAR:
  2032. case DATA_FIXBINARY:
  2033. case DATA_FLOAT:
  2034. case DATA_DOUBLE:
  2035. case DATA_DECIMAL:
  2036. /* Above are the valid column types for MySQL data. */
  2037. #endif /* UNIV_DEBUG */
  2038. ut_ad(templ->mysql_col_len == len);
  2039. memcpy(dest, data, len);
  2040. }
  2041. }
  2042. /**************************************************************//**
2043. Converts a row in the Innobase format to a row in the MySQL format.
2044. Note that the template in prebuilt may advise us to copy only a few
2045. columns to mysql_rec; the other columns are left blank, since not all
2046. columns may be needed in the query.
  2047. @return TRUE if success, FALSE if could not allocate memory for a BLOB
  2048. (though we may also assert in that case) */
  2049. static
  2050. ibool
  2051. row_sel_store_mysql_rec(
  2052. /*====================*/
  2053. byte* mysql_rec, /*!< out: row in the MySQL format */
  2054. row_prebuilt_t* prebuilt, /*!< in: prebuilt struct */
  2055. const rec_t* rec, /*!< in: Innobase record in the index
  2056. which was described in prebuilt's
  2057. template; must be protected by
  2058. a page latch */
  2059. const ulint* offsets) /*!< in: array returned by
  2060. rec_get_offsets() */
  2061. {
  2062. mysql_row_templ_t* templ;
  2063. mem_heap_t* extern_field_heap = NULL;
  2064. mem_heap_t* heap;
  2065. const byte* data;
  2066. ulint len;
  2067. ulint i;
  2068. ut_ad(prebuilt->mysql_template);
  2069. ut_ad(prebuilt->default_rec);
  2070. ut_ad(rec_offs_validate(rec, NULL, offsets));
  2071. if (UNIV_LIKELY_NULL(prebuilt->blob_heap)) {
  2072. mem_heap_free(prebuilt->blob_heap);
  2073. prebuilt->blob_heap = NULL;
  2074. }
  2075. for (i = 0; i < prebuilt->n_template; i++) {
  2076. templ = prebuilt->mysql_template + i;
  2077. if (UNIV_UNLIKELY(rec_offs_nth_extern(offsets,
  2078. templ->rec_field_no))) {
  2079. /* Copy an externally stored field to the temporary
  2080. heap */
  2081. ut_a(!prebuilt->trx->has_search_latch);
  2082. if (UNIV_UNLIKELY(templ->type == DATA_BLOB)) {
  2083. if (prebuilt->blob_heap == NULL) {
  2084. prebuilt->blob_heap = mem_heap_create(
  2085. UNIV_PAGE_SIZE);
  2086. }
  2087. heap = prebuilt->blob_heap;
  2088. } else {
  2089. extern_field_heap
  2090. = mem_heap_create(UNIV_PAGE_SIZE);
  2091. heap = extern_field_heap;
  2092. }
  2093. /* NOTE: if we are retrieving a big BLOB, we may
  2094. already run out of memory in the next call, which
  2095. causes an assert */
  2096. data = btr_rec_copy_externally_stored_field(
  2097. rec, offsets,
  2098. dict_table_zip_size(prebuilt->table),
  2099. templ->rec_field_no, &len, heap);
  2100. ut_a(len != UNIV_SQL_NULL);
  2101. } else {
  2102. /* Field is stored in the row. */
  2103. data = rec_get_nth_field(rec, offsets,
  2104. templ->rec_field_no, &len);
  2105. if (UNIV_UNLIKELY(templ->type == DATA_BLOB)
  2106. && len != UNIV_SQL_NULL) {
  2107. /* It is a BLOB field locally stored in the
  2108. InnoDB record: we MUST copy its contents to
  2109. prebuilt->blob_heap here because later code
  2110. assumes all BLOB values have been copied to a
  2111. safe place. */
  2112. if (prebuilt->blob_heap == NULL) {
  2113. prebuilt->blob_heap = mem_heap_create(
  2114. UNIV_PAGE_SIZE);
  2115. }
  2116. data = memcpy(mem_heap_alloc(
  2117. prebuilt->blob_heap, len),
  2118. data, len);
  2119. }
  2120. }
  2121. if (len != UNIV_SQL_NULL) {
  2122. row_sel_field_store_in_mysql_format(
  2123. mysql_rec + templ->mysql_col_offset,
  2124. templ, data, len);
  2125. /* Cleanup */
  2126. if (extern_field_heap) {
  2127. mem_heap_free(extern_field_heap);
  2128. extern_field_heap = NULL;
  2129. }
  2130. if (templ->mysql_null_bit_mask) {
  2131. /* It is a nullable column with a non-NULL
  2132. value */
  2133. mysql_rec[templ->mysql_null_byte_offset]
  2134. &= ~(byte) templ->mysql_null_bit_mask;
  2135. }
  2136. } else {
  2137. /* MySQL assumes that the field for an SQL
  2138. NULL value is set to the default value. */
  2139. mysql_rec[templ->mysql_null_byte_offset]
  2140. |= (byte) templ->mysql_null_bit_mask;
  2141. memcpy(mysql_rec + templ->mysql_col_offset,
  2142. (const byte*) prebuilt->default_rec
  2143. + templ->mysql_col_offset,
  2144. templ->mysql_col_len);
  2145. }
  2146. }
  2147. return(TRUE);
  2148. }
  2149. /*********************************************************************//**
2150. Builds a previous version of a clustered index record for a consistent read.
  2151. @return DB_SUCCESS or error code */
  2152. static
  2153. ulint
  2154. row_sel_build_prev_vers_for_mysql(
  2155. /*==============================*/
  2156. read_view_t* read_view, /*!< in: read view */
  2157. dict_index_t* clust_index, /*!< in: clustered index */
  2158. row_prebuilt_t* prebuilt, /*!< in: prebuilt struct */
  2159. const rec_t* rec, /*!< in: record in a clustered index */
  2160. ulint** offsets, /*!< in/out: offsets returned by
  2161. rec_get_offsets(rec, clust_index) */
  2162. mem_heap_t** offset_heap, /*!< in/out: memory heap from which
  2163. the offsets are allocated */
  2164. rec_t** old_vers, /*!< out: old version, or NULL if the
  2165. record does not exist in the view:
  2166. i.e., it was freshly inserted
  2167. afterwards */
  2168. mtr_t* mtr) /*!< in: mtr */
  2169. {
  2170. ulint err;
  2171. if (prebuilt->old_vers_heap) {
  2172. mem_heap_empty(prebuilt->old_vers_heap);
  2173. } else {
  2174. prebuilt->old_vers_heap = mem_heap_create(200);
  2175. }
  2176. err = row_vers_build_for_consistent_read(
  2177. rec, mtr, clust_index, offsets, read_view, offset_heap,
  2178. prebuilt->old_vers_heap, old_vers);
  2179. return(err);
  2180. }
  2181. /*********************************************************************//**
  2182. Retrieves the clustered index record corresponding to a record in a
  2183. non-clustered index. Does the necessary locking. Used in the MySQL
  2184. interface.
  2185. @return DB_SUCCESS or error code */
  2186. static
  2187. ulint
  2188. row_sel_get_clust_rec_for_mysql(
  2189. /*============================*/
  2190. row_prebuilt_t* prebuilt,/*!< in: prebuilt struct in the handle */
  2191. dict_index_t* sec_index,/*!< in: secondary index where rec resides */
  2192. const rec_t* rec, /*!< in: record in a non-clustered index; if
  2193. this is a locking read, then rec is not
  2194. allowed to be delete-marked, and that would
  2195. not make sense either */
  2196. que_thr_t* thr, /*!< in: query thread */
  2197. const rec_t** out_rec,/*!< out: clustered record or an old version of
  2198. it, NULL if the old version did not exist
  2199. in the read view, i.e., it was a fresh
  2200. inserted version */
  2201. ulint** offsets,/*!< in: offsets returned by
  2202. rec_get_offsets(rec, sec_index);
  2203. out: offsets returned by
  2204. rec_get_offsets(out_rec, clust_index) */
  2205. mem_heap_t** offset_heap,/*!< in/out: memory heap from which
  2206. the offsets are allocated */
  2207. mtr_t* mtr) /*!< in: mtr used to get access to the
  2208. non-clustered record; the same mtr is used to
  2209. access the clustered index */
  2210. {
  2211. dict_index_t* clust_index;
  2212. const rec_t* clust_rec;
  2213. rec_t* old_vers;
  2214. ulint err;
  2215. trx_t* trx;
  2216. *out_rec = NULL;
  2217. trx = thr_get_trx(thr);
  2218. row_build_row_ref_in_tuple(prebuilt->clust_ref, rec,
  2219. sec_index, *offsets, trx);
  2220. clust_index = dict_table_get_first_index(sec_index->table);
  2221. btr_pcur_open_with_no_init(clust_index, prebuilt->clust_ref,
  2222. PAGE_CUR_LE, BTR_SEARCH_LEAF,
  2223. prebuilt->clust_pcur, 0, mtr);
  2224. clust_rec = btr_pcur_get_rec(prebuilt->clust_pcur);
  2225. prebuilt->clust_pcur->trx_if_known = trx;
  2226. /* Note: only if the search ends up on a non-infimum record is the
  2227. low_match value the real match to the search tuple */
  2228. if (!page_rec_is_user_rec(clust_rec)
  2229. || btr_pcur_get_low_match(prebuilt->clust_pcur)
  2230. < dict_index_get_n_unique(clust_index)) {
  2231. /* In a rare case it is possible that no clust rec is found
  2232. for a delete-marked secondary index record: if in row0umod.c
  2233. in row_undo_mod_remove_clust_low() we have already removed
  2234. the clust rec, while purge is still cleaning and removing
  2235. secondary index records associated with earlier versions of
  2236. the clustered index record. In that case we know that the
  2237. clustered index record did not exist in the read view of
  2238. trx. */
  2239. if (!rec_get_deleted_flag(rec,
  2240. dict_table_is_comp(sec_index->table))
  2241. || prebuilt->select_lock_type != LOCK_NONE) {
  2242. ut_print_timestamp(stderr);
  2243. fputs(" InnoDB: error clustered record"
  2244. " for sec rec not found\n"
  2245. "InnoDB: ", stderr);
  2246. dict_index_name_print(stderr, trx, sec_index);
  2247. fputs("\n"
  2248. "InnoDB: sec index record ", stderr);
  2249. rec_print(stderr, rec, sec_index);
  2250. fputs("\n"
  2251. "InnoDB: clust index record ", stderr);
  2252. rec_print(stderr, clust_rec, clust_index);
  2253. putc('\n', stderr);
  2254. trx_print(stderr, trx, 600);
  2255. fputs("\n"
  2256. "InnoDB: Submit a detailed bug report"
  2257. " to http://bugs.mysql.com\n", stderr);
  2258. }
  2259. clust_rec = NULL;
  2260. goto func_exit;
  2261. }
  2262. *offsets = rec_get_offsets(clust_rec, clust_index, *offsets,
  2263. ULINT_UNDEFINED, offset_heap);
  2264. if (prebuilt->select_lock_type != LOCK_NONE) {
  2265. /* Try to place a lock on the index record; we are searching
  2266. the clust rec with a unique condition, hence
  2267. we set a LOCK_REC_NOT_GAP type lock */
  2268. err = lock_clust_rec_read_check_and_lock(
  2269. 0, btr_pcur_get_block(prebuilt->clust_pcur),
  2270. clust_rec, clust_index, *offsets,
  2271. prebuilt->select_lock_type, LOCK_REC_NOT_GAP, thr);
  2272. if (err != DB_SUCCESS) {
  2273. goto err_exit;
  2274. }
  2275. } else {
  2276. /* This is a non-locking consistent read: if necessary, fetch
  2277. a previous version of the record */
  2278. old_vers = NULL;
  2279. /* If the isolation level allows reading of uncommitted data,
  2280. then we never look for an earlier version */
  2281. if (trx->isolation_level > TRX_ISO_READ_UNCOMMITTED
  2282. && !lock_clust_rec_cons_read_sees(
  2283. clust_rec, clust_index, *offsets,
  2284. trx->read_view)) {
  2285. /* The following call returns 'offsets' associated with
  2286. 'old_vers' */
  2287. err = row_sel_build_prev_vers_for_mysql(
  2288. trx->read_view, clust_index, prebuilt,
  2289. clust_rec, offsets, offset_heap, &old_vers,
  2290. mtr);
  2291. if (err != DB_SUCCESS || old_vers == NULL) {
  2292. goto err_exit;
  2293. }
  2294. clust_rec = old_vers;
  2295. }
2296. /* If we had to go to an earlier version of the row or the
  2297. secondary index record is delete marked, then it may be that
  2298. the secondary index record corresponding to clust_rec
  2299. (or old_vers) is not rec; in that case we must ignore
  2300. such row because in our snapshot rec would not have existed.
  2301. Remember that from rec we cannot see directly which transaction
  2302. id corresponds to it: we have to go to the clustered index
  2303. record. A query where we want to fetch all rows where
  2304. the secondary index value is in some interval would return
  2305. a wrong result if we would not drop rows which we come to
  2306. visit through secondary index records that would not really
  2307. exist in our snapshot. */
  2308. if (clust_rec
  2309. && (old_vers
  2310. || rec_get_deleted_flag(rec, dict_table_is_comp(
  2311. sec_index->table)))
  2312. && !row_sel_sec_rec_is_for_clust_rec(
  2313. rec, sec_index, clust_rec, clust_index)) {
  2314. clust_rec = NULL;
  2315. #ifdef UNIV_SEARCH_DEBUG
  2316. } else {
  2317. ut_a(clust_rec == NULL
  2318. || row_sel_sec_rec_is_for_clust_rec(
  2319. rec, sec_index, clust_rec, clust_index));
  2320. #endif
  2321. }
  2322. }
  2323. func_exit:
  2324. *out_rec = clust_rec;
  2325. if (prebuilt->select_lock_type != LOCK_NONE) {
  2326. /* We may use the cursor in update or in unlock_row():
  2327. store its position */
  2328. btr_pcur_store_position(prebuilt->clust_pcur, mtr);
  2329. }
  2330. err = DB_SUCCESS;
  2331. err_exit:
  2332. return(err);
  2333. }
  2334. /********************************************************************//**
2335. Restores cursor position after it has been stored. We have to take into
2336. account that the record the cursor was positioned on may have been
2337. deleted. Then we may have to move the cursor one step up or down.
  2338. @return TRUE if we may need to process the record the cursor is now
  2339. positioned on (i.e. we should not go to the next record yet) */
  2340. static
  2341. ibool
  2342. sel_restore_position_for_mysql(
  2343. /*===========================*/
  2344. ibool* same_user_rec, /*!< out: TRUE if we were able to restore
  2345. the cursor on a user record with the
2346. same ordering prefix in the
  2347. B-tree index */
  2348. ulint latch_mode, /*!< in: latch mode wished in
  2349. restoration */
  2350. btr_pcur_t* pcur, /*!< in: cursor whose position
  2351. has been stored */
  2352. ibool moves_up, /*!< in: TRUE if the cursor moves up
  2353. in the index */
  2354. mtr_t* mtr) /*!< in: mtr; CAUTION: may commit
  2355. mtr temporarily! */
  2356. {
  2357. ibool success;
  2358. ulint relative_position;
  2359. relative_position = pcur->rel_pos;
  2360. success = btr_pcur_restore_position(latch_mode, pcur, mtr);
  2361. *same_user_rec = success;
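/* The case analysis below can be summarized as follows: if the stored
position was ON a user record and exactly that record was restored,
the caller has already processed it and may move on (return FALSE);
in all other cases the cursor may now rest on an unprocessed record,
so we return TRUE after nudging the cursor forward or backward as
the scan direction requires. */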
  2362. if (relative_position == BTR_PCUR_ON) {
  2363. if (success) {
  2364. return(FALSE);
  2365. }
  2366. if (moves_up) {
  2367. btr_pcur_move_to_next(pcur, mtr);
  2368. }
  2369. return(TRUE);
  2370. }
  2371. if (relative_position == BTR_PCUR_AFTER
  2372. || relative_position == BTR_PCUR_AFTER_LAST_IN_TREE) {
  2373. if (moves_up) {
  2374. return(TRUE);
  2375. }
  2376. if (btr_pcur_is_on_user_rec(pcur)) {
  2377. btr_pcur_move_to_prev(pcur, mtr);
  2378. }
  2379. return(TRUE);
  2380. }
  2381. ut_ad(relative_position == BTR_PCUR_BEFORE
  2382. || relative_position == BTR_PCUR_BEFORE_FIRST_IN_TREE);
  2383. if (moves_up && btr_pcur_is_on_user_rec(pcur)) {
  2384. btr_pcur_move_to_next(pcur, mtr);
  2385. }
  2386. return(TRUE);
  2387. }
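/* Example of the contract above: if the position was stored while the
cursor was on a user record (BTR_PCUR_ON) and that record has meanwhile
been purged, restoration cannot land on the same record; the function
then returns TRUE, telling the caller to process the record the cursor
is now positioned on (after the optional move above) instead of going
straight to the next record. */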
  2388. /********************************************************************//**
  2389. Pops a cached row for MySQL from the fetch cache. */
  2390. UNIV_INLINE
  2391. void
  2392. row_sel_pop_cached_row_for_mysql(
  2393. /*=============================*/
  2394. byte* buf, /*!< in/out: buffer where to copy the
  2395. row */
  2396. row_prebuilt_t* prebuilt) /*!< in: prebuilt struct */
  2397. {
  2398. ulint i;
  2399. mysql_row_templ_t* templ;
  2400. byte* cached_rec;
  2401. ut_ad(prebuilt->n_fetch_cached > 0);
  2402. ut_ad(prebuilt->mysql_prefix_len <= prebuilt->mysql_row_len);
  2403. if (UNIV_UNLIKELY(prebuilt->keep_other_fields_on_keyread)) {
  2404. /* Copy cache record field by field, don't touch fields that
  2405. are not covered by current key */
  2406. cached_rec = prebuilt->fetch_cache[
  2407. prebuilt->fetch_cache_first];
  2408. for (i = 0; i < prebuilt->n_template; i++) {
  2409. templ = prebuilt->mysql_template + i;
  2410. ut_memcpy(buf + templ->mysql_col_offset,
  2411. cached_rec + templ->mysql_col_offset,
  2412. templ->mysql_col_len);
  2413. /* Copy NULL bit of the current field from cached_rec
  2414. to buf */
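/* The statement below is the branch-free idiom
buf = (buf & ~mask) | (cached_rec & mask)
written with XOR: only the bits selected by mysql_null_bit_mask
are taken from cached_rec, all other bits of the NULL byte in
buf are left untouched. */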
  2415. if (templ->mysql_null_bit_mask) {
  2416. buf[templ->mysql_null_byte_offset]
  2417. ^= (buf[templ->mysql_null_byte_offset]
  2418. ^ cached_rec[templ->mysql_null_byte_offset])
  2419. & (byte)templ->mysql_null_bit_mask;
  2420. }
  2421. }
  2422. }
  2423. else {
  2424. ut_memcpy(buf,
  2425. prebuilt->fetch_cache[prebuilt->fetch_cache_first],
  2426. prebuilt->mysql_prefix_len);
  2427. }
  2428. prebuilt->n_fetch_cached--;
  2429. prebuilt->fetch_cache_first++;
  2430. if (prebuilt->n_fetch_cached == 0) {
  2431. prebuilt->fetch_cache_first = 0;
  2432. }
  2433. }
  2434. /********************************************************************//**
  2435. Pushes a row for MySQL to the fetch cache. */
  2436. UNIV_INLINE
  2437. void
  2438. row_sel_push_cache_row_for_mysql(
  2439. /*=============================*/
  2440. row_prebuilt_t* prebuilt, /*!< in: prebuilt struct */
  2441. const rec_t* rec, /*!< in: record to push; must
  2442. be protected by a page latch */
  2443. const ulint* offsets) /*!< in: rec_get_offsets() */
  2444. {
  2445. byte* buf;
  2446. ulint i;
  2447. ut_ad(prebuilt->n_fetch_cached < MYSQL_FETCH_CACHE_SIZE);
  2448. ut_ad(rec_offs_validate(rec, NULL, offsets));
  2449. ut_a(!prebuilt->templ_contains_blob);
  2450. if (prebuilt->fetch_cache[0] == NULL) {
  2451. /* Allocate memory for the fetch cache */
  2452. for (i = 0; i < MYSQL_FETCH_CACHE_SIZE; i++) {
  2453. /* A user has reported memory corruption in these
2454. buffers in Linux. Put magic numbers there to help
2455. track a possible bug. */
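/* Each cache slot thus has the layout
[magic (4 bytes) | row image (mysql_row_len bytes) | magic (4 bytes)]
and prebuilt->fetch_cache[i] points just past the leading magic. */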
  2456. buf = mem_alloc(prebuilt->mysql_row_len + 8);
  2457. prebuilt->fetch_cache[i] = buf + 4;
  2458. mach_write_to_4(buf, ROW_PREBUILT_FETCH_MAGIC_N);
  2459. mach_write_to_4(buf + 4 + prebuilt->mysql_row_len,
  2460. ROW_PREBUILT_FETCH_MAGIC_N);
  2461. }
  2462. }
  2463. ut_ad(prebuilt->fetch_cache_first == 0);
  2464. if (UNIV_UNLIKELY(!row_sel_store_mysql_rec(
  2465. prebuilt->fetch_cache[
  2466. prebuilt->n_fetch_cached],
  2467. prebuilt, rec, offsets))) {
  2468. ut_error;
  2469. }
  2470. prebuilt->n_fetch_cached++;
  2471. }
  2472. /*********************************************************************//**
  2473. Tries to do a shortcut to fetch a clustered index record with a unique key,
  2474. using the hash index if possible (not always). We assume that the search
  2475. mode is PAGE_CUR_GE, it is a consistent read, there is a read view in trx,
  2476. btr search latch has been locked in S-mode.
  2477. @return SEL_FOUND, SEL_EXHAUSTED, SEL_RETRY */
  2478. static
  2479. ulint
  2480. row_sel_try_search_shortcut_for_mysql(
  2481. /*==================================*/
  2482. const rec_t** out_rec,/*!< out: record if found */
  2483. row_prebuilt_t* prebuilt,/*!< in: prebuilt struct */
  2484. ulint** offsets,/*!< in/out: for rec_get_offsets(*out_rec) */
  2485. mem_heap_t** heap, /*!< in/out: heap for rec_get_offsets() */
  2486. mtr_t* mtr) /*!< in: started mtr */
  2487. {
  2488. dict_index_t* index = prebuilt->index;
  2489. const dtuple_t* search_tuple = prebuilt->search_tuple;
  2490. btr_pcur_t* pcur = prebuilt->pcur;
  2491. trx_t* trx = prebuilt->trx;
  2492. const rec_t* rec;
  2493. ut_ad(dict_index_is_clust(index));
  2494. ut_ad(!prebuilt->templ_contains_blob);
  2495. btr_pcur_open_with_no_init(index, search_tuple, PAGE_CUR_GE,
  2496. BTR_SEARCH_LEAF, pcur,
  2497. #ifndef UNIV_SEARCH_DEBUG
  2498. RW_S_LATCH,
  2499. #else
  2500. 0,
  2501. #endif
  2502. mtr);
  2503. rec = btr_pcur_get_rec(pcur);
  2504. if (!page_rec_is_user_rec(rec)) {
  2505. return(SEL_RETRY);
  2506. }
  2507. /* As the cursor is now placed on a user record after a search with
  2508. the mode PAGE_CUR_GE, the up_match field in the cursor tells how many
  2509. fields in the user record matched to the search tuple */
  2510. if (btr_pcur_get_up_match(pcur) < dtuple_get_n_fields(search_tuple)) {
  2511. return(SEL_EXHAUSTED);
  2512. }
  2513. /* This is a non-locking consistent read: if necessary, fetch
  2514. a previous version of the record */
  2515. *offsets = rec_get_offsets(rec, index, *offsets,
  2516. ULINT_UNDEFINED, heap);
  2517. if (!lock_clust_rec_cons_read_sees(rec, index,
  2518. *offsets, trx->read_view)) {
  2519. return(SEL_RETRY);
  2520. }
  2521. if (rec_get_deleted_flag(rec, dict_table_is_comp(index->table))) {
  2522. return(SEL_EXHAUSTED);
  2523. }
  2524. *out_rec = rec;
  2525. return(SEL_FOUND);
  2526. }
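/* row_search_for_mysql() below consumes these return codes as follows:
SEL_FOUND - the row is copied to the MySQL buffer and DB_SUCCESS is
returned (unless the row does not fit, in which case the shortcut is
abandoned), SEL_EXHAUSTED - DB_RECORD_NOT_FOUND is returned, and
SEL_RETRY - the shortcut is abandoned and the normal search path,
PHASE 3 onwards, is used instead. */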
  2527. /********************************************************************//**
  2528. Searches for rows in the database. This is used in the interface to
  2529. MySQL. This function opens a cursor, and also implements fetch next
  2530. and fetch prev. NOTE that if we do a search with a full key value
  2531. from a unique index (ROW_SEL_EXACT), then we will not store the cursor
2532. position, and fetch next or fetch prev must not be tried on the cursor!
  2533. @return DB_SUCCESS, DB_RECORD_NOT_FOUND, DB_END_OF_INDEX, DB_DEADLOCK,
  2534. DB_LOCK_TABLE_FULL, DB_CORRUPTION, or DB_TOO_BIG_RECORD */
  2535. UNIV_INTERN
  2536. ulint
  2537. row_search_for_mysql(
  2538. /*=================*/
  2539. byte* buf, /*!< in/out: buffer for the fetched
  2540. row in the MySQL format */
  2541. ulint mode, /*!< in: search mode PAGE_CUR_L, ... */
  2542. row_prebuilt_t* prebuilt, /*!< in: prebuilt struct for the
  2543. table handle; this contains the info
  2544. of search_tuple, index; if search
  2545. tuple contains 0 fields then we
  2546. position the cursor at the start or
  2547. the end of the index, depending on
  2548. 'mode' */
  2549. ulint match_mode, /*!< in: 0 or ROW_SEL_EXACT or
  2550. ROW_SEL_EXACT_PREFIX */
  2551. ulint direction) /*!< in: 0 or ROW_SEL_NEXT or
  2552. ROW_SEL_PREV; NOTE: if this is != 0,
  2553. then prebuilt must have a pcur
  2554. with stored position! In opening of a
  2555. cursor 'direction' should be 0. */
  2556. {
  2557. dict_index_t* index = prebuilt->index;
  2558. ibool comp = dict_table_is_comp(index->table);
  2559. const dtuple_t* search_tuple = prebuilt->search_tuple;
  2560. btr_pcur_t* pcur = prebuilt->pcur;
  2561. trx_t* trx = prebuilt->trx;
  2562. dict_index_t* clust_index;
  2563. que_thr_t* thr;
  2564. const rec_t* rec;
  2565. const rec_t* result_rec;
  2566. const rec_t* clust_rec;
  2567. ulint err = DB_SUCCESS;
  2568. ibool unique_search = FALSE;
  2569. ibool unique_search_from_clust_index = FALSE;
  2570. ibool mtr_has_extra_clust_latch = FALSE;
  2571. ibool moves_up = FALSE;
  2572. ibool set_also_gap_locks = TRUE;
  2573. /* if the query is a plain locking SELECT, and the isolation level
  2574. is <= TRX_ISO_READ_COMMITTED, then this is set to FALSE */
  2575. ibool did_semi_consistent_read = FALSE;
  2576. /* if the returned record was locked and we did a semi-consistent
  2577. read (fetch the newest committed version), then this is set to
  2578. TRUE */
  2579. #ifdef UNIV_SEARCH_DEBUG
  2580. ulint cnt = 0;
  2581. #endif /* UNIV_SEARCH_DEBUG */
  2582. ulint next_offs;
  2583. ibool same_user_rec;
  2584. mtr_t mtr;
  2585. mem_heap_t* heap = NULL;
  2586. ulint offsets_[REC_OFFS_NORMAL_SIZE];
  2587. ulint* offsets = offsets_;
  2588. rec_offs_init(offsets_);
  2589. ut_ad(index && pcur && search_tuple);
  2590. ut_ad(trx->mysql_thread_id == os_thread_get_curr_id());
  2591. if (UNIV_UNLIKELY(prebuilt->table->ibd_file_missing)) {
  2592. ut_print_timestamp(stderr);
  2593. fprintf(stderr, " InnoDB: Error:\n"
  2594. "InnoDB: MySQL is trying to use a table handle"
  2595. " but the .ibd file for\n"
  2596. "InnoDB: table %s does not exist.\n"
  2597. "InnoDB: Have you deleted the .ibd file"
  2598. " from the database directory under\n"
  2599. "InnoDB: the MySQL datadir, or have you used"
  2600. " DISCARD TABLESPACE?\n"
  2601. "InnoDB: Look from\n"
  2602. "InnoDB: " REFMAN "innodb-troubleshooting.html\n"
  2603. "InnoDB: how you can resolve the problem.\n",
  2604. prebuilt->table->name);
  2605. return(DB_ERROR);
  2606. }
  2607. if (UNIV_UNLIKELY(!prebuilt->index_usable)) {
  2608. return(DB_MISSING_HISTORY);
  2609. }
  2610. if (UNIV_UNLIKELY(prebuilt->magic_n != ROW_PREBUILT_ALLOCATED)) {
  2611. fprintf(stderr,
  2612. "InnoDB: Error: trying to free a corrupt\n"
  2613. "InnoDB: table handle. Magic n %lu, table name ",
  2614. (ulong) prebuilt->magic_n);
  2615. ut_print_name(stderr, trx, TRUE, prebuilt->table->name);
  2616. putc('\n', stderr);
  2617. mem_analyze_corruption(prebuilt);
  2618. ut_error;
  2619. }
  2620. #if 0
  2621. /* August 19, 2005 by Heikki: temporarily disable this error
  2622. print until the cursor lock count is done correctly.
  2623. See bugs #12263 and #12456!*/
  2624. if (trx->n_mysql_tables_in_use == 0
  2625. && UNIV_UNLIKELY(prebuilt->select_lock_type == LOCK_NONE)) {
  2626. /* Note that if MySQL uses an InnoDB temp table that it
  2627. created inside LOCK TABLES, then n_mysql_tables_in_use can
  2628. be zero; in that case select_lock_type is set to LOCK_X in
  2629. ::start_stmt. */
  2630. fputs("InnoDB: Error: MySQL is trying to perform a SELECT\n"
  2631. "InnoDB: but it has not locked"
  2632. " any tables in ::external_lock()!\n",
  2633. stderr);
  2634. trx_print(stderr, trx, 600);
  2635. fputc('\n', stderr);
  2636. }
  2637. #endif
  2638. #if 0
  2639. fprintf(stderr, "Match mode %lu\n search tuple ",
  2640. (ulong) match_mode);
  2641. dtuple_print(search_tuple);
  2642. fprintf(stderr, "N tables locked %lu\n",
  2643. (ulong) trx->mysql_n_tables_locked);
  2644. #endif
  2645. /*-------------------------------------------------------------*/
  2646. /* PHASE 0: Release a possible s-latch we are holding on the
  2647. adaptive hash index latch if there is someone waiting behind */
  2648. if (UNIV_UNLIKELY(rw_lock_get_writer(&btr_search_latch) != RW_LOCK_NOT_LOCKED)
  2649. && trx->has_search_latch) {
  2650. /* There is an x-latch request on the adaptive hash index:
  2651. release the s-latch to reduce starvation and wait for
  2652. BTR_SEA_TIMEOUT rounds before trying to keep it again over
  2653. calls from MySQL */
  2654. rw_lock_s_unlock(&btr_search_latch);
  2655. trx->has_search_latch = FALSE;
  2656. trx->search_latch_timeout = BTR_SEA_TIMEOUT;
  2657. }
  2658. /* Reset the new record lock info if srv_locks_unsafe_for_binlog
2659. is set or the session is using a READ COMMITTED isolation level. Then
  2660. we are able to remove the record locks set here on an individual
  2661. row. */
  2662. prebuilt->new_rec_locks = 0;
  2663. /*-------------------------------------------------------------*/
  2664. /* PHASE 1: Try to pop the row from the prefetch cache */
  2665. if (UNIV_UNLIKELY(direction == 0)) {
  2666. trx->op_info = "starting index read";
  2667. prebuilt->n_rows_fetched = 0;
  2668. prebuilt->n_fetch_cached = 0;
  2669. prebuilt->fetch_cache_first = 0;
  2670. if (prebuilt->sel_graph == NULL) {
  2671. /* Build a dummy select query graph */
  2672. row_prebuild_sel_graph(prebuilt);
  2673. }
  2674. } else {
  2675. trx->op_info = "fetching rows";
  2676. if (prebuilt->n_rows_fetched == 0) {
  2677. prebuilt->fetch_direction = direction;
  2678. }
  2679. if (UNIV_UNLIKELY(direction != prebuilt->fetch_direction)) {
  2680. if (UNIV_UNLIKELY(prebuilt->n_fetch_cached > 0)) {
  2681. ut_error;
  2682. /* TODO: scrollable cursor: restore cursor to
  2683. the place of the latest returned row,
  2684. or better: prevent caching for a scroll
  2685. cursor! */
  2686. }
  2687. prebuilt->n_rows_fetched = 0;
  2688. prebuilt->n_fetch_cached = 0;
  2689. prebuilt->fetch_cache_first = 0;
  2690. } else if (UNIV_LIKELY(prebuilt->n_fetch_cached > 0)) {
  2691. row_sel_pop_cached_row_for_mysql(buf, prebuilt);
  2692. prebuilt->n_rows_fetched++;
  2693. srv_n_rows_read++;
  2694. err = DB_SUCCESS;
  2695. goto func_exit;
  2696. }
  2697. if (prebuilt->fetch_cache_first > 0
  2698. && prebuilt->fetch_cache_first < MYSQL_FETCH_CACHE_SIZE) {
  2699. /* The previous returned row was popped from the fetch
  2700. cache, but the cache was not full at the time of the
  2701. popping: no more rows can exist in the result set */
  2702. err = DB_RECORD_NOT_FOUND;
  2703. goto func_exit;
  2704. }
  2705. prebuilt->n_rows_fetched++;
  2706. if (prebuilt->n_rows_fetched > 1000000000) {
  2707. /* Prevent wrap-over */
  2708. prebuilt->n_rows_fetched = 500000000;
  2709. }
  2710. mode = pcur->search_mode;
  2711. }
  2712. /* In a search where at most one record in the index may match, we
  2713. can use a LOCK_REC_NOT_GAP type record lock when locking a
  2714. non-delete-marked matching record.
  2715. Note that in a unique secondary index there may be different
  2716. delete-marked versions of a record where only the primary key
  2717. values differ: thus in a secondary index we must use next-key
  2718. locks when locking delete-marked records. */
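/* For example, with UNIQUE KEY(b), deleting the row with b = 5 and
re-inserting b = 5 can leave a delete-marked entry (b = 5, pk = 1) next
to the live entry (b = 5, pk = 2): several index records then carry the
same unique value, so a plain LOCK_REC_NOT_GAP on one delete-marked
record would not protect the whole range. */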
  2719. if (match_mode == ROW_SEL_EXACT
  2720. && dict_index_is_unique(index)
  2721. && dtuple_get_n_fields(search_tuple)
  2722. == dict_index_get_n_unique(index)
  2723. && (dict_index_is_clust(index)
  2724. || !dtuple_contains_null(search_tuple))) {
  2725. /* Note above that a UNIQUE secondary index can contain many
  2726. rows with the same key value if one of the columns is the SQL
  2727. null. A clustered index under MySQL can never contain null
  2728. columns because we demand that all the columns in primary key
  2729. are non-null. */
  2730. unique_search = TRUE;
  2731. /* Even if the condition is unique, MySQL seems to try to
2732. also retrieve a second row if a primary key contains more than
  2733. 1 column. Return immediately if this is not a HANDLER
  2734. command. */
  2735. if (UNIV_UNLIKELY(direction != 0
  2736. && !prebuilt->used_in_HANDLER)) {
  2737. err = DB_RECORD_NOT_FOUND;
  2738. goto func_exit;
  2739. }
  2740. }
  2741. mtr_start(&mtr);
  2742. /*-------------------------------------------------------------*/
  2743. /* PHASE 2: Try fast adaptive hash index search if possible */
  2744. /* Next test if this is the special case where we can use the fast
  2745. adaptive hash index to try the search. Since we must release the
  2746. search system latch when we retrieve an externally stored field, we
  2747. cannot use the adaptive hash index in a search in the case the row
  2748. may be long and there may be externally stored fields */
  2749. if (UNIV_UNLIKELY(direction == 0)
  2750. && unique_search
  2751. && dict_index_is_clust(index)
  2752. && !prebuilt->templ_contains_blob
  2753. && !prebuilt->used_in_HANDLER
  2754. && (prebuilt->mysql_row_len < UNIV_PAGE_SIZE / 8)) {
  2755. mode = PAGE_CUR_GE;
  2756. unique_search_from_clust_index = TRUE;
  2757. if (trx->mysql_n_tables_locked == 0
  2758. && prebuilt->select_lock_type == LOCK_NONE
  2759. && trx->isolation_level > TRX_ISO_READ_UNCOMMITTED
  2760. && trx->read_view) {
  2761. /* This is a SELECT query done as a consistent read,
  2762. and the read view has already been allocated:
  2763. let us try a search shortcut through the hash
  2764. index.
  2765. NOTE that we must also test that
  2766. mysql_n_tables_locked == 0, because this might
  2767. also be INSERT INTO ... SELECT ... or
  2768. CREATE TABLE ... SELECT ... . Our algorithm is
2769. NOT prepared for inserts interleaved with the SELECT,
  2770. and if we try that, we can deadlock on the adaptive
  2771. hash index semaphore! */
  2772. #ifndef UNIV_SEARCH_DEBUG
  2773. if (!trx->has_search_latch) {
  2774. rw_lock_s_lock(&btr_search_latch);
  2775. trx->has_search_latch = TRUE;
  2776. }
  2777. #endif
  2778. switch (row_sel_try_search_shortcut_for_mysql(
  2779. &rec, prebuilt, &offsets, &heap,
  2780. &mtr)) {
  2781. case SEL_FOUND:
  2782. #ifdef UNIV_SEARCH_DEBUG
  2783. ut_a(0 == cmp_dtuple_rec(search_tuple,
  2784. rec, offsets));
  2785. #endif
  2786. /* At this point, rec is protected by
  2787. a page latch that was acquired by
  2788. row_sel_try_search_shortcut_for_mysql().
  2789. The latch will not be released until
  2790. mtr_commit(&mtr). */
  2791. if (!row_sel_store_mysql_rec(buf, prebuilt,
  2792. rec, offsets)) {
  2793. err = DB_TOO_BIG_RECORD;
2794. /* We let the main loop do the
  2795. error handling */
  2796. goto shortcut_fails_too_big_rec;
  2797. }
  2798. mtr_commit(&mtr);
  2799. /* ut_print_name(stderr, index->name);
  2800. fputs(" shortcut\n", stderr); */
  2801. srv_n_rows_read++;
  2802. err = DB_SUCCESS;
  2803. goto release_search_latch_if_needed;
  2804. case SEL_EXHAUSTED:
  2805. mtr_commit(&mtr);
  2806. /* ut_print_name(stderr, index->name);
  2807. fputs(" record not found 2\n", stderr); */
  2808. err = DB_RECORD_NOT_FOUND;
  2809. release_search_latch_if_needed:
  2810. if (trx->search_latch_timeout > 0
  2811. && trx->has_search_latch) {
  2812. trx->search_latch_timeout--;
  2813. rw_lock_s_unlock(&btr_search_latch);
  2814. trx->has_search_latch = FALSE;
  2815. }
  2816. /* NOTE that we do NOT store the cursor
  2817. position */
  2818. goto func_exit;
  2819. case SEL_RETRY:
  2820. break;
  2821. default:
  2822. ut_ad(0);
  2823. }
  2824. shortcut_fails_too_big_rec:
  2825. mtr_commit(&mtr);
  2826. mtr_start(&mtr);
  2827. }
  2828. }
  2829. /*-------------------------------------------------------------*/
  2830. /* PHASE 3: Open or restore index cursor position */
  2831. if (trx->has_search_latch) {
  2832. rw_lock_s_unlock(&btr_search_latch);
  2833. trx->has_search_latch = FALSE;
  2834. }
  2835. trx_start_if_not_started(trx);
  2836. if (trx->isolation_level <= TRX_ISO_READ_COMMITTED
  2837. && prebuilt->select_lock_type != LOCK_NONE
  2838. && trx->mysql_thd != NULL
  2839. && thd_is_select(trx->mysql_thd)) {
  2840. /* It is a plain locking SELECT and the isolation
  2841. level is low: do not lock gaps */
  2842. set_also_gap_locks = FALSE;
  2843. }
  2844. /* Note that if the search mode was GE or G, then the cursor
  2845. naturally moves upward (in fetch next) in alphabetical order,
  2846. otherwise downward */
  2847. if (UNIV_UNLIKELY(direction == 0)) {
  2848. if (mode == PAGE_CUR_GE || mode == PAGE_CUR_G) {
  2849. moves_up = TRUE;
  2850. }
  2851. } else if (direction == ROW_SEL_NEXT) {
  2852. moves_up = TRUE;
  2853. }
  2854. thr = que_fork_get_first_thr(prebuilt->sel_graph);
  2855. que_thr_move_to_run_state_for_mysql(thr, trx);
  2856. clust_index = dict_table_get_first_index(index->table);
  2857. if (UNIV_LIKELY(direction != 0)) {
  2858. ibool need_to_process = sel_restore_position_for_mysql(
  2859. &same_user_rec, BTR_SEARCH_LEAF,
  2860. pcur, moves_up, &mtr);
  2861. if (UNIV_UNLIKELY(need_to_process)) {
  2862. if (UNIV_UNLIKELY(prebuilt->row_read_type
  2863. == ROW_READ_DID_SEMI_CONSISTENT)) {
  2864. /* We did a semi-consistent read,
  2865. but the record was removed in
  2866. the meantime. */
  2867. prebuilt->row_read_type
  2868. = ROW_READ_TRY_SEMI_CONSISTENT;
  2869. }
  2870. } else if (UNIV_LIKELY(prebuilt->row_read_type
  2871. != ROW_READ_DID_SEMI_CONSISTENT)) {
  2872. /* The cursor was positioned on the record
  2873. that we returned previously. If we need
  2874. to repeat a semi-consistent read as a
  2875. pessimistic locking read, the record
  2876. cannot be skipped. */
  2877. goto next_rec;
  2878. }
  2879. } else if (dtuple_get_n_fields(search_tuple) > 0) {
  2880. btr_pcur_open_with_no_init(index, search_tuple, mode,
  2881. BTR_SEARCH_LEAF,
  2882. pcur, 0, &mtr);
  2883. pcur->trx_if_known = trx;
  2884. rec = btr_pcur_get_rec(pcur);
  2885. if (!moves_up
  2886. && !page_rec_is_supremum(rec)
  2887. && set_also_gap_locks
  2888. && !(srv_locks_unsafe_for_binlog
  2889. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  2890. && prebuilt->select_lock_type != LOCK_NONE) {
  2891. /* Try to place a gap lock on the next index record
  2892. to prevent phantoms in ORDER BY ... DESC queries */
  2893. const rec_t* next = page_rec_get_next_const(rec);
  2894. offsets = rec_get_offsets(next, index, offsets,
  2895. ULINT_UNDEFINED, &heap);
  2896. err = sel_set_rec_lock(btr_pcur_get_block(pcur),
  2897. next, index, offsets,
  2898. prebuilt->select_lock_type,
  2899. LOCK_GAP, thr);
  2900. if (err != DB_SUCCESS) {
  2901. goto lock_wait_or_error;
  2902. }
  2903. }
  2904. } else {
  2905. if (mode == PAGE_CUR_G) {
  2906. btr_pcur_open_at_index_side(
  2907. TRUE, index, BTR_SEARCH_LEAF, pcur, FALSE,
  2908. &mtr);
  2909. } else if (mode == PAGE_CUR_L) {
  2910. btr_pcur_open_at_index_side(
  2911. FALSE, index, BTR_SEARCH_LEAF, pcur, FALSE,
  2912. &mtr);
  2913. }
  2914. }
  2915. if (!prebuilt->sql_stat_start) {
  2916. /* No need to set an intention lock or assign a read view */
  2917. if (trx->read_view == NULL
  2918. && prebuilt->select_lock_type == LOCK_NONE) {
  2919. fputs("InnoDB: Error: MySQL is trying to"
  2920. " perform a consistent read\n"
  2921. "InnoDB: but the read view is not assigned!\n",
  2922. stderr);
  2923. trx_print(stderr, trx, 600);
  2924. fputc('\n', stderr);
  2925. ut_a(0);
  2926. }
  2927. } else if (prebuilt->select_lock_type == LOCK_NONE) {
  2928. /* This is a consistent read */
  2929. /* Assign a read view for the query */
  2930. trx_assign_read_view(trx);
  2931. prebuilt->sql_stat_start = FALSE;
  2932. } else {
  2933. ulint lock_mode;
  2934. if (prebuilt->select_lock_type == LOCK_S) {
  2935. lock_mode = LOCK_IS;
  2936. } else {
  2937. lock_mode = LOCK_IX;
  2938. }
  2939. err = lock_table(0, index->table, lock_mode, thr);
  2940. if (err != DB_SUCCESS) {
  2941. goto lock_wait_or_error;
  2942. }
  2943. prebuilt->sql_stat_start = FALSE;
  2944. }
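/* Summary of the block above: in the middle of a statement nothing is
done here; at the start of a statement a consistent read gets a read
view assigned, whereas a locking read first sets an IS or IX intention
lock on the table before any record locks are requested. */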
  2945. rec_loop:
  2946. /*-------------------------------------------------------------*/
  2947. /* PHASE 4: Look for matching records in a loop */
  2948. rec = btr_pcur_get_rec(pcur);
  2949. ut_ad(!!page_rec_is_comp(rec) == comp);
  2950. #ifdef UNIV_SEARCH_DEBUG
  2951. /*
  2952. fputs("Using ", stderr);
  2953. dict_index_name_print(stderr, index);
  2954. fprintf(stderr, " cnt %lu ; Page no %lu\n", cnt,
  2955. page_get_page_no(page_align(rec)));
  2956. rec_print(rec);
  2957. */
  2958. #endif /* UNIV_SEARCH_DEBUG */
  2959. if (page_rec_is_infimum(rec)) {
  2960. /* The infimum record on a page cannot be in the result set,
  2961. and neither can a record lock be placed on it: we skip such
  2962. a record. */
  2963. goto next_rec;
  2964. }
  2965. if (page_rec_is_supremum(rec)) {
  2966. if (set_also_gap_locks
  2967. && !(srv_locks_unsafe_for_binlog
  2968. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  2969. && prebuilt->select_lock_type != LOCK_NONE) {
  2970. /* Try to place a lock on the index record */
  2971. /* If innodb_locks_unsafe_for_binlog option is used
  2972. or this session is using a READ COMMITTED isolation
  2973. level we do not lock gaps. Supremum record is really
  2974. a gap and therefore we do not set locks there. */
  2975. offsets = rec_get_offsets(rec, index, offsets,
  2976. ULINT_UNDEFINED, &heap);
  2977. err = sel_set_rec_lock(btr_pcur_get_block(pcur),
  2978. rec, index, offsets,
  2979. prebuilt->select_lock_type,
  2980. LOCK_ORDINARY, thr);
  2981. if (err != DB_SUCCESS) {
  2982. goto lock_wait_or_error;
  2983. }
  2984. }
  2985. /* A page supremum record cannot be in the result set: skip
  2986. it now that we have placed a possible lock on it */
  2987. goto next_rec;
  2988. }
  2989. /*-------------------------------------------------------------*/
  2990. /* Do sanity checks in case our cursor has bumped into page
  2991. corruption */
  2992. if (comp) {
  2993. next_offs = rec_get_next_offs(rec, TRUE);
  2994. if (UNIV_UNLIKELY(next_offs < PAGE_NEW_SUPREMUM)) {
  2995. goto wrong_offs;
  2996. }
  2997. } else {
  2998. next_offs = rec_get_next_offs(rec, FALSE);
  2999. if (UNIV_UNLIKELY(next_offs < PAGE_OLD_SUPREMUM)) {
  3000. goto wrong_offs;
  3001. }
  3002. }
  3003. if (UNIV_UNLIKELY(next_offs >= UNIV_PAGE_SIZE - PAGE_DIR)) {
  3004. wrong_offs:
  3005. if (srv_force_recovery == 0 || moves_up == FALSE) {
  3006. ut_print_timestamp(stderr);
  3007. buf_page_print(page_align(rec), 0);
  3008. fprintf(stderr,
  3009. "\nInnoDB: rec address %p,"
  3010. " buf block fix count %lu\n",
  3011. (void*) rec, (ulong)
  3012. btr_cur_get_block(btr_pcur_get_btr_cur(pcur))
  3013. ->page.buf_fix_count);
  3014. fprintf(stderr,
  3015. "InnoDB: Index corruption: rec offs %lu"
  3016. " next offs %lu, page no %lu,\n"
  3017. "InnoDB: ",
  3018. (ulong) page_offset(rec),
  3019. (ulong) next_offs,
  3020. (ulong) page_get_page_no(page_align(rec)));
  3021. dict_index_name_print(stderr, trx, index);
  3022. fputs(". Run CHECK TABLE. You may need to\n"
  3023. "InnoDB: restore from a backup, or"
  3024. " dump + drop + reimport the table.\n",
  3025. stderr);
  3026. err = DB_CORRUPTION;
  3027. goto lock_wait_or_error;
  3028. } else {
  3029. /* The user may be dumping a corrupt table. Jump
  3030. over the corruption to recover as much as possible. */
  3031. fprintf(stderr,
  3032. "InnoDB: Index corruption: rec offs %lu"
  3033. " next offs %lu, page no %lu,\n"
  3034. "InnoDB: ",
  3035. (ulong) page_offset(rec),
  3036. (ulong) next_offs,
  3037. (ulong) page_get_page_no(page_align(rec)));
  3038. dict_index_name_print(stderr, trx, index);
  3039. fputs(". We try to skip the rest of the page.\n",
  3040. stderr);
  3041. btr_pcur_move_to_last_on_page(pcur, &mtr);
  3042. goto next_rec;
  3043. }
  3044. }
  3045. /*-------------------------------------------------------------*/
  3046. /* Calculate the 'offsets' associated with 'rec' */
  3047. offsets = rec_get_offsets(rec, index, offsets, ULINT_UNDEFINED, &heap);
  3048. if (UNIV_UNLIKELY(srv_force_recovery > 0)) {
  3049. if (!rec_validate(rec, offsets)
  3050. || !btr_index_rec_validate(rec, index, FALSE)) {
  3051. fprintf(stderr,
  3052. "InnoDB: Index corruption: rec offs %lu"
  3053. " next offs %lu, page no %lu,\n"
  3054. "InnoDB: ",
  3055. (ulong) page_offset(rec),
  3056. (ulong) next_offs,
  3057. (ulong) page_get_page_no(page_align(rec)));
  3058. dict_index_name_print(stderr, trx, index);
  3059. fputs(". We try to skip the record.\n",
  3060. stderr);
  3061. goto next_rec;
  3062. }
  3063. }
  3064. /* Note that we cannot trust the up_match value in the cursor at this
  3065. place because we can arrive here after moving the cursor! Thus
  3066. we have to recompare rec and search_tuple to determine if they
  3067. match enough. */
  3068. if (match_mode == ROW_SEL_EXACT) {
  3069. /* Test if the index record matches completely to search_tuple
  3070. in prebuilt: if not, then we return with DB_RECORD_NOT_FOUND */
  3071. /* fputs("Comparing rec and search tuple\n", stderr); */
  3072. if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
  3073. if (set_also_gap_locks
  3074. && !(srv_locks_unsafe_for_binlog
  3075. || trx->isolation_level
  3076. == TRX_ISO_READ_COMMITTED)
  3077. && prebuilt->select_lock_type != LOCK_NONE) {
  3078. /* Try to place a gap lock on the index
  3079. record only if innodb_locks_unsafe_for_binlog
  3080. option is not set or this session is not
  3081. using a READ COMMITTED isolation level. */
  3082. err = sel_set_rec_lock(
  3083. btr_pcur_get_block(pcur),
  3084. rec, index, offsets,
  3085. prebuilt->select_lock_type, LOCK_GAP,
  3086. thr);
  3087. if (err != DB_SUCCESS) {
  3088. goto lock_wait_or_error;
  3089. }
  3090. }
  3091. btr_pcur_store_position(pcur, &mtr);
  3092. err = DB_RECORD_NOT_FOUND;
  3093. /* ut_print_name(stderr, index->name);
  3094. fputs(" record not found 3\n", stderr); */
  3095. goto normal_return;
  3096. }
  3097. } else if (match_mode == ROW_SEL_EXACT_PREFIX) {
  3098. if (!cmp_dtuple_is_prefix_of_rec(search_tuple, rec, offsets)) {
  3099. if (set_also_gap_locks
  3100. && !(srv_locks_unsafe_for_binlog
  3101. || trx->isolation_level
  3102. == TRX_ISO_READ_COMMITTED)
  3103. && prebuilt->select_lock_type != LOCK_NONE) {
  3104. /* Try to place a gap lock on the index
  3105. record only if innodb_locks_unsafe_for_binlog
  3106. option is not set or this session is not
  3107. using a READ COMMITTED isolation level. */
  3108. err = sel_set_rec_lock(
  3109. btr_pcur_get_block(pcur),
  3110. rec, index, offsets,
  3111. prebuilt->select_lock_type, LOCK_GAP,
  3112. thr);
  3113. if (err != DB_SUCCESS) {
  3114. goto lock_wait_or_error;
  3115. }
  3116. }
  3117. btr_pcur_store_position(pcur, &mtr);
  3118. err = DB_RECORD_NOT_FOUND;
  3119. /* ut_print_name(stderr, index->name);
  3120. fputs(" record not found 4\n", stderr); */
  3121. goto normal_return;
  3122. }
  3123. }
  3124. /* We are ready to look at a possible new index entry in the result
  3125. set: the cursor is now placed on a user record */
  3126. if (prebuilt->select_lock_type != LOCK_NONE) {
  3127. /* Try to place a lock on the index record; note that delete
  3128. marked records are a special case in a unique search. If there
  3129. is a non-delete marked record, then it is enough to lock its
  3130. existence with LOCK_REC_NOT_GAP. */
  3131. /* If innodb_locks_unsafe_for_binlog option is used
3132. or this session is using a READ COMMITTED isolation
  3133. level we lock only the record, i.e., next-key locking is
  3134. not used. */
  3135. ulint lock_type;
  3136. if (!set_also_gap_locks
  3137. || srv_locks_unsafe_for_binlog
  3138. || trx->isolation_level == TRX_ISO_READ_COMMITTED
  3139. || (unique_search
  3140. && !UNIV_UNLIKELY(rec_get_deleted_flag(rec, comp)))) {
  3141. goto no_gap_lock;
  3142. } else {
  3143. lock_type = LOCK_ORDINARY;
  3144. }
  3145. /* If we are doing a 'greater or equal than a primary key
  3146. value' search from a clustered index, and we find a record
  3147. that has that exact primary key value, then there is no need
  3148. to lock the gap before the record, because no insert in the
  3149. gap can be in our search range. That is, no phantom row can
  3150. appear that way.
  3151. An example: if col1 is the primary key, the search is WHERE
  3152. col1 >= 100, and we find a record where col1 = 100, then no
  3153. need to lock the gap before that record. */
  3154. if (index == clust_index
  3155. && mode == PAGE_CUR_GE
  3156. && direction == 0
  3157. && dtuple_get_n_fields_cmp(search_tuple)
  3158. == dict_index_get_n_unique(index)
  3159. && 0 == cmp_dtuple_rec(search_tuple, rec, offsets)) {
  3160. no_gap_lock:
  3161. lock_type = LOCK_REC_NOT_GAP;
  3162. }
  3163. err = sel_set_rec_lock(btr_pcur_get_block(pcur),
  3164. rec, index, offsets,
  3165. prebuilt->select_lock_type,
  3166. lock_type, thr);
  3167. switch (err) {
  3168. const rec_t* old_vers;
  3169. case DB_SUCCESS:
  3170. if (srv_locks_unsafe_for_binlog
  3171. || trx->isolation_level == TRX_ISO_READ_COMMITTED) {
  3172. /* Note that a record of
  3173. prebuilt->index was locked. */
  3174. prebuilt->new_rec_locks = 1;
  3175. }
  3176. break;
  3177. case DB_LOCK_WAIT:
  3178. if (UNIV_LIKELY(prebuilt->row_read_type
  3179. != ROW_READ_TRY_SEMI_CONSISTENT)
  3180. || index != clust_index) {
  3181. goto lock_wait_or_error;
  3182. }
  3183. /* The following call returns 'offsets'
  3184. associated with 'old_vers' */
  3185. err = row_sel_build_committed_vers_for_mysql(
  3186. clust_index, prebuilt, rec,
  3187. &offsets, &heap, &old_vers, &mtr);
  3188. if (err != DB_SUCCESS) {
  3189. goto lock_wait_or_error;
  3190. }
  3191. mutex_enter(&kernel_mutex);
  3192. if (trx->was_chosen_as_deadlock_victim) {
  3193. mutex_exit(&kernel_mutex);
  3194. err = DB_DEADLOCK;
  3195. goto lock_wait_or_error;
  3196. }
  3197. if (UNIV_LIKELY(trx->wait_lock != NULL)) {
  3198. lock_cancel_waiting_and_release(
  3199. trx->wait_lock);
  3200. prebuilt->new_rec_locks = 0;
  3201. } else {
  3202. mutex_exit(&kernel_mutex);
  3203. /* The lock was granted while we were
  3204. searching for the last committed version.
  3205. Do a normal locking read. */
  3206. offsets = rec_get_offsets(rec, index, offsets,
  3207. ULINT_UNDEFINED,
  3208. &heap);
  3209. err = DB_SUCCESS;
  3210. /* Note that a record of
  3211. prebuilt->index was locked. */
  3212. prebuilt->new_rec_locks = 1;
  3213. break;
  3214. }
  3215. mutex_exit(&kernel_mutex);
  3216. if (old_vers == NULL) {
  3217. /* The row was not yet committed */
  3218. goto next_rec;
  3219. }
  3220. did_semi_consistent_read = TRUE;
  3221. rec = old_vers;
  3222. break;
  3223. default:
  3224. goto lock_wait_or_error;
  3225. }
  3226. } else {
  3227. /* This is a non-locking consistent read: if necessary, fetch
  3228. a previous version of the record */
  3229. if (trx->isolation_level == TRX_ISO_READ_UNCOMMITTED) {
  3230. /* Do nothing: we let a non-locking SELECT read the
  3231. latest version of the record */
  3232. } else if (index == clust_index) {
  3233. /* Fetch a previous version of the row if the current
  3234. one is not visible in the snapshot; if we have a very
  3235. high force recovery level set, we try to avoid crashes
  3236. by skipping this lookup */
  3237. if (UNIV_LIKELY(srv_force_recovery < 5)
  3238. && !lock_clust_rec_cons_read_sees(
  3239. rec, index, offsets, trx->read_view)) {
  3240. rec_t* old_vers;
  3241. /* The following call returns 'offsets'
  3242. associated with 'old_vers' */
  3243. err = row_sel_build_prev_vers_for_mysql(
  3244. trx->read_view, clust_index,
  3245. prebuilt, rec, &offsets, &heap,
  3246. &old_vers, &mtr);
  3247. if (err != DB_SUCCESS) {
  3248. goto lock_wait_or_error;
  3249. }
  3250. if (old_vers == NULL) {
  3251. /* The row did not exist yet in
  3252. the read view */
  3253. goto next_rec;
  3254. }
  3255. rec = old_vers;
  3256. }
  3257. } else if (!lock_sec_rec_cons_read_sees(rec, trx->read_view)) {
  3258. /* We are looking into a non-clustered index,
  3259. and to get the right version of the record we
  3260. have to look also into the clustered index: this
  3261. is necessary, because we can only get the undo
  3262. information via the clustered index record. */
  3263. ut_ad(index != clust_index);
  3264. goto requires_clust_rec;
  3265. }
  3266. }
  3267. /* NOTE that at this point rec can be an old version of a clustered
  3268. index record built for a consistent read. We cannot assume after this
  3269. point that rec is on a buffer pool page. Functions like
  3270. page_rec_is_comp() cannot be used! */
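/* The reason is that old_vers, when it was used, was reconstructed
from undo log records into heap memory rather than into a buffer pool
page frame, so functions that inspect the surrounding page, such as
page_rec_is_comp(), would read unrelated memory. */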
  3271. if (UNIV_UNLIKELY(rec_get_deleted_flag(rec, comp))) {
  3272. /* The record is delete-marked: we can skip it */
  3273. if ((srv_locks_unsafe_for_binlog
  3274. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  3275. && prebuilt->select_lock_type != LOCK_NONE
  3276. && !did_semi_consistent_read) {
  3277. /* No need to keep a lock on a delete-marked record
  3278. if we do not want to use next-key locking. */
  3279. row_unlock_for_mysql(prebuilt, TRUE);
  3280. }
  3281. /* This is an optimization to skip setting the next key lock
  3282. on the record that follows this delete-marked record. This
  3283. optimization works because of the unique search criteria
  3284. which precludes the presence of a range lock between this
  3285. delete marked record and the record following it.
  3286. For now this is applicable only to clustered indexes while
  3287. doing a unique search. There is scope for further optimization
  3288. applicable to unique secondary indexes. Current behaviour is
  3289. to widen the scope of a lock on an already delete marked record
  3290. if the same record is deleted twice by the same transaction */
  3291. if (index == clust_index && unique_search) {
  3292. err = DB_RECORD_NOT_FOUND;
  3293. goto normal_return;
  3294. }
  3295. goto next_rec;
  3296. }
  3297. /* Get the clustered index record if needed, if we did not do the
  3298. search using the clustered index. */
  3299. if (index != clust_index && prebuilt->need_to_access_clustered) {
  3300. requires_clust_rec:
  3301. /* We use a 'goto' to the preceding label if a consistent
  3302. read of a secondary index record requires us to look up old
  3303. versions of the associated clustered index record. */
  3304. ut_ad(rec_offs_validate(rec, index, offsets));
  3305. /* It was a non-clustered index and we must fetch also the
  3306. clustered index record */
  3307. mtr_has_extra_clust_latch = TRUE;
  3308. /* The following call returns 'offsets' associated with
  3309. 'clust_rec'. Note that 'clust_rec' can be an old version
  3310. built for a consistent read. */
  3311. err = row_sel_get_clust_rec_for_mysql(prebuilt, index, rec,
  3312. thr, &clust_rec,
  3313. &offsets, &heap, &mtr);
  3314. if (err != DB_SUCCESS) {
  3315. goto lock_wait_or_error;
  3316. }
  3317. if (clust_rec == NULL) {
  3318. /* The record did not exist in the read view */
  3319. ut_ad(prebuilt->select_lock_type == LOCK_NONE);
  3320. goto next_rec;
  3321. }
  3322. if ((srv_locks_unsafe_for_binlog
  3323. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  3324. && prebuilt->select_lock_type != LOCK_NONE) {
  3325. /* Note that both the secondary index record
  3326. and the clustered index record were locked. */
  3327. ut_ad(prebuilt->new_rec_locks == 1);
  3328. prebuilt->new_rec_locks = 2;
  3329. }
  3330. if (UNIV_UNLIKELY(rec_get_deleted_flag(clust_rec, comp))) {
  3331. /* The record is delete marked: we can skip it */
  3332. if ((srv_locks_unsafe_for_binlog
  3333. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  3334. && prebuilt->select_lock_type != LOCK_NONE) {
  3335. /* No need to keep a lock on a delete-marked
  3336. record if we do not want to use next-key
  3337. locking. */
  3338. row_unlock_for_mysql(prebuilt, TRUE);
  3339. }
  3340. goto next_rec;
  3341. }
  3342. if (prebuilt->need_to_access_clustered) {
  3343. result_rec = clust_rec;
  3344. ut_ad(rec_offs_validate(result_rec, clust_index,
  3345. offsets));
  3346. } else {
  3347. /* We used 'offsets' for the clust rec, recalculate
  3348. them for 'rec' */
  3349. offsets = rec_get_offsets(rec, index, offsets,
  3350. ULINT_UNDEFINED, &heap);
  3351. result_rec = rec;
  3352. }
  3353. } else {
  3354. result_rec = rec;
  3355. }
  3356. /* We found a qualifying record 'result_rec'. At this point,
  3357. 'offsets' are associated with 'result_rec'. */
  3358. ut_ad(rec_offs_validate(result_rec,
  3359. result_rec != rec ? clust_index : index,
  3360. offsets));
  3361. /* At this point, the clustered index record is protected
  3362. by a page latch that was acquired when pcur was positioned.
  3363. The latch will not be released until mtr_commit(&mtr). */
  3364. if ((match_mode == ROW_SEL_EXACT
  3365. || prebuilt->n_rows_fetched >= MYSQL_FETCH_CACHE_THRESHOLD)
  3366. && prebuilt->select_lock_type == LOCK_NONE
  3367. && !prebuilt->templ_contains_blob
  3368. && !prebuilt->clust_index_was_generated
  3369. && !prebuilt->used_in_HANDLER
  3370. && prebuilt->template_type
  3371. != ROW_MYSQL_DUMMY_TEMPLATE) {
  3372. /* Inside an update, for example, we do not cache rows,
  3373. since we may use the cursor position to do the actual
  3374. update, that is why we require ...lock_type == LOCK_NONE.
  3375. Since we keep space in prebuilt only for the BLOBs of
  3376. a single row, we cannot cache rows in the case there
  3377. are BLOBs in the fields to be fetched. In HANDLER we do
  3378. not cache rows because there the cursor is a scrollable
  3379. cursor. */
  3380. row_sel_push_cache_row_for_mysql(prebuilt, result_rec,
  3381. offsets);
  3382. if (prebuilt->n_fetch_cached == MYSQL_FETCH_CACHE_SIZE) {
  3383. goto got_row;
  3384. }
  3385. goto next_rec;
  3386. } else {
  3387. if (prebuilt->template_type == ROW_MYSQL_DUMMY_TEMPLATE) {
  3388. memcpy(buf + 4, result_rec
  3389. - rec_offs_extra_size(offsets),
  3390. rec_offs_size(offsets));
  3391. mach_write_to_4(buf,
  3392. rec_offs_extra_size(offsets) + 4);
  3393. } else {
  3394. if (!row_sel_store_mysql_rec(buf, prebuilt,
  3395. result_rec, offsets)) {
  3396. err = DB_TOO_BIG_RECORD;
  3397. goto lock_wait_or_error;
  3398. }
  3399. }
  3400. if (prebuilt->clust_index_was_generated) {
  3401. if (result_rec != rec) {
  3402. offsets = rec_get_offsets(
  3403. rec, index, offsets, ULINT_UNDEFINED,
  3404. &heap);
  3405. }
  3406. row_sel_store_row_id_to_prebuilt(prebuilt, rec,
  3407. index, offsets);
  3408. }
  3409. }
  3410. /* From this point on, 'offsets' are invalid. */
  3411. got_row:
  3412. /* We have an optimization to save CPU time: if this is a consistent
  3413. read on a unique condition on the clustered index, then we do not
  3414. store the pcur position, because any fetch next or prev will anyway
  3415. return 'end of file'. Exceptions are locking reads and the MySQL
  3416. HANDLER command where the user can move the cursor with PREV or NEXT
  3417. even after a unique search. */
  3418. if (!unique_search_from_clust_index
  3419. || prebuilt->select_lock_type != LOCK_NONE
  3420. || prebuilt->used_in_HANDLER) {
  3421. /* Inside an update always store the cursor position */
  3422. btr_pcur_store_position(pcur, &mtr);
  3423. }
  3424. err = DB_SUCCESS;
  3425. goto normal_return;
  3426. next_rec:
  3427. /* Reset the old and new "did semi-consistent read" flags. */
  3428. if (UNIV_UNLIKELY(prebuilt->row_read_type
  3429. == ROW_READ_DID_SEMI_CONSISTENT)) {
  3430. prebuilt->row_read_type = ROW_READ_TRY_SEMI_CONSISTENT;
  3431. }
  3432. did_semi_consistent_read = FALSE;
  3433. prebuilt->new_rec_locks = 0;
  3434. /*-------------------------------------------------------------*/
  3435. /* PHASE 5: Move the cursor to the next index record */
  3436. if (UNIV_UNLIKELY(mtr_has_extra_clust_latch)) {
  3437. /* We must commit mtr if we are moving to the next
  3438. non-clustered index record, because we could break the
  3439. latching order if we would access a different clustered
  3440. index page right away without releasing the previous. */
  3441. btr_pcur_store_position(pcur, &mtr);
  3442. mtr_commit(&mtr);
  3443. mtr_has_extra_clust_latch = FALSE;
  3444. mtr_start(&mtr);
  3445. if (sel_restore_position_for_mysql(&same_user_rec,
  3446. BTR_SEARCH_LEAF,
  3447. pcur, moves_up, &mtr)) {
  3448. #ifdef UNIV_SEARCH_DEBUG
  3449. cnt++;
  3450. #endif /* UNIV_SEARCH_DEBUG */
  3451. goto rec_loop;
  3452. }
  3453. }
  3454. if (moves_up) {
  3455. if (UNIV_UNLIKELY(!btr_pcur_move_to_next(pcur, &mtr))) {
  3456. not_moved:
  3457. btr_pcur_store_position(pcur, &mtr);
  3458. if (match_mode != 0) {
  3459. err = DB_RECORD_NOT_FOUND;
  3460. } else {
  3461. err = DB_END_OF_INDEX;
  3462. }
  3463. goto normal_return;
  3464. }
  3465. } else {
  3466. if (UNIV_UNLIKELY(!btr_pcur_move_to_prev(pcur, &mtr))) {
  3467. goto not_moved;
  3468. }
  3469. }
  3470. #ifdef UNIV_SEARCH_DEBUG
  3471. cnt++;
  3472. #endif /* UNIV_SEARCH_DEBUG */
  3473. goto rec_loop;
  3474. lock_wait_or_error:
  3475. /* Reset the old and new "did semi-consistent read" flags. */
  3476. if (UNIV_UNLIKELY(prebuilt->row_read_type
  3477. == ROW_READ_DID_SEMI_CONSISTENT)) {
  3478. prebuilt->row_read_type = ROW_READ_TRY_SEMI_CONSISTENT;
  3479. }
  3480. did_semi_consistent_read = FALSE;
  3481. /*-------------------------------------------------------------*/
  3482. btr_pcur_store_position(pcur, &mtr);
  3483. mtr_commit(&mtr);
  3484. mtr_has_extra_clust_latch = FALSE;
  3485. trx->error_state = err;
  3486. /* The following is a patch for MySQL */
  3487. que_thr_stop_for_mysql(thr);
  3488. thr->lock_state = QUE_THR_LOCK_ROW;
  3489. if (row_mysql_handle_errors(&err, trx, thr, NULL)) {
  3490. /* It was a lock wait, and it ended */
  3491. thr->lock_state = QUE_THR_LOCK_NOLOCK;
  3492. mtr_start(&mtr);
  3493. sel_restore_position_for_mysql(&same_user_rec,
  3494. BTR_SEARCH_LEAF, pcur,
  3495. moves_up, &mtr);
  3496. if ((srv_locks_unsafe_for_binlog
  3497. || trx->isolation_level == TRX_ISO_READ_COMMITTED)
  3498. && !same_user_rec) {
  3499. /* Since we were not able to restore the cursor
  3500. on the same user record, we cannot use
  3501. row_unlock_for_mysql() to unlock any records, and
  3502. we must thus reset the new rec lock info. Since
  3503. in lock0lock.c we have blocked the inheriting of gap
  3504. X-locks, we actually do not have any new record locks
  3505. set in this case.
  3506. Note that if we were able to restore on the 'same'
  3507. user record, it is still possible that we were actually
  3508. waiting on a delete-marked record, and meanwhile
  3509. it was removed by purge and inserted again by some
  3510. other user. But that is no problem, because in
  3511. rec_loop we will again try to set a lock, and
  3512. new_rec_lock_info in trx will be right at the end. */
  3513. prebuilt->new_rec_locks = 0;
  3514. }
  3515. mode = pcur->search_mode;
  3516. goto rec_loop;
  3517. }
  3518. thr->lock_state = QUE_THR_LOCK_NOLOCK;
  3519. #ifdef UNIV_SEARCH_DEBUG
  3520. /* fputs("Using ", stderr);
  3521. dict_index_name_print(stderr, index);
  3522. fprintf(stderr, " cnt %lu ret value %lu err\n", cnt, err); */
  3523. #endif /* UNIV_SEARCH_DEBUG */
  3524. goto func_exit;
  3525. normal_return:
  3526. /*-------------------------------------------------------------*/
  3527. que_thr_stop_for_mysql_no_error(thr, trx);
  3528. mtr_commit(&mtr);
  3529. if (prebuilt->n_fetch_cached > 0) {
  3530. row_sel_pop_cached_row_for_mysql(buf, prebuilt);
  3531. err = DB_SUCCESS;
  3532. }
  3533. #ifdef UNIV_SEARCH_DEBUG
  3534. /* fputs("Using ", stderr);
  3535. dict_index_name_print(stderr, index);
  3536. fprintf(stderr, " cnt %lu ret value %lu err\n", cnt, err); */
  3537. #endif /* UNIV_SEARCH_DEBUG */
  3538. if (err == DB_SUCCESS) {
  3539. srv_n_rows_read++;
  3540. }
  3541. func_exit:
  3542. trx->op_info = "";
  3543. if (UNIV_LIKELY_NULL(heap)) {
  3544. mem_heap_free(heap);
  3545. }
  3546. /* Set or reset the "did semi-consistent read" flag on return.
  3547. The flag did_semi_consistent_read is set if and only if
  3548. the record being returned was fetched with a semi-consistent read. */
  3549. ut_ad(prebuilt->row_read_type != ROW_READ_WITH_LOCKS
  3550. || !did_semi_consistent_read);
  3551. if (UNIV_UNLIKELY(prebuilt->row_read_type != ROW_READ_WITH_LOCKS)) {
  3552. if (UNIV_UNLIKELY(did_semi_consistent_read)) {
  3553. prebuilt->row_read_type = ROW_READ_DID_SEMI_CONSISTENT;
  3554. } else {
  3555. prebuilt->row_read_type = ROW_READ_TRY_SEMI_CONSISTENT;
  3556. }
  3557. }
  3558. return(err);
  3559. }
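/* A minimal sketch of how a caller typically drives this function
(illustrative only; the variable names and the surrounding setup are
hypothetical, not taken from ha_innodb.cc):

	ulint	ret;

	ret = row_search_for_mysql(row_buf, PAGE_CUR_GE, prebuilt, 0, 0);

	while (ret == DB_SUCCESS) {
		... use the MySQL-format row now in row_buf ...

		ret = row_search_for_mysql(row_buf, PAGE_CUR_GE, prebuilt,
					   0, ROW_SEL_NEXT);
	}

here row_buf is a buffer of prebuilt->mysql_row_len bytes;
DB_RECORD_NOT_FOUND or DB_END_OF_INDEX ends the scan normally and any
other code is an error. */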
  3560. /*******************************************************************//**
3561. Checks if MySQL is at the moment allowed to retrieve a consistent read
3562. result for this table, or to store such a result in the query cache.
  3563. @return TRUE if storing or retrieving from the query cache is permitted */
  3564. UNIV_INTERN
  3565. ibool
  3566. row_search_check_if_query_cache_permitted(
  3567. /*======================================*/
  3568. trx_t* trx, /*!< in: transaction object */
  3569. const char* norm_name) /*!< in: concatenation of database name,
  3570. '/' char, table name */
  3571. {
  3572. dict_table_t* table;
  3573. ibool ret = FALSE;
  3574. table = dict_table_get(norm_name, FALSE);
  3575. if (table == NULL) {
  3576. return(FALSE);
  3577. }
  3578. mutex_enter(&kernel_mutex);
  3579. /* Start the transaction if it is not started yet */
  3580. trx_start_if_not_started_low(trx);
  3581. /* If there are locks on the table or some trx has invalidated the
  3582. cache up to our trx id, then ret = FALSE.
3583. We do not check what type of locks there are on the table, though only
  3584. IX type locks actually would require ret = FALSE. */
  3585. if (UT_LIST_GET_LEN(table->locks) == 0
  3586. && ut_dulint_cmp(trx->id,
  3587. table->query_cache_inv_trx_id) >= 0) {
  3588. ret = TRUE;
  3589. /* If the isolation level is high, assign a read view for the
  3590. transaction if it does not yet have one */
  3591. if (trx->isolation_level >= TRX_ISO_REPEATABLE_READ
  3592. && !trx->read_view) {
  3593. trx->read_view = read_view_open_now(
  3594. trx->id, trx->global_read_view_heap);
  3595. trx->global_read_view = trx->read_view;
  3596. }
  3597. }
  3598. mutex_exit(&kernel_mutex);
  3599. return(ret);
  3600. }
  3601. /*******************************************************************//**
  3602. Read the AUTOINC column from the current row. If the value is less than
  3603. 0 and the type is not unsigned then we reset the value to 0.
  3604. @return value read from the column */
  3605. static
  3606. ib_uint64_t
  3607. row_search_autoinc_read_column(
  3608. /*===========================*/
  3609. dict_index_t* index, /*!< in: index to read from */
  3610. const rec_t* rec, /*!< in: current rec */
  3611. ulint col_no, /*!< in: column number */
  3612. ibool unsigned_type) /*!< in: signed or unsigned flag */
  3613. {
  3614. ulint len;
  3615. const byte* data;
  3616. ib_uint64_t value;
  3617. mem_heap_t* heap = NULL;
  3618. ulint offsets_[REC_OFFS_NORMAL_SIZE];
  3619. ulint* offsets = offsets_;
  3620. rec_offs_init(offsets_);
  3621. offsets = rec_get_offsets(rec, index, offsets, ULINT_UNDEFINED, &heap);
  3622. data = rec_get_nth_field(rec, offsets, col_no, &len);
  3623. ut_a(len != UNIV_SQL_NULL);
  3624. ut_a(len <= sizeof value);
  3625. /* we assume AUTOINC value cannot be negative */
  3626. value = mach_read_int_type(data, len, unsigned_type);
  3627. if (UNIV_LIKELY_NULL(heap)) {
  3628. mem_heap_free(heap);
  3629. }
  3630. if (!unsigned_type && (ib_int64_t) value < 0) {
  3631. value = 0;
  3632. }
  3633. return(value);
  3634. }
  3635. /*******************************************************************//**
  3636. Get the last row.
  3637. @return current rec or NULL */
  3638. static
  3639. const rec_t*
  3640. row_search_autoinc_get_rec(
  3641. /*=======================*/
  3642. btr_pcur_t* pcur, /*!< in: the current cursor */
  3643. mtr_t* mtr) /*!< in: mini transaction */
  3644. {
  3645. do {
  3646. const rec_t* rec = btr_pcur_get_rec(pcur);
  3647. if (page_rec_is_user_rec(rec)) {
  3648. return(rec);
  3649. }
  3650. } while (btr_pcur_move_to_prev(pcur, mtr));
  3651. return(NULL);
  3652. }
  3653. /*******************************************************************//**
  3654. Read the max AUTOINC value from an index.
  3655. @return DB_SUCCESS if all OK else error code, DB_RECORD_NOT_FOUND if
  3656. column name can't be found in index */
  3657. UNIV_INTERN
  3658. ulint
  3659. row_search_max_autoinc(
  3660. /*===================*/
  3661. dict_index_t* index, /*!< in: index to search */
  3662. const char* col_name, /*!< in: name of autoinc column */
  3663. ib_uint64_t* value) /*!< out: AUTOINC value read */
  3664. {
  3665. ulint i;
  3666. ulint n_cols;
  3667. dict_field_t* dfield = NULL;
  3668. ulint error = DB_SUCCESS;
  3669. n_cols = dict_index_get_n_ordering_defined_by_user(index);
  3670. /* Search the index for the AUTOINC column name */
  3671. for (i = 0; i < n_cols; ++i) {
  3672. dfield = dict_index_get_nth_field(index, i);
  3673. if (strcmp(col_name, dfield->name) == 0) {
  3674. break;
  3675. }
  3676. }
  3677. *value = 0;
  3678. /* Must find the AUTOINC column name */
  3679. if (i < n_cols && dfield) {
  3680. mtr_t mtr;
  3681. btr_pcur_t pcur;
  3682. mtr_start(&mtr);
  3683. /* Open at the high/right end (FALSE), and INIT
  3684. cursor (TRUE) */
  3685. btr_pcur_open_at_index_side(
  3686. FALSE, index, BTR_SEARCH_LEAF, &pcur, TRUE, &mtr);
  3687. if (page_get_n_recs(btr_pcur_get_page(&pcur)) > 0) {
  3688. const rec_t* rec;
  3689. rec = row_search_autoinc_get_rec(&pcur, &mtr);
  3690. if (rec != NULL) {
  3691. ibool unsigned_type = (
  3692. dfield->col->prtype & DATA_UNSIGNED);
  3693. *value = row_search_autoinc_read_column(
  3694. index, rec, i, unsigned_type);
  3695. }
  3696. }
  3697. btr_pcur_close(&pcur);
  3698. mtr_commit(&mtr);
  3699. } else {
  3700. error = DB_RECORD_NOT_FOUND;
  3701. }
  3702. return(error);
  3703. }
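/* Illustrative use (hypothetical caller, not part of this file): when a
table is opened, the current maximum of the autoinc column could be read
roughly as

	ib_uint64_t	max_val = 0;

	if (row_search_max_autoinc(index, "id", &max_val) == DB_SUCCESS) {
		... seed the in-memory autoinc counter from max_val + 1 ...
	}

where index is the index containing the AUTO_INCREMENT column and "id"
stands for that column's name. */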