/*****************************************************************************

Copyright (c) 2010, 2016, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2015, 2018, MariaDB Corporation.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Suite 500, Boston, MA 02110-1335 USA

*****************************************************************************/

/**************************************************//**
@file row/row0ftsort.cc
Create Full Text Index with (parallel) merge sort

Created 10/13/2010 Jimmy Yang
*******************************************************/

#include "row0ftsort.h"
#include "dict0dict.h"
#include "row0merge.h"
#include "row0row.h"
#include "btr0cur.h"
#include "fts0plugin.h"
#include "log0crypt.h"

/** Read the next record to buffer N.
@param N index into array of merge info structure */
#define ROW_MERGE_READ_GET_NEXT(N)				\
	do {							\
		b[N] = row_merge_read_rec(			\
			block[N], buf[N], b[N], index,		\
			fd[N], &foffs[N], &mrec[N], offsets[N],	\
			crypt_block[N], space);			\
		if (UNIV_UNLIKELY(!b[N])) {			\
			if (mrec[N]) {				\
				goto exit;			\
			}					\
		}						\
	} while (0)
/** Parallel sort degree */
ulong	fts_sort_pll_degree	= 2;

/*********************************************************************//**
Create a temporary "fts sort index" used to merge sort the
tokenized doc string. The index has three "fields":

1) Tokenized word,
2) Doc ID (depend on number of records to sort, it can be a 4 bytes or 8 bytes
integer value)
3) Word's position in original doc.

@see fts_create_one_index_table()

@return dict_index_t structure for the fts sort index */
dict_index_t*
row_merge_create_fts_sort_index(
/*============================*/
	dict_index_t*		index,	/*!< in: Original FTS index
					based on which this sort index
					is created */
	const dict_table_t*	table,	/*!< in: table that FTS index
					is being created on */
	ibool*			opt_doc_id_size)
					/*!< out: whether to use 4 bytes
					instead of 8 bytes integer to
					store Doc ID during sort */
{
	dict_index_t*	new_index;
	dict_field_t*	field;
	dict_field_t*	idx_field;
	CHARSET_INFO*	charset;

	// FIXME: This name shouldn't be hard coded here.
	new_index = dict_mem_index_create(
		index->table->name.m_name, "tmp_fts_idx", 0, DICT_FTS, 3);

	new_index->id = index->id;
	new_index->table = (dict_table_t*) table;
	new_index->n_uniq = FTS_NUM_FIELDS_SORT;
	new_index->n_def = FTS_NUM_FIELDS_SORT;
	new_index->cached = TRUE;
	new_index->parser = index->parser;

	idx_field = dict_index_get_nth_field(index, 0);
	charset = fts_index_get_charset(index);

	/* The first field is on the Tokenized Word */
	field = dict_index_get_nth_field(new_index, 0);
	field->name = NULL;
	field->prefix_len = 0;
	field->col = static_cast<dict_col_t*>(
		mem_heap_alloc(new_index->heap, sizeof(dict_col_t)));
	field->col->prtype = idx_field->col->prtype | DATA_NOT_NULL;
	field->col->mtype = charset == &my_charset_latin1
		? DATA_VARCHAR : DATA_VARMYSQL;
	field->col->mbminlen = idx_field->col->mbminlen;
	field->col->mbmaxlen = idx_field->col->mbmaxlen;
	field->col->len = HA_FT_MAXCHARLEN * field->col->mbmaxlen;

	field->fixed_len = 0;

	/* Doc ID */
	field = dict_index_get_nth_field(new_index, 1);
	field->name = NULL;
	field->prefix_len = 0;
	field->col = static_cast<dict_col_t*>(
		mem_heap_alloc(new_index->heap, sizeof(dict_col_t)));
	field->col->mtype = DATA_INT;
	*opt_doc_id_size = FALSE;

	/* Check whether we can use 4 bytes instead of 8 bytes integer
	field to hold the Doc ID, thus reduce the overall sort size */
	if (DICT_TF2_FLAG_IS_SET(table, DICT_TF2_FTS_ADD_DOC_ID)) {
		/* If Doc ID column is being added by this create
		index, then just check the number of rows in the table */
		if (dict_table_get_n_rows(table) < MAX_DOC_ID_OPT_VAL) {
			*opt_doc_id_size = TRUE;
		}
	} else {
		doc_id_t	max_doc_id;

		/* If the Doc ID column is supplied by user, then
		check the maximum Doc ID in the table */
		max_doc_id = fts_get_max_doc_id((dict_table_t*) table);

		if (max_doc_id && max_doc_id < MAX_DOC_ID_OPT_VAL) {
			*opt_doc_id_size = TRUE;
		}
	}
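	/* With the 4-byte representation each sort record stores the
	Doc ID in sizeof(ib_uint32_t) = 4 bytes instead of
	FTS_DOC_ID_LEN = 8 bytes, saving 4 bytes per tokenized word. */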
	if (*opt_doc_id_size) {
		field->col->len = sizeof(ib_uint32_t);

		field->fixed_len = sizeof(ib_uint32_t);
	} else {
		field->col->len = FTS_DOC_ID_LEN;

		field->fixed_len = FTS_DOC_ID_LEN;
	}

	field->col->prtype = DATA_NOT_NULL | DATA_BINARY_TYPE;

	field->col->mbminlen = 0;
	field->col->mbmaxlen = 0;

	/* The third field is on the word's position in the original doc */
	field = dict_index_get_nth_field(new_index, 2);
	field->name = NULL;
	field->prefix_len = 0;
	field->col = static_cast<dict_col_t*>(
		mem_heap_alloc(new_index->heap, sizeof(dict_col_t)));
	field->col->mtype = DATA_INT;
	field->col->len = 4;
	field->fixed_len = 4;
	field->col->prtype = DATA_NOT_NULL;
	field->col->mbminlen = 0;
	field->col->mbmaxlen = 0;

	return(new_index);
}
/*********************************************************************//**
Initialize FTS parallel sort structures.
@return TRUE if all successful */
ibool
row_fts_psort_info_init(
/*====================*/
	trx_t*			trx,	/*!< in: transaction */
	row_merge_dup_t*	dup,	/*!< in,own: descriptor of
					FTS index being created */
	const dict_table_t*	new_table,/*!< in: table on which indexes are
					created */
	ibool			opt_doc_id_size,
					/*!< in: whether to use 4 bytes
					instead of 8 bytes integer to
					store Doc ID during sort */
	fts_psort_t**		psort,	/*!< out: parallel sort info to be
					instantiated */
	fts_psort_t**		merge)	/*!< out: parallel merge info
					to be instantiated */
{
	ulint			i;
	ulint			j;
	fts_psort_common_t*	common_info = NULL;
	fts_psort_t*		psort_info = NULL;
	fts_psort_t*		merge_info = NULL;
	ulint			block_size;
	ibool			ret = TRUE;
	bool			encrypted = false;
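	/* Each merge buffer consists of three blocks of srv_sort_buf_size
	bytes each: the merge sort in row0merge.cc works on two input runs
	and one output block at a time. */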
	block_size = 3 * srv_sort_buf_size;

	*psort = psort_info = static_cast<fts_psort_t*>(ut_zalloc_nokey(
		 fts_sort_pll_degree * sizeof *psort_info));

	if (!psort_info) {
		ut_free(dup);
		return(FALSE);
	}

	/* Common Info for all sort threads */
	common_info = static_cast<fts_psort_common_t*>(
		ut_malloc_nokey(sizeof *common_info));

	if (!common_info) {
		ut_free(dup);
		ut_free(psort_info);
		return(FALSE);
	}

	common_info->dup = dup;
	common_info->new_table = (dict_table_t*) new_table;
	common_info->trx = trx;
	common_info->all_info = psort_info;
	common_info->sort_event = os_event_create(0);
	common_info->merge_event = os_event_create(0);
	common_info->opt_doc_id_size = opt_doc_id_size;

	if (log_tmp_is_encrypted()) {
		encrypted = true;
	}
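	/* log_tmp_is_encrypted() reports whether temporary files must be
	encrypted; in that case the sort blocks are encrypted, using the
	key version from the log_sys crypt info, before being written to
	disk, and decrypted again after being read back. */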
	ut_ad(trx->mysql_thd != NULL);
	const char*	path = thd_innodb_tmpdir(trx->mysql_thd);

	/* There will be FTS_NUM_AUX_INDEX number of "sort buckets" for
	each parallel sort thread. Each "sort bucket" holds records for
	a particular "FTS index partition" */
	for (j = 0; j < fts_sort_pll_degree; j++) {

		UT_LIST_INIT(
			psort_info[j].fts_doc_list, &fts_doc_item_t::doc_list);

		for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {

			psort_info[j].merge_file[i] =
				 static_cast<merge_file_t*>(
					ut_zalloc_nokey(sizeof(merge_file_t)));

			if (!psort_info[j].merge_file[i]) {
				ret = FALSE;
				goto func_exit;
			}

			psort_info[j].merge_buf[i] = row_merge_buf_create(
				dup->index);

			if (row_merge_file_create(psort_info[j].merge_file[i],
						  path) < 0) {
				ret = FALSE;
				goto func_exit;
			}

			/* Need to align memory for O_DIRECT write */
			psort_info[j].block_alloc[i] =
				static_cast<row_merge_block_t*>(ut_malloc_nokey(
					block_size + 1024));

			psort_info[j].merge_block[i] =
				static_cast<row_merge_block_t*>(
					ut_align(
					psort_info[j].block_alloc[i], 1024));
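			/* ut_align() rounds the pointer up to the next
			1024-byte boundary; over-allocating by 1024 bytes
			above guarantees that block_size aligned bytes
			remain usable for O_DIRECT I/O. */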
			if (!psort_info[j].merge_block[i]) {
				ret = FALSE;
				goto func_exit;
			}

			/* If tablespace is encrypted, allocate additional
			buffer for encryption/decryption. */
			if (encrypted) {

				/* Need to align memory for O_DIRECT write */
				psort_info[j].crypt_alloc[i] =
					static_cast<row_merge_block_t*>(ut_malloc_nokey(
						block_size + 1024));

				psort_info[j].crypt_block[i] =
					static_cast<row_merge_block_t*>(
						ut_align(
						psort_info[j].crypt_alloc[i], 1024));

				if (!psort_info[j].crypt_block[i]) {
					ret = FALSE;
					goto func_exit;
				}
			} else {
				psort_info[j].crypt_alloc[i] = NULL;
				psort_info[j].crypt_block[i] = NULL;
			}
		}

		psort_info[j].child_status = 0;
		psort_info[j].state = 0;
		psort_info[j].psort_common = common_info;
		psort_info[j].error = DB_SUCCESS;
		psort_info[j].memory_used = 0;
		mutex_create(LATCH_ID_FTS_PLL_TOKENIZE, &psort_info[j].mutex);
	}

	/* Initialize the merge_info structures for the parallel merge and
	insert into the auxiliary FTS tables (FTS_INDEX_TABLE) */
	*merge = merge_info = static_cast<fts_psort_t*>(
		ut_malloc_nokey(FTS_NUM_AUX_INDEX * sizeof *merge_info));

	for (j = 0; j < FTS_NUM_AUX_INDEX; j++) {

		merge_info[j].child_status = 0;
		merge_info[j].state = 0;
		merge_info[j].psort_common = common_info;
	}

func_exit:
	if (!ret) {
		row_fts_psort_info_destroy(psort_info, merge_info);
	}

	return(ret);
}
/*********************************************************************//**
Clean up and deallocate FTS parallel sort structures, and close the
merge sort files */
void
row_fts_psort_info_destroy(
/*=======================*/
	fts_psort_t*	psort_info,	/*!< parallel sort info */
	fts_psort_t*	merge_info)	/*!< parallel merge info */
{
	ulint	i;
	ulint	j;

	if (psort_info) {
		for (j = 0; j < fts_sort_pll_degree; j++) {
			for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
				if (psort_info[j].merge_file[i]) {
					row_merge_file_destroy(
						psort_info[j].merge_file[i]);
				}

				ut_free(psort_info[j].block_alloc[i]);
				ut_free(psort_info[j].merge_file[i]);

				if (psort_info[j].crypt_alloc[i]) {
					ut_free(psort_info[j].crypt_alloc[i]);
				}
			}

			mutex_free(&psort_info[j].mutex);
		}

		os_event_destroy(merge_info[0].psort_common->sort_event);
		os_event_destroy(merge_info[0].psort_common->merge_event);
		ut_free(merge_info[0].psort_common->dup);
		ut_free(merge_info[0].psort_common);
		ut_free(psort_info);
	}

	ut_free(merge_info);
}

/*********************************************************************//**
Free up merge buffers when merge sort is done */
void
row_fts_free_pll_merge_buf(
/*=======================*/
	fts_psort_t*	psort_info)	/*!< in: parallel sort info */
{
	ulint	j;
	ulint	i;

	if (!psort_info) {
		return;
	}

	for (j = 0; j < fts_sort_pll_degree; j++) {
		for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
			row_merge_buf_free(psort_info[j].merge_buf[i]);
		}
	}

	return;
}
/*********************************************************************//**
FTS plugin parser 'mysql_add_word' callback function for row merge.
Refer to 'st_mysql_ftparser_param' for more detail.
@return always returns 0 */
static
int
row_merge_fts_doc_add_word_for_parser(
/*==================================*/
	MYSQL_FTPARSER_PARAM	*param,		/* in: parser parameter */
	const char		*word,		/* in: token word */
	int			word_len,	/* in: word len */
	MYSQL_FTPARSER_BOOLEAN_INFO*	boolean_info)	/* in: boolean info */
{
	fts_string_t		str;
	fts_tokenize_ctx_t*	t_ctx;
	row_fts_token_t*	fts_token;
	byte*			ptr;

	ut_ad(param);
	ut_ad(param->mysql_ftparam);
	ut_ad(word);
	ut_ad(boolean_info);

	t_ctx = static_cast<fts_tokenize_ctx_t*>(param->mysql_ftparam);
	ut_ad(t_ctx);

	str.f_str = (byte*)(word);
	str.f_len = word_len;
	str.f_n_char = fts_get_token_size(
		(CHARSET_INFO*)param->cs, word, word_len);

	/* JAN: TODO: MySQL 5.7 FTS
	ut_ad(boolean_info->position >= 0);
	*/

	ptr = static_cast<byte*>(ut_malloc_nokey(sizeof(row_fts_token_t)
			+ sizeof(fts_string_t) + str.f_len));
	fts_token = reinterpret_cast<row_fts_token_t*>(ptr);
	fts_token->text = reinterpret_cast<fts_string_t*>(
		ptr + sizeof(row_fts_token_t));
	fts_token->text->f_str = static_cast<byte*>(
		ptr + sizeof(row_fts_token_t) + sizeof(fts_string_t));
	fts_token->text->f_len = str.f_len;
	fts_token->text->f_n_char = str.f_n_char;
	memcpy(fts_token->text->f_str, str.f_str, str.f_len);

	/* JAN: TODO: MySQL 5.7 FTS
	fts_token->position = boolean_info->position;
	*/

	/* Add token to list */
	UT_LIST_ADD_LAST(t_ctx->fts_token_list, fts_token);

	return(0);
}
/*********************************************************************//**
Tokenize by fts plugin parser */
static
void
row_merge_fts_doc_tokenize_by_parser(
/*=================================*/
	fts_doc_t*		doc,	/* in: doc to tokenize */
	st_mysql_ftparser*	parser,	/* in: plugin parser instance */
	fts_tokenize_ctx_t*	t_ctx)	/* in/out: tokenize ctx instance */
{
	MYSQL_FTPARSER_PARAM	param;

	ut_a(parser);

	/* Set the parameters for param */
	param.mysql_parse = fts_tokenize_document_internal;
	param.mysql_add_word = row_merge_fts_doc_add_word_for_parser;
	param.mysql_ftparam = t_ctx;
	param.cs = doc->charset;
	param.doc = reinterpret_cast<char*>(doc->text.f_str);
	param.length = static_cast<int>(doc->text.f_len);
	param.mode= MYSQL_FTPARSER_SIMPLE_MODE;

	PARSER_INIT(parser, &param);
	/* We assume parse returns successfully here. */
	parser->parse(&param);
	PARSER_DEINIT(parser, &param);
}
/*********************************************************************//**
Tokenize incoming text data and add to the sort buffer.
@see row_merge_buf_encode()
@return TRUE if the record passed, FALSE if out of space */
static
ibool
row_merge_fts_doc_tokenize(
/*=======================*/
	row_merge_buf_t**	sort_buf,	/*!< in/out: sort buffer */
	doc_id_t		doc_id,		/*!< in: Doc ID */
	fts_doc_t*		doc,		/*!< in: Doc to be tokenized */
	merge_file_t**		merge_file,	/*!< in/out: merge file */
	ibool			opt_doc_id_size,/*!< in: whether to use 4 bytes
						instead of 8 bytes integer to
						store Doc ID during sort*/
	fts_tokenize_ctx_t*	t_ctx)		/*!< in/out: tokenize context */
{
	ulint		inc = 0;
	fts_string_t	str;
	ulint		len;
	row_merge_buf_t* buf;
	dfield_t*	field;
	fts_string_t	t_str;
	ibool		buf_full = FALSE;
	byte		str_buf[FTS_MAX_WORD_LEN + 1];
	ulint		data_size[FTS_NUM_AUX_INDEX];
	ulint		n_tuple[FTS_NUM_AUX_INDEX];
	st_mysql_ftparser*	parser;

	t_str.f_n_char = 0;
	t_ctx->buf_used = 0;

	memset(n_tuple, 0, FTS_NUM_AUX_INDEX * sizeof(ulint));
	memset(data_size, 0, FTS_NUM_AUX_INDEX * sizeof(ulint));

	parser = sort_buf[0]->index->parser;

	/* Tokenize the data and add each word string, its corresponding
	doc id and position to sort buffer */
	while (t_ctx->processed_len < doc->text.f_len) {
		ulint		idx = 0;
		ulint		cur_len;
		doc_id_t	write_doc_id;
		row_fts_token_t* fts_token = NULL;

		if (parser != NULL) {
			if (t_ctx->processed_len == 0) {
				UT_LIST_INIT(t_ctx->fts_token_list,
					     &row_fts_token_t::token_list);

				/* Parse the whole doc and cache tokens */
				row_merge_fts_doc_tokenize_by_parser(doc,
					parser, t_ctx);

				/* Just indicate that we have parsed all
				the words */
				t_ctx->processed_len += 1;
			}

			/* Then get a token */
			fts_token = UT_LIST_GET_FIRST(t_ctx->fts_token_list);
			if (fts_token) {
				str.f_len = fts_token->text->f_len;
				str.f_n_char = fts_token->text->f_n_char;
				str.f_str = fts_token->text->f_str;
			} else {
				ut_ad(UT_LIST_GET_LEN(t_ctx->fts_token_list) == 0);
				/* Reach the end of the list */
				t_ctx->processed_len = doc->text.f_len;
				break;
			}
		} else {
			inc = innobase_mysql_fts_get_token(
				doc->charset,
				doc->text.f_str + t_ctx->processed_len,
				doc->text.f_str + doc->text.f_len, &str);

			ut_a(inc > 0);
		}
		/* Ignore string whose character number is less than
		"fts_min_token_size" or more than "fts_max_token_size" */
		if (!fts_check_token(&str, NULL, NULL)) {
			if (parser != NULL) {
				UT_LIST_REMOVE(t_ctx->fts_token_list, fts_token);
				ut_free(fts_token);
			} else {
				t_ctx->processed_len += inc;
			}

			continue;
		}

		t_str.f_len = innobase_fts_casedn_str(
			doc->charset, (char*) str.f_str, str.f_len,
			(char*) &str_buf, FTS_MAX_WORD_LEN + 1);

		t_str.f_str = (byte*) &str_buf;

		/* if "cached_stopword" is defined, ignore words in the
		stopword list */
		if (!fts_check_token(&str, t_ctx->cached_stopword,
				     doc->charset)) {
			if (parser != NULL) {
				UT_LIST_REMOVE(t_ctx->fts_token_list, fts_token);
				ut_free(fts_token);
			} else {
				t_ctx->processed_len += inc;
			}

			continue;
		}

		/* There are FTS_NUM_AUX_INDEX auxiliary tables, find
		out which sort buffer to put this word record in */
		t_ctx->buf_used = fts_select_index(
			doc->charset, t_str.f_str, t_str.f_len);

		buf = sort_buf[t_ctx->buf_used];

		ut_a(t_ctx->buf_used < FTS_NUM_AUX_INDEX);
		idx = t_ctx->buf_used;

		mtuple_t* mtuple = &buf->tuples[buf->n_tuples + n_tuple[idx]];

		field = mtuple->fields = static_cast<dfield_t*>(
			mem_heap_alloc(buf->heap,
				       FTS_NUM_FIELDS_SORT * sizeof *field));

		/* The first field is the tokenized word */
		dfield_set_data(field, t_str.f_str, t_str.f_len);
		len = dfield_get_len(field);

		dict_col_copy_type(
			dict_index_get_nth_col(buf->index, 0), &field->type);
		field->type.prtype |= DATA_NOT_NULL;
		ut_ad(len <= field->type.len);

		/* For the temporary file, row_merge_buf_encode() uses
		1 byte for representing the number of extra_size bytes.
		This number will always be 1, because for this 3-field index
		consisting of one variable-size column, extra_size will always
		be 1 or 2, which can be encoded in one byte.

		The extra_size is 1 byte if the length of the
		variable-length column is less than 128 bytes or the
		maximum length is less than 256 bytes. */

		/* One variable-length column: a word whose length is less
		than fts_max_token_size. Add one extra_size byte plus the
		byte that counts the extra_size bytes. Since the maximum
		FTS token length can now exceed 255 bytes, the length
		encoding itself must be signified: lengths of 1 to 128
		bytes fit in 1 byte; anything larger needs 2 bytes. */
		if (len < 128 || field->type.len < 256) {
			/* Extra size is one byte. */
			cur_len = 2 + len;
		} else {
			/* Extra size is two bytes. */
			cur_len = 3 + len;
		}
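		/* Worked example: a 100-byte token is stored as
		1 (extra_size count) + 1 (extra_size) + 100 = 102 bytes;
		a 200-byte token in a column whose maximum length is
		256 bytes or more takes 1 + 2 + 200 = 203 bytes. */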
		dfield_dup(field, buf->heap);
		field++;

		/* The second field is the Doc ID */

		ib_uint32_t	doc_id_32_bit;

		if (!opt_doc_id_size) {
			fts_write_doc_id((byte*) &write_doc_id, doc_id);

			dfield_set_data(
				field, &write_doc_id, sizeof(write_doc_id));
		} else {
			mach_write_to_4(
				(byte*) &doc_id_32_bit, (ib_uint32_t) doc_id);

			dfield_set_data(
				field, &doc_id_32_bit, sizeof(doc_id_32_bit));
		}

		len = field->len;
		ut_ad(len == FTS_DOC_ID_LEN || len == sizeof(ib_uint32_t));

		field->type.mtype = DATA_INT;
		field->type.prtype = DATA_NOT_NULL | DATA_BINARY_TYPE;
		field->type.len = len;
		field->type.mbminlen = 0;
		field->type.mbmaxlen = 0;

		cur_len += len;
		dfield_dup(field, buf->heap);

		++field;

		/* The third field is the position.
		MySQL 5.7 changed the fulltext parser plugin interface
		by adding MYSQL_FTPARSER_BOOLEAN_INFO::position.
		Below we assume that the field is always 0. */
		unsigned	pos = t_ctx->init_pos;
		byte		position[4];
		if (parser == NULL) {
			pos += t_ctx->processed_len + inc - str.f_len;
		}
		len = 4;
		mach_write_to_4(position, pos);
		dfield_set_data(field, &position, len);

		field->type.mtype = DATA_INT;
		field->type.prtype = DATA_NOT_NULL;
		field->type.len = len;
		field->type.mbminlen = 0;
		field->type.mbmaxlen = 0;
		cur_len += len;
		dfield_dup(field, buf->heap);

		/* Reserve one byte for the end marker of row_merge_block_t */
		if (buf->total_size + data_size[idx] + cur_len
		    >= srv_sort_buf_size - 1) {

			buf_full = TRUE;
			break;
		}

		/* Increment the number of tuples */
		n_tuple[idx]++;

		if (parser != NULL) {
			UT_LIST_REMOVE(t_ctx->fts_token_list, fts_token);
			ut_free(fts_token);
		} else {
			t_ctx->processed_len += inc;
		}

		data_size[idx] += cur_len;
	}

	/* Update the data length and the number of new word tuples
	added in this round of tokenization */
	for (ulint i = 0; i < FTS_NUM_AUX_INDEX; i++) {
		/* The computation of total_size below assumes that no
		delete-mark flags will be stored and that all fields
		are NOT NULL and fixed-length. */

		sort_buf[i]->total_size += data_size[i];

		sort_buf[i]->n_tuples += n_tuple[i];

		merge_file[i]->n_rec += n_tuple[i];
		t_ctx->rows_added[i] += n_tuple[i];
	}

	if (!buf_full) {
		/* we pad one byte between text across two fields */
		t_ctx->init_pos += doc->text.f_len + 1;
	}

	return(!buf_full);
}
/*********************************************************************//**
Get next doc item from fts_doc_list */
UNIV_INLINE
void
row_merge_fts_get_next_doc_item(
/*============================*/
	fts_psort_t*		psort_info,	/*!< in: psort_info */
	fts_doc_item_t**	doc_item)	/*!< in/out: doc item */
{
	if (*doc_item != NULL) {
		ut_free(*doc_item);
	}

	mutex_enter(&psort_info->mutex);

	*doc_item = UT_LIST_GET_FIRST(psort_info->fts_doc_list);
	if (*doc_item != NULL) {
		UT_LIST_REMOVE(psort_info->fts_doc_list, *doc_item);

		ut_ad(psort_info->memory_used >= sizeof(fts_doc_item_t)
		      + (*doc_item)->field->len);
		psort_info->memory_used -= sizeof(fts_doc_item_t)
			+ (*doc_item)->field->len;
	}

	mutex_exit(&psort_info->mutex);
}

/*********************************************************************//**
Function performs parallel tokenization of the incoming doc strings.
It also performs the initial in memory sort of the parsed records.
@return OS_THREAD_DUMMY_RETURN */
static
os_thread_ret_t
fts_parallel_tokenization(
/*======================*/
	void*		arg)	/*!< in: psort_info for the thread */
{
	fts_psort_t*		psort_info = (fts_psort_t*) arg;
	ulint			i;
	fts_doc_item_t*		doc_item = NULL;
	row_merge_buf_t**	buf;
	ibool			processed = FALSE;
	merge_file_t**		merge_file;
	row_merge_block_t**	block;
	row_merge_block_t**	crypt_block;
	int			tmpfd[FTS_NUM_AUX_INDEX];
	ulint			mycount[FTS_NUM_AUX_INDEX];
	ib_uint64_t		total_rec = 0;
	ulint			num_doc_processed = 0;
	doc_id_t		last_doc_id = 0;
	mem_heap_t*		blob_heap = NULL;
	fts_doc_t		doc;
	dict_table_t*		table = psort_info->psort_common->new_table;
	fts_tokenize_ctx_t	t_ctx;
	ulint			retried = 0;
	dberr_t			error = DB_SUCCESS;

	ut_ad(psort_info->psort_common->trx->mysql_thd != NULL);

	const char*		path = thd_innodb_tmpdir(
		psort_info->psort_common->trx->mysql_thd);

	ut_ad(psort_info);

	buf = psort_info->merge_buf;
	merge_file = psort_info->merge_file;
	blob_heap = mem_heap_create(512);
	memset(&doc, 0, sizeof(doc));
	memset(mycount, 0, FTS_NUM_AUX_INDEX * sizeof(ulint));

	doc.charset = fts_index_get_charset(
		psort_info->psort_common->dup->index);

	block = psort_info->merge_block;
	crypt_block = psort_info->crypt_block;

	const page_size_t&	page_size = dict_table_page_size(table);

	row_merge_fts_get_next_doc_item(psort_info, &doc_item);

	t_ctx.cached_stopword = table->fts->cache->stopword_info.cached_stopword;
	processed = TRUE;
loop:
	while (doc_item) {
		dfield_t*	dfield = doc_item->field;

		last_doc_id = doc_item->doc_id;

		ut_ad (dfield->data != NULL
		       && dfield_get_len(dfield) != UNIV_SQL_NULL);

		/* If we have finished processing the last item, update
		"doc" with the strings in the doc_item; otherwise continue
		processing the last item */
		if (processed) {
			byte*	data;
			ulint	data_len;

			dfield = doc_item->field;
			data = static_cast<byte*>(dfield_get_data(dfield));
			data_len = dfield_get_len(dfield);

			if (dfield_is_ext(dfield)) {
				doc.text.f_str =
					btr_copy_externally_stored_field(
						&doc.text.f_len, data,
						page_size, data_len, blob_heap);
			} else {
				doc.text.f_str = data;
				doc.text.f_len = data_len;
			}

			doc.tokens = 0;
			t_ctx.processed_len = 0;
		} else {
			/* Not yet finished processing the "doc" on hand,
			continue processing it */
			ut_ad(doc.text.f_str);
			ut_ad(t_ctx.processed_len < doc.text.f_len);
		}

		processed = row_merge_fts_doc_tokenize(
			buf, doc_item->doc_id, &doc,
			merge_file, psort_info->psort_common->opt_doc_id_size,
			&t_ctx);

		/* Current sort buffer full, need to recycle */
		if (!processed) {
			ut_ad(t_ctx.processed_len < doc.text.f_len);
			ut_ad(t_ctx.rows_added[t_ctx.buf_used]);
			break;
		}

		num_doc_processed++;

		if (fts_enable_diag_print && num_doc_processed % 10000 == 1) {
			ib::info() << "Number of documents processed: "
				<< num_doc_processed;
#ifdef FTS_INTERNAL_DIAG_PRINT
			for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
				ib::info() << "ID " << psort_info->psort_id
					<< ", partition " << i << ", word "
					<< mycount[i];
			}
#endif
		}

		mem_heap_empty(blob_heap);

		row_merge_fts_get_next_doc_item(psort_info, &doc_item);

		if (doc_item && last_doc_id != doc_item->doc_id) {
			t_ctx.init_pos = 0;
		}
	}

	/* If we run out of current sort buffer, need to sort
	and flush the sort buffer to disk */
	if (t_ctx.rows_added[t_ctx.buf_used] && !processed) {
		row_merge_buf_sort(buf[t_ctx.buf_used], NULL);
		row_merge_buf_write(buf[t_ctx.buf_used],
				    merge_file[t_ctx.buf_used],
				    block[t_ctx.buf_used]);

		if (!row_merge_write(merge_file[t_ctx.buf_used]->fd,
				     merge_file[t_ctx.buf_used]->offset++,
				     block[t_ctx.buf_used],
				     crypt_block[t_ctx.buf_used],
				     table->space)) {
			error = DB_TEMP_FILE_WRITE_FAIL;
			goto func_exit;
		}

		UNIV_MEM_INVALID(block[t_ctx.buf_used][0], srv_sort_buf_size);
		buf[t_ctx.buf_used] = row_merge_buf_empty(buf[t_ctx.buf_used]);
		mycount[t_ctx.buf_used] += t_ctx.rows_added[t_ctx.buf_used];
		t_ctx.rows_added[t_ctx.buf_used] = 0;

		ut_a(doc_item);
		goto loop;
	}

	/* The parent is done scanning; if all the docs have been
	processed, exit */
	if (psort_info->state == FTS_PARENT_COMPLETE) {
		if (UT_LIST_GET_LEN(psort_info->fts_doc_list) == 0) {
			goto exit;
		} else if (retried > 10000) {
			ut_ad(!doc_item);
			/* retried too many times and cannot get new record */
			ib::error() << "FTS parallel sort processed "
				<< num_doc_processed
				<< " records, the sort queue has "
				<< UT_LIST_GET_LEN(psort_info->fts_doc_list)
				<< " records, but the sort cannot fetch"
				" the next record";
			goto exit;
		}
	} else if (psort_info->state == FTS_PARENT_EXITING) {
		/* Parent abort */
		goto func_exit;
	}

	if (doc_item == NULL) {
		os_thread_yield();
	}

	row_merge_fts_get_next_doc_item(psort_info, &doc_item);

	if (doc_item != NULL) {
		if (last_doc_id != doc_item->doc_id) {
			t_ctx.init_pos = 0;
		}

		retried = 0;
	} else if (psort_info->state == FTS_PARENT_COMPLETE) {
		retried++;
	}

	goto loop;
exit:
	/* Do a final sort of the last (or latest) batch of records
	in block memory. Flush them to the temp file if the records
	cannot all be held in one block of memory */
	for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
		if (t_ctx.rows_added[i]) {
			row_merge_buf_sort(buf[i], NULL);
			row_merge_buf_write(
				buf[i], merge_file[i], block[i]);

			/* Write to temp file, only if records have
			been flushed to temp file before (offset > 0):

			The pseudo code for sort is following:

				while (there are rows) {
					tokenize rows, put result in block[]
					if (block[] runs out) {
						sort rows;
						write to temp file with
						row_merge_write();
						offset++;
					}
				}

				# write out the last batch
				if (offset > 0) {
					row_merge_write();
					offset++;
				} else {
					# no need to write anything
					offset stays as 0
				}

			so if merge_file[i]->offset is 0 when we come
			here for the last batch, the rows have never been
			flushed to the temp file; they can all be held in
			memory */
			if (merge_file[i]->offset != 0) {
				if (!row_merge_write(merge_file[i]->fd,
						merge_file[i]->offset++,
						block[i],
						crypt_block[i],
						table->space)) {
					error = DB_TEMP_FILE_WRITE_FAIL;
					goto func_exit;
				}

				UNIV_MEM_INVALID(block[i][0],
						 srv_sort_buf_size);

				if (crypt_block[i]) {
					UNIV_MEM_INVALID(crypt_block[i][0],
							 srv_sort_buf_size);
				}
			}

			buf[i] = row_merge_buf_empty(buf[i]);
			t_ctx.rows_added[i] = 0;
		}
	}

	if (fts_enable_diag_print) {
		DEBUG_FTS_SORT_PRINT(" InnoDB_FTS: start merge sort\n");
	}

	for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
		if (!merge_file[i]->offset) {
			continue;
		}

		tmpfd[i] = row_merge_file_create_low(path);
		if (tmpfd[i] < 0) {
			error = DB_OUT_OF_MEMORY;
			goto func_exit;
		}

		error = row_merge_sort(psort_info->psort_common->trx,
				       psort_info->psort_common->dup,
				       merge_file[i], block[i], &tmpfd[i],
				       false, 0.0/* pct_progress */,
				       0.0/* pct_cost */,
				       crypt_block[i], table->space);

		if (error != DB_SUCCESS) {
			close(tmpfd[i]);
			goto func_exit;
		}

		total_rec += merge_file[i]->n_rec;
		close(tmpfd[i]);
	}

func_exit:
	if (fts_enable_diag_print) {
		DEBUG_FTS_SORT_PRINT(" InnoDB_FTS: complete merge sort\n");
	}

	mem_heap_free(blob_heap);

	mutex_enter(&psort_info->mutex);
	psort_info->error = error;
	mutex_exit(&psort_info->mutex);

	if (UT_LIST_GET_LEN(psort_info->fts_doc_list) > 0) {
		/* child can exit either with error or told by parent. */
		ut_ad(error != DB_SUCCESS
		      || psort_info->state == FTS_PARENT_EXITING);
	}

	/* Free fts doc list in case of error. */
	do {
		row_merge_fts_get_next_doc_item(psort_info, &doc_item);
	} while (doc_item != NULL);

	psort_info->child_status = FTS_CHILD_COMPLETE;
	os_event_set(psort_info->psort_common->sort_event);
	psort_info->child_status = FTS_CHILD_EXITING;

	os_thread_exit();

	OS_THREAD_DUMMY_RETURN;
}
  890. /*********************************************************************//**
  891. Start the parallel tokenization and parallel merge sort */
  892. void
  893. row_fts_start_psort(
  894. /*================*/
  895. fts_psort_t* psort_info) /*!< parallel sort structure */
  896. {
  897. ulint i = 0;
  898. os_thread_id_t thd_id;
  899. for (i = 0; i < fts_sort_pll_degree; i++) {
  900. psort_info[i].psort_id = i;
  901. psort_info[i].thread_hdl =
  902. os_thread_create(fts_parallel_tokenization,
  903. (void*) &psort_info[i],
  904. &thd_id);
  905. }
  906. }
/*********************************************************************//**
Function performs the merge and insertion of the sorted records.
@return OS_THREAD_DUMMY_RETURN */
static
os_thread_ret_t
fts_parallel_merge(
/*===============*/
	void*		arg)	/*!< in: parallel merge info */
{
	fts_psort_t*	psort_info = (fts_psort_t*) arg;
	ulint		id;

	ut_ad(psort_info);

	id = psort_info->psort_id;

	row_fts_merge_insert(psort_info->psort_common->dup->index,
			     psort_info->psort_common->new_table,
			     psort_info->psort_common->all_info, id);

	psort_info->child_status = FTS_CHILD_COMPLETE;
	os_event_set(psort_info->psort_common->merge_event);
	psort_info->child_status = FTS_CHILD_EXITING;

	os_thread_exit(false);

	OS_THREAD_DUMMY_RETURN;
}
/*********************************************************************//**
Kick off the parallel merge and insert thread */
void
row_fts_start_parallel_merge(
/*=========================*/
	fts_psort_t*	merge_info)	/*!< in: parallel sort info */
{
	int		i = 0;

	/* Kick off merge/insert threads */
	for (i = 0; i < FTS_NUM_AUX_INDEX; i++) {
		merge_info[i].psort_id = i;
		merge_info[i].child_status = 0;

		merge_info[i].thread_hdl = os_thread_create(
			fts_parallel_merge,
			(void*) &merge_info[i],
			&merge_info[i].thread_hdl);
	}
}
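/* For orientation: two thread pools are involved here.
row_fts_start_psort() spawns fts_sort_pll_degree tokenization threads
(fts_parallel_tokenization), while row_fts_start_parallel_merge() spawns
one fts_parallel_merge thread per auxiliary table (FTS_NUM_AUX_INDEX).
Both kinds of worker signal completion the same way, as seen above:
set child_status = FTS_CHILD_COMPLETE, fire the corresponding os_event,
then set FTS_CHILD_EXITING and exit the thread. */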
/**
Write out a single word's data as new entry/entries in the INDEX table.
@param[in]	ins_ctx		insert context
@param[in]	word		word string
@param[in]	node		node columns
@return DB_SUCCESS if insertion runs fine, otherwise error code */
static
dberr_t
row_merge_write_fts_node(
	const fts_psort_insert_t*	ins_ctx,
	const fts_string_t*		word,
	const fts_node_t*		node)
{
	dtuple_t*	tuple;
	dfield_t*	field;
	dberr_t		ret = DB_SUCCESS;
	doc_id_t	write_first_doc_id[8];
	doc_id_t	write_last_doc_id[8];
	ib_uint32_t	write_doc_count;

	tuple = ins_ctx->tuple;

	/* The first field is the tokenized word */
	field = dtuple_get_nth_field(tuple, 0);
	dfield_set_data(field, word->f_str, word->f_len);

	/* The second field is first_doc_id */
	field = dtuple_get_nth_field(tuple, 1);
	fts_write_doc_id((byte*) &write_first_doc_id, node->first_doc_id);
	dfield_set_data(field, &write_first_doc_id, sizeof(doc_id_t));

	/* The third and fourth fields (TRX_ID, ROLL_PTR) are filled
	already. */

	/* The fifth field is last_doc_id */
	field = dtuple_get_nth_field(tuple, 4);
	fts_write_doc_id((byte*) &write_last_doc_id, node->last_doc_id);
	dfield_set_data(field, &write_last_doc_id, sizeof(doc_id_t));

	/* The sixth field is doc_count */
	field = dtuple_get_nth_field(tuple, 5);
	mach_write_to_4((byte*) &write_doc_count,
			(ib_uint32_t) node->doc_count);
	dfield_set_data(field, &write_doc_count, sizeof(ib_uint32_t));

	/* The seventh field is ilist */
	field = dtuple_get_nth_field(tuple, 6);
	dfield_set_data(field, node->ilist, node->ilist_size);

	ret = ins_ctx->btr_bulk->insert(tuple);

	return(ret);
}
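/* For illustration, the tuple assembled above maps onto the FTS
auxiliary index columns in field order (a sketch derived from this
function and row_fts_merge_insert(), not authoritative documentation
of the on-disk format):

	0  word           tokenized word, word->f_len bytes
	1  first_doc_id   written via fts_write_doc_id()
	2  TRX_ID         6 bytes, pre-filled in row_fts_merge_insert()
	3  ROLL_PTR       7 bytes, pre-filled likewise
	4  last_doc_id    written via fts_write_doc_id()
	5  doc_count      4 bytes, mach_write_to_4()
	6  ilist          node->ilist_size bytes of encoded doc ids
	                  and word positions */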
/********************************************************************//**
Insert processed FTS data to auxiliary index tables.
@return DB_SUCCESS if insertion runs fine */
static MY_ATTRIBUTE((nonnull))
dberr_t
row_merge_write_fts_word(
/*=====================*/
	fts_psort_insert_t*	ins_ctx,	/*!< in: insert context */
	fts_tokenizer_word_t*	word)		/*!< in: sorted and tokenized
						word */
{
	dberr_t	ret = DB_SUCCESS;

	ut_ad(ins_ctx->aux_index_id == fts_select_index(
		ins_ctx->charset, word->text.f_str, word->text.f_len));

	/* Pop out each fts_node in word->nodes and write it to the
	auxiliary table */
	for (ulint i = 0; i < ib_vector_size(word->nodes); i++) {
		dberr_t		error;
		fts_node_t*	fts_node;

		fts_node = static_cast<fts_node_t*>(
			ib_vector_get(word->nodes, i));

		error = row_merge_write_fts_node(ins_ctx, &word->text,
						 fts_node);

		if (error != DB_SUCCESS) {
			ib::error() << "Failed to write word "
				<< word->text.f_str << " to FTS auxiliary"
				" index table, error (" << ut_strerr(error)
				<< ")";
			ret = error;
		}

		ut_free(fts_node->ilist);
		fts_node->ilist = NULL;
	}

	ib_vector_reset(word->nodes);

	return(ret);
}
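/* Note on the error handling above: a failed node write is logged but
the loop keeps going, so every ilist buffer is still freed and the node
vector is reset; the last error seen is what gets returned to the
caller. */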
/*********************************************************************//**
Read sorted FTS data files and insert data tuples to auxiliary tables. */
static
void
row_fts_insert_tuple(
/*=================*/
	fts_psort_insert_t*
			ins_ctx,	/*!< in: insert context */
	fts_tokenizer_word_t* word,	/*!< in: last processed
					tokenized word */
	ib_vector_t*	positions,	/*!< in: word position */
	doc_id_t*	in_doc_id,	/*!< in: last item doc id */
	dtuple_t*	dtuple)		/*!< in: entry to insert */
{
	fts_node_t*	fts_node = NULL;
	dfield_t*	dfield;
	doc_id_t	doc_id;
	ulint		position;
	fts_string_t	token_word;
	ulint		i;

	/* Get fts_node for the FTS auxiliary INDEX table */
	if (ib_vector_size(word->nodes) > 0) {
		fts_node = static_cast<fts_node_t*>(
			ib_vector_last(word->nodes));
	}

	if (fts_node == NULL
	    || fts_node->ilist_size > FTS_ILIST_MAX_SIZE) {

		fts_node = static_cast<fts_node_t*>(
			ib_vector_push(word->nodes, NULL));

		memset(fts_node, 0x0, sizeof(*fts_node));
	}

	/* If dtuple == NULL, this is the last word to be processed */
	if (!dtuple) {
		if (fts_node && ib_vector_size(positions) > 0) {
			fts_cache_node_add_positions(
				NULL, fts_node, *in_doc_id,
				positions);

			/* Write out the current word */
			row_merge_write_fts_word(ins_ctx, word);
		}

		return;
	}

	/* Get the first field for the tokenized word */
	dfield = dtuple_get_nth_field(dtuple, 0);

	token_word.f_n_char = 0;
	token_word.f_len = dfield->len;
	token_word.f_str = static_cast<byte*>(dfield_get_data(dfield));

	if (!word->text.f_str) {
		fts_string_dup(&word->text, &token_word, ins_ctx->heap);
	}

	/* Compare to the last word, to see if they are the same word */
	if (innobase_fts_text_cmp(ins_ctx->charset,
				  &word->text, &token_word) != 0) {
		ulint	num_item;

		/* Getting a new word; flush the last position info
		for the current word in fts_node */
		if (ib_vector_size(positions) > 0) {
			fts_cache_node_add_positions(
				NULL, fts_node, *in_doc_id, positions);
		}

		/* Write out the current word */
		row_merge_write_fts_word(ins_ctx, word);

		/* Copy the new word */
		fts_string_dup(&word->text, &token_word, ins_ctx->heap);

		num_item = ib_vector_size(positions);

		/* Clean up position queue */
		for (i = 0; i < num_item; i++) {
			ib_vector_pop(positions);
		}

		/* Reset Doc ID */
		*in_doc_id = 0;
		memset(fts_node, 0x0, sizeof(*fts_node));
	}

	/* Get the word's Doc ID */
	dfield = dtuple_get_nth_field(dtuple, 1);

	if (!ins_ctx->opt_doc_id_size) {
		doc_id = fts_read_doc_id(
			static_cast<byte*>(dfield_get_data(dfield)));
	} else {
		doc_id = (doc_id_t) mach_read_from_4(
			static_cast<byte*>(dfield_get_data(dfield)));
	}

	/* Get the word's position info */
	dfield = dtuple_get_nth_field(dtuple, 2);
	position = mach_read_from_4(
		static_cast<byte*>(dfield_get_data(dfield)));

	/* If this is the same word as the last word, and they
	have the same Doc ID, we just need to add its position
	info. Otherwise, we will flush position info to the
	fts_node and initiate a new position vector */
	if (!(*in_doc_id) || *in_doc_id == doc_id) {
		ib_vector_push(positions, &position);
	} else {
		ulint	num_pos = ib_vector_size(positions);

		fts_cache_node_add_positions(NULL, fts_node,
					     *in_doc_id, positions);
		for (i = 0; i < num_pos; i++) {
			ib_vector_pop(positions);
		}
		ib_vector_push(positions, &position);
	}

	/* Record the current Doc ID */
	*in_doc_id = doc_id;
}
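/* For illustration, given sorted tuples (word, doc_id, position) such
as (w1, 10, 3), (w1, 10, 7), (w1, 12, 1), (w2, 10, 2), the function
above accumulates positions 3 and 7 in 'positions' while the word and
Doc ID repeat, flushes them into the current fts_node when the Doc ID
changes to 12, and flushes the whole word to the auxiliary table via
row_merge_write_fts_word() when the word changes to w2.  A NULL dtuple
flushes whatever is pending for the final word. */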
/*********************************************************************//**
Propagate a newly added record up one level in the selection tree
@return parent where this value propagated to */
static
int
row_fts_sel_tree_propagate(
/*=======================*/
	int		propagated,	/*!< in: tree node propagated */
	int*		sel_tree,	/*!< in: selection tree */
	const mrec_t**	mrec,		/*!< in: sort record */
	ulint**		offsets,	/*!< in: record offsets */
	dict_index_t*	index)		/*!< in/out: FTS index */
{
	ulint	parent;
	int	child_left;
	int	child_right;
	int	selected;

	/* Find which parent this value will be propagated to */
	parent = (propagated - 1) / 2;

	/* Find out which value is smaller, and propagate it */
	child_left = sel_tree[parent * 2 + 1];
	child_right = sel_tree[parent * 2 + 2];

	if (child_left == -1 || mrec[child_left] == NULL) {
		if (child_right == -1
		    || mrec[child_right] == NULL) {
			selected = -1;
		} else {
			selected = child_right;
		}
	} else if (child_right == -1
		   || mrec[child_right] == NULL) {
		selected = child_left;
	} else if (cmp_rec_rec_simple(mrec[child_left], mrec[child_right],
				      offsets[child_left],
				      offsets[child_right],
				      index, NULL) < 0) {
		selected = child_left;
	} else {
		selected = child_right;
	}

	sel_tree[parent] = selected;

	return(static_cast<int>(parent));
}
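/* For illustration, the selection tree uses the implicit binary-heap
layout seen in the arithmetic above: node i has children at 2*i + 1 and
2*i + 2, and the parent of node i is (i - 1) / 2.  E.g. with seven
slots:

	                sel_tree[0]
	           /                \
	     sel_tree[1]        sel_tree[2]
	      /      \           /      \
	   [3]      [4]       [5]      [6]

Leaf slots hold merge-stream numbers (or -1, or a stream whose mrec is
NULL once exhausted); each internal slot caches the winner of its two
children, so the overall minimum is always available at sel_tree[0]. */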
/*********************************************************************//**
Readjust selection tree after popping the root and reading a new value
@return the new root */
static
int
row_fts_sel_tree_update(
/*====================*/
	int*		sel_tree,	/*!< in/out: selection tree */
	ulint		propagated,	/*!< in: node to propagate up */
	ulint		height,		/*!< in: tree height */
	const mrec_t**	mrec,		/*!< in: sort record */
	ulint**		offsets,	/*!< in: record offsets */
	dict_index_t*	index)		/*!< in: index dictionary */
{
	ulint	i;

	for (i = 1; i <= height; i++) {
		propagated = static_cast<ulint>(row_fts_sel_tree_propagate(
			static_cast<int>(propagated),
			sel_tree, mrec, offsets, index));
	}

	return(sel_tree[0]);
}
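/* For illustration: after the root's stream yields a record, the
caller re-reads that stream and calls row_fts_sel_tree_update() on the
affected leaf; the loop above re-runs the propagation once per tree
level, so each record drawn from the merge costs O(height), i.e.
O(log fts_sort_pll_degree), comparisons. */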
/*********************************************************************//**
Build selection tree at a specified level */
static
void
row_fts_build_sel_tree_level(
/*=========================*/
	int*		sel_tree,	/*!< in/out: selection tree */
	ulint		level,		/*!< in: selection tree level */
	const mrec_t**	mrec,		/*!< in: sort record */
	ulint**		offsets,	/*!< in: record offsets */
	dict_index_t*	index)		/*!< in: index dictionary */
{
	ulint	start;
	int	child_left;
	int	child_right;
	ulint	i;
	ulint	num_item = ulint(1) << level;

	start = num_item - 1;

	for (i = 0; i < num_item; i++) {
		child_left = sel_tree[(start + i) * 2 + 1];
		child_right = sel_tree[(start + i) * 2 + 2];

		if (child_left == -1) {
			if (child_right == -1) {
				sel_tree[start + i] = -1;
			} else {
				sel_tree[start + i] = child_right;
			}
			continue;
		} else if (child_right == -1) {
			sel_tree[start + i] = child_left;
			continue;
		}

		/* Deal with NULL child conditions */
		if (!mrec[child_left]) {
			if (!mrec[child_right]) {
				sel_tree[start + i] = -1;
			} else {
				sel_tree[start + i] = child_right;
			}
			continue;
		} else if (!mrec[child_right]) {
			sel_tree[start + i] = child_left;
			continue;
		}

		/* Select the smaller one to set the parent pointer */
		int cmp = cmp_rec_rec_simple(
			mrec[child_left], mrec[child_right],
			offsets[child_left], offsets[child_right],
			index, NULL);

		sel_tree[start + i] = cmp < 0 ? child_left : child_right;
	}
}
/*********************************************************************//**
Build a selection tree for merge. The selection tree is a binary tree
and has ceil(log2(fts_sort_pll_degree)) levels, with the root as level 0.
@return number of tree levels */
static
ulint
row_fts_build_sel_tree(
/*===================*/
	int*		sel_tree,	/*!< in/out: selection tree */
	const mrec_t**	mrec,		/*!< in: sort record */
	ulint**		offsets,	/*!< in: record offsets */
	dict_index_t*	index)		/*!< in: index dictionary */
{
	ulint	treelevel = 1;
	ulint	num = 2;
	int	i = 0;
	ulint	start;

	/* No need to build a selection tree if we only have two merge
	threads */
	if (fts_sort_pll_degree <= 2) {
		return(0);
	}

	while (num < fts_sort_pll_degree) {
		num = num << 1;
		treelevel++;
	}

	start = (ulint(1) << treelevel) - 1;

	for (i = 0; i < (int) fts_sort_pll_degree; i++) {
		sel_tree[i + start] = i;
	}

	for (i = static_cast<int>(treelevel) - 1; i >= 0; i--) {
		row_fts_build_sel_tree_level(
			sel_tree, static_cast<ulint>(i), mrec, offsets, index);
	}

	return(treelevel);
}
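/* Worked example: with fts_sort_pll_degree == 4, the doubling loop
above leaves treelevel == 2 and start == (1 << 2) - 1 == 3, so streams
0..3 are placed in the leaf slots sel_tree[3..6], and levels 1 and 0
are then filled bottom-up by row_fts_build_sel_tree_level().  The tree
needs 2 * fts_sort_pll_degree - 1 slots in total; the caller in
row_fts_merge_insert() allocates fts_sort_pll_degree * 2 ints, which
covers that. */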
/*********************************************************************//**
Read sorted files containing index data tuples and insert these data
tuples to the index
@return DB_SUCCESS or error number */
dberr_t
row_fts_merge_insert(
/*=================*/
	dict_index_t*	index,		/*!< in: index */
	dict_table_t*	table,		/*!< in: new table */
	fts_psort_t*	psort_info,	/*!< parallel sort info */
	ulint		id)		/*!< in: which auxiliary table's data
					to insert to */
{
	const byte**		b;
	mem_heap_t*		tuple_heap;
	mem_heap_t*		heap;
	dberr_t			error = DB_SUCCESS;
	ulint*			foffs;
	ulint**			offsets;
	fts_tokenizer_word_t	new_word;
	ib_vector_t*		positions;
	doc_id_t		last_doc_id;
	ib_alloc_t*		heap_alloc;
	ulint			i;
	mrec_buf_t**		buf;
	int*			fd;
	byte**			block;
	byte**			crypt_block;
	const mrec_t**		mrec;
	ulint			count = 0;
	int*			sel_tree;
	ulint			height;
	ulint			start;
	fts_psort_insert_t	ins_ctx;
	ulint			count_diag = 0;
	fts_table_t		fts_table;
	char			aux_table_name[MAX_FULL_NAME_LEN];
	dict_table_t*		aux_table;
	dict_index_t*		aux_index;
	trx_t*			trx;
	byte			trx_id_buf[6];
	roll_ptr_t		roll_ptr = 0;
	dfield_t*		field;

	ut_ad(index);
	ut_ad(table);

	/* We use the insert query graph as the dummy graph
	needed in the row module call */

	trx = trx_allocate_for_background();
	trx_start_if_not_started(trx, true);

	trx->op_info = "inserting index entries";

	ins_ctx.opt_doc_id_size = psort_info[0].psort_common->opt_doc_id_size;

	heap = mem_heap_create(500 + sizeof(mrec_buf_t));

	b = (const byte**) mem_heap_alloc(
		heap, sizeof(*b) * fts_sort_pll_degree);
	foffs = (ulint*) mem_heap_alloc(
		heap, sizeof(*foffs) * fts_sort_pll_degree);
	offsets = (ulint**) mem_heap_alloc(
		heap, sizeof(*offsets) * fts_sort_pll_degree);
	buf = (mrec_buf_t**) mem_heap_alloc(
		heap, sizeof(*buf) * fts_sort_pll_degree);
	fd = (int*) mem_heap_alloc(heap, sizeof(*fd) * fts_sort_pll_degree);
	block = (byte**) mem_heap_alloc(
		heap, sizeof(*block) * fts_sort_pll_degree);
	crypt_block = (byte**) mem_heap_alloc(
		heap, sizeof(*crypt_block) * fts_sort_pll_degree);
	mrec = (const mrec_t**) mem_heap_alloc(
		heap, sizeof(*mrec) * fts_sort_pll_degree);
	sel_tree = (int*) mem_heap_alloc(
		heap, sizeof(*sel_tree) * (fts_sort_pll_degree * 2));

	tuple_heap = mem_heap_create(1000);

	ins_ctx.charset = fts_index_get_charset(index);
	ins_ctx.heap = heap;

	for (i = 0; i < fts_sort_pll_degree; i++) {
		ulint	num;

		num = 1 + REC_OFFS_HEADER_SIZE
			+ dict_index_get_n_fields(index);
		offsets[i] = static_cast<ulint*>(mem_heap_zalloc(
			heap, num * sizeof *offsets[i]));
		offsets[i][0] = num;
		offsets[i][1] = dict_index_get_n_fields(index);
		block[i] = psort_info[i].merge_block[id];
		crypt_block[i] = psort_info[i].crypt_block[id];
		b[i] = psort_info[i].merge_block[id];
		fd[i] = psort_info[i].merge_file[id]->fd;
		foffs[i] = 0;

		buf[i] = static_cast<mrec_buf_t*>(
			mem_heap_alloc(heap, sizeof *buf[i]));
		count_diag += (int) psort_info[i].merge_file[id]->n_rec;
	}
	if (fts_enable_diag_print) {
		ib::info() << "InnoDB_FTS: to insert " << count_diag
			<< " records";
	}

	/* Initialize related variables if creating FTS indexes */
	heap_alloc = ib_heap_allocator_create(heap);

	memset(&new_word, 0, sizeof(new_word));

	new_word.nodes = ib_vector_create(heap_alloc, sizeof(fts_node_t), 4);
	positions = ib_vector_create(heap_alloc, sizeof(ulint), 32);
	last_doc_id = 0;

	/* We should set the flags2 with aux_table_name here,
	in order to get the correct aux table names. */
	index->table->flags2 |= DICT_TF2_FTS_AUX_HEX_NAME;
	DBUG_EXECUTE_IF("innodb_test_wrong_fts_aux_table_name",
			index->table->flags2 &= ~DICT_TF2_FTS_AUX_HEX_NAME;);
	fts_table.type = FTS_INDEX_TABLE;
	fts_table.index_id = index->id;
	fts_table.table_id = table->id;
	fts_table.parent = index->table->name.m_name;
	fts_table.table = index->table;
	fts_table.suffix = fts_get_suffix(id);

	/* Get aux index */
	fts_get_table_name(&fts_table, aux_table_name);
	aux_table = dict_table_open_on_name(aux_table_name, FALSE, FALSE,
					    DICT_ERR_IGNORE_NONE);
	ut_ad(aux_table != NULL);
	dict_table_close(aux_table, FALSE, FALSE);
	aux_index = dict_table_get_first_index(aux_table);

	/* Create bulk load instance */
	ins_ctx.btr_bulk = UT_NEW_NOKEY(
		BtrBulk(aux_index, trx,
			psort_info[0].psort_common->trx->flush_observer));

	/* Create tuple for insert */
	ins_ctx.tuple = dtuple_create(heap, dict_index_get_n_fields(aux_index));

	dict_index_copy_types(ins_ctx.tuple, aux_index,
			      dict_index_get_n_fields(aux_index));

	/* Set TRX_ID and ROLL_PTR */
	trx_write_trx_id(trx_id_buf, trx->id);
	field = dtuple_get_nth_field(ins_ctx.tuple, 2);
	dfield_set_data(field, &trx_id_buf, 6);

	field = dtuple_get_nth_field(ins_ctx.tuple, 3);
	dfield_set_data(field, &roll_ptr, 7);

#ifdef UNIV_DEBUG
	ins_ctx.aux_index_id = id;
#endif
	const ulint space = table->space;
	for (i = 0; i < fts_sort_pll_degree; i++) {
		if (psort_info[i].merge_file[id]->n_rec == 0) {
			/* No rows to read */
			mrec[i] = b[i] = NULL;
		} else {
			/* Read from the temp file only if it has been
			written to. Otherwise, block memory holds
			all the sorted records */
			if (psort_info[i].merge_file[id]->offset > 0
			    && (!row_merge_read(
					fd[i], foffs[i],
					(row_merge_block_t*) block[i],
					(row_merge_block_t*) crypt_block[i],
					space))) {
				error = DB_CORRUPTION;
				goto exit;
			}

			ROW_MERGE_READ_GET_NEXT(i);
		}
	}
	height = row_fts_build_sel_tree(sel_tree, (const mrec_t**) mrec,
					offsets, index);

	start = (1 << height) - 1;

	/* Fetch sorted records from sort buffers and insert them into
	the corresponding FTS index auxiliary tables */
	for (;;) {
		dtuple_t*	dtuple;
		ulint		n_ext;
		int		min_rec = 0;

		if (fts_sort_pll_degree <= 2) {
			while (!mrec[min_rec]) {
				min_rec++;

				if (min_rec >= (int) fts_sort_pll_degree) {
					row_fts_insert_tuple(
						&ins_ctx, &new_word,
						positions, &last_doc_id,
						NULL);

					goto exit;
				}
			}

			for (i = min_rec + 1; i < fts_sort_pll_degree; i++) {
				if (!mrec[i]) {
					continue;
				}

				if (cmp_rec_rec_simple(
					    mrec[i], mrec[min_rec],
					    offsets[i], offsets[min_rec],
					    index, NULL) < 0) {
					min_rec = static_cast<int>(i);
				}
			}
		} else {
			min_rec = sel_tree[0];

			if (min_rec == -1) {
				row_fts_insert_tuple(
					&ins_ctx, &new_word,
					positions, &last_doc_id,
					NULL);

				goto exit;
			}
		}

		dtuple = row_rec_to_index_entry_low(
			mrec[min_rec], index, offsets[min_rec], &n_ext,
			tuple_heap);

		row_fts_insert_tuple(
			&ins_ctx, &new_word, positions,
			&last_doc_id, dtuple);

		ROW_MERGE_READ_GET_NEXT(min_rec);

		if (fts_sort_pll_degree > 2) {
			if (!mrec[min_rec]) {
				sel_tree[start + min_rec] = -1;
			}

			row_fts_sel_tree_update(sel_tree, start + min_rec,
						height, mrec,
						offsets, index);
		}

		count++;

		mem_heap_empty(tuple_heap);
	}
exit:
	fts_sql_commit(trx);

	trx->op_info = "";

	mem_heap_free(tuple_heap);

	error = ins_ctx.btr_bulk->finish(error);
	UT_DELETE(ins_ctx.btr_bulk);

	trx_free_for_background(trx);

	mem_heap_free(heap);

	if (fts_enable_diag_print) {
		ib::info() << "InnoDB_FTS: inserted " << count << " records";
	}

	return(error);
}
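/* End-to-end sketch of the parallel FTS index build as seen in this
file (a reading aid, not a specification): tokenization threads sort
and spill their batches per auxiliary index, run row_merge_sort() on
each spill file, and signal sort_event; merge threads then each call
row_fts_merge_insert(), which k-way merges the fts_sort_pll_degree
sorted streams for one auxiliary table (selection tree when the degree
exceeds two, linear scan otherwise) and bulk-loads the result via
BtrBulk. */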