
1130 lines
28 KiB

branches/zip: Implement the reporting of duplicate key values to MySQL.

innobase_rec_to_mysql(): New function, for converting an InnoDB
clustered index record to MySQL table->record[0].
TODO: convert integer fields.  Currently, integer fields are in
big-endian byte order instead of host byte order, and signed integer
fields are offset by 0x80000000.

innobase_rec_reset(): New function, for resetting table->record[0].

row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL
table handle) for reporting duplicate key values.

dtuple_from_fields(): New function, to convert an array of dfield_t*
to dtuple_t.

dtuple_get_n_ext(): New function, to compute the number of externally
stored fields.

row_merge_dup_t: Structure for counting and reporting duplicate records.

row_merge_dup_report(): Function for counting and reporting duplicate
records.

row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup
parameter with row_merge_dup_t* dup.

row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is
NULL when sorting a non-unique index.

row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(),
row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(),
row_merge(), row_merge_sort(): Add const qualifiers.

row_merge_read_clustered_index(): Use a common error handling branch
err_exit.  Invoke row_merge_buf_sort() differently on unique indexes.

row_merge_blocks(): note TODO: We could invoke innobase_rec_to_mysql()
to report duplicate key values when creating a clustered index.
18 years ago
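The duplicate-reporting scheme the commit message describes can be sketched in isolation: when sorting the build buffer for a unique index, adjacent equal keys are counted and reported, while a non-unique index passes NULL and skips the scan. This is only an illustrative sketch; the names `dup_counter`, `report_dup`, and `buf_sort` are hypothetical stand-ins for `row_merge_dup_t`, `row_merge_dup_report()`, and `row_merge_buf_sort()`, and the real code reports the record via innobase_rec_to_mysql() rather than remembering a key.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

/* Hypothetical stand-in for row_merge_dup_t: carries the running
duplicate count and the first offending key value. */
struct dup_counter {
	unsigned	n_dup;		/* number of duplicates seen */
	int		first_dup_key;	/* first duplicate key, or -1 */
};

/* Hypothetical stand-in for row_merge_dup_report(): count the
duplicate and remember the first offending key value. */
static void report_dup(dup_counter* dup, int key)
{
	if (dup->n_dup++ == 0) {
		dup->first_dup_key = key;
	}
}

/* Sketch of row_merge_buf_sort(): sort the buffer, and when dup is
non-NULL (unique index) scan adjacent entries for equal keys. */
static void buf_sort(std::vector<int>& keys, dup_counter* dup)
{
	std::sort(keys.begin(), keys.end());

	if (dup != NULL) {
		for (std::size_t i = 1; i < keys.size(); i++) {
			if (keys[i] == keys[i - 1]) {
				report_dup(dup, keys[i]);
			}
		}
	}
}
```

For a non-unique index the caller simply passes NULL for `dup`, which matches the commit's description of the new `row_merge_dup_t* dup` parameter of row_merge_buf_sort().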
/******************************************************
Smart ALTER TABLE
(c) 2005-2007 Innobase Oy
*******************************************************/

#include <mysql_priv.h>
#include <mysqld_error.h>

extern "C" {
#include "log0log.h"
#include "row0merge.h"
#include "srv0srv.h"
#include "trx0trx.h"
#include "trx0roll.h"
#include "ha_prototypes.h"
#include "handler0alter.h"
}

#include "ha_innodb.h"
/*****************************************************************
Copies an InnoDB column to a MySQL field.  This function is
adapted from row_sel_field_store_in_mysql_format(). */
static
void
innobase_col_to_mysql(
/*==================*/
	const dict_col_t*	col,	/* in: InnoDB column */
	const uchar*		data,	/* in: InnoDB column data */
	ulint			len,	/* in: length of data, in bytes */
	Field*			field)	/* in/out: MySQL field */
{
	uchar*	ptr;
	uchar*	dest	= field->ptr;
	ulint	flen	= field->pack_length();

	switch (col->mtype) {
	case DATA_INT:
		ut_ad(len == flen);

		/* Convert integer data from Innobase to little-endian
		format, sign bit restored to normal */

		for (ptr = dest + len; ptr != dest; ) {
			*--ptr = *data++;
		}

		if (!(field->flags & UNSIGNED_FLAG)) {
			((byte*) dest)[len - 1] ^= 0x80;
		}

		break;

	case DATA_VARCHAR:
	case DATA_VARMYSQL:
	case DATA_BINARY:
		field->reset();

		if (field->type() == MYSQL_TYPE_VARCHAR) {
			/* This is a >= 5.0.3 type true VARCHAR.  Store the
			length of the data to the first byte or the first
			two bytes of dest. */

			dest = row_mysql_store_true_var_len(
				dest, len, flen - field->key_length());
		}

		/* Copy the actual data */
		memcpy(dest, data, len);
		break;

	case DATA_BLOB:
		/* Store a pointer to the BLOB buffer to dest: the BLOB was
		already copied to the buffer in row_sel_store_mysql_rec */

		row_mysql_store_blob_ref(dest, flen, data, len);
		break;

#ifdef UNIV_DEBUG
	case DATA_MYSQL:
		ut_ad(flen >= len);
		ut_ad(col->mbmaxlen >= col->mbminlen);
		ut_ad(col->mbmaxlen > col->mbminlen || flen == len);
		memcpy(dest, data, len);
		break;

	default:
	case DATA_SYS_CHILD:
	case DATA_SYS:
		/* These column types should never be shipped to MySQL. */
		ut_ad(0);

	case DATA_CHAR:
	case DATA_FIXBINARY:
	case DATA_FLOAT:
	case DATA_DOUBLE:
	case DATA_DECIMAL:
		/* Above are the valid column types for MySQL data. */
		ut_ad(flen == len);
#else /* UNIV_DEBUG */
	default:
#endif /* UNIV_DEBUG */
		memcpy(dest, data, len);
	}
}
/*****************************************************************
Copies an InnoDB record to table->record[0]. */
extern "C"
void
innobase_rec_to_mysql(
/*==================*/
	TABLE*			table,		/* in/out: MySQL table */
	const rec_t*		rec,		/* in: record */
	const dict_index_t*	index,		/* in: index */
	const ulint*		offsets)	/* in: rec_get_offsets(
						rec, index, ...) */
{
	uint	n_fields	= table->s->fields;
	uint	i;

	ut_ad(n_fields == dict_table_get_n_user_cols(index->table));

	for (i = 0; i < n_fields; i++) {
		Field*		field	= table->field[i];
		ulint		ipos;
		ulint		ilen;
		const uchar*	ifield;

		field->reset();

		ipos = dict_index_get_nth_col_pos(index, i);

		if (UNIV_UNLIKELY(ipos == ULINT_UNDEFINED)) {
null_field:
			field->set_null();
			continue;
		}

		ifield = rec_get_nth_field(rec, offsets, ipos, &ilen);

		/* Assign the NULL flag */
		if (ilen == UNIV_SQL_NULL) {
			ut_ad(field->real_maybe_null());
			goto null_field;
		}

		field->set_notnull();

		innobase_col_to_mysql(
			dict_field_get_col(
				dict_index_get_nth_field(index, ipos)),
			ifield, ilen, field);
	}
}
/*****************************************************************
Resets table->record[0]. */
extern "C"
void
innobase_rec_reset(
/*===============*/
	TABLE*	table)	/* in/out: MySQL table */
{
	uint	n_fields	= table->s->fields;
	uint	i;

	for (i = 0; i < n_fields; i++) {
		table->field[i]->set_default();
	}
}
/**********************************************************************
Removes the filename encoding of a database and table name. */
static
void
innobase_convert_tablename(
/*=======================*/
	char*	s)	/* in: identifier; out: decoded identifier */
{
	uint	errors;

	char*	slash = strchr(s, '/');

	if (slash) {
		char*	t;
		/* Temporarily replace the '/' with NUL. */
		*slash = 0;
		/* Convert the database name. */
		strconvert(&my_charset_filename, s, system_charset_info,
			   s, slash - s + 1, &errors);

		t = s + strlen(s);
		ut_ad(slash >= t);
		/* Append a '.' after the database name. */
		*t++ = '.';
		slash++;
		/* Convert the table name. */
		strconvert(&my_charset_filename, slash, system_charset_info,
			   t, slash - t + strlen(slash), &errors);
	} else {
		strconvert(&my_charset_filename, s,
			   system_charset_info, s, strlen(s), &errors);
	}
}
/***********************************************************************
This function checks that index keys are sensible. */
static
int
innobase_check_index_keys(
/*======================*/
					/* out: 0 or error number */
	TABLE*		table,		/* in: MySQL table */
	dict_table_t*	innodb_table,	/* in: InnoDB table */
	trx_t*		trx,		/* in: transaction */
	KEY*		key_info,	/* in: Indexes to be created */
	ulint		num_of_keys)	/* in: Number of indexes to
					be created */
{
	Field*	field;
	ulint	key_num;
	int	error	= 0;
	ibool	is_unsigned;

	ut_ad(table && innodb_table && trx && key_info && num_of_keys);

	for (key_num = 0; key_num < num_of_keys; key_num++) {
		KEY*	key;

		key = &(key_info[key_num]);

		/* Check that the same index name does not appear
		twice in indexes to be created. */

		for (ulint i = 0; i < key_num; i++) {
			KEY*	key2;

			key2 = &key_info[i];

			if (0 == strcmp(key->name, key2->name)) {
				ut_print_timestamp(stderr);
				fputs("  InnoDB: Error: index ", stderr);
				ut_print_name(stderr, trx, FALSE, key->name);
				fputs(" appears twice in create index\n",
				      stderr);

				error = ER_WRONG_NAME_FOR_INDEX;

				return(error);
			}
		}

		/* Check that MySQL does not try to create a column
		prefix index field on an inappropriate data type and
		that the same column does not appear twice in the index. */

		for (ulint i = 0; i < key->key_parts; i++) {
			KEY_PART_INFO*	key_part1;
			ulint		col_type;	/* Column type */

			key_part1 = key->key_part + i;
			field = key_part1->field;
			col_type = get_innobase_type_from_mysql_type(
				&is_unsigned, field);

			if (DATA_BLOB == col_type
			    || (key_part1->length < field->pack_length()
				&& field->type() != MYSQL_TYPE_VARCHAR)
			    || (field->type() == MYSQL_TYPE_VARCHAR
				&& key_part1->length < field->pack_length()
				- ((Field_varstring*) field)->length_bytes)) {

				if (col_type == DATA_INT
				    || col_type == DATA_FLOAT
				    || col_type == DATA_DOUBLE
				    || col_type == DATA_DECIMAL) {
					fprintf(stderr,
						"InnoDB: error: MySQL is trying to create a column prefix index field\n"
						"InnoDB: on an inappropriate data type. Table name %s, column name %s.\n",
						innodb_table->name,
						field->field_name);

					error = ER_WRONG_KEY_COLUMN;
				}
			}

			for (ulint j = 0; j < i; j++) {
				KEY_PART_INFO*	key_part2;

				key_part2 = key->key_part + j;

				if (0 == strcmp(
					    key_part1->field->field_name,
					    key_part2->field->field_name)) {
					ut_print_timestamp(stderr);
					fputs("  InnoDB: Error: column ",
					      stderr);
					ut_print_name(stderr, trx, FALSE,
						      key_part1->field
						      ->field_name);
					fputs(" appears twice in ", stderr);
					ut_print_name(stderr, trx, FALSE,
						      key->name);
					fputs("\n"
					      "  InnoDB: This is not allowed in InnoDB.\n",
					      stderr);

					error = ER_WRONG_KEY_COLUMN;

					return(error);
				}
			}
		}
	}

	return(error);
}
/***********************************************************************
Create index field definition for key part */
static
void
innobase_create_index_field_def(
/*============================*/
	KEY_PART_INFO*		key_part,	/* in: MySQL key definition */
	mem_heap_t*		heap,		/* in: memory heap */
	merge_index_field_t*	index_field)	/* out: index field
						definition for key_part */
{
	Field*	field;
	ibool	is_unsigned;
	ulint	col_type;

	DBUG_ENTER("innobase_create_index_field_def");

	ut_ad(key_part);
	ut_ad(index_field);

	field = key_part->field;
	ut_a(field);

	col_type = get_innobase_type_from_mysql_type(&is_unsigned, field);

	if (DATA_BLOB == col_type
	    || (key_part->length < field->pack_length()
		&& field->type() != MYSQL_TYPE_VARCHAR)
	    || (field->type() == MYSQL_TYPE_VARCHAR
		&& key_part->length < field->pack_length()
		- ((Field_varstring*) field)->length_bytes)) {

		index_field->prefix_len = key_part->length;
	} else {
		index_field->prefix_len = 0;
	}

	index_field->field_name = mem_heap_strdup(heap, field->field_name);

	DBUG_VOID_RETURN;
}
/***********************************************************************
Create index definition for key */
static
void
innobase_create_index_def(
/*======================*/
	KEY*			key,		/* in: key definition */
	bool			new_primary,	/* in: TRUE=generating
						a new primary key
						on the table */
	bool			key_primary,	/* in: TRUE if this key
						is a primary key */
	merge_index_def_t*	index,		/* out: index definition */
	mem_heap_t*		heap)		/* in: heap where memory
						is allocated */
{
	ulint	i;
	ulint	len;
	ulint	n_fields	= key->key_parts;
	char*	index_name;

	DBUG_ENTER("innobase_create_index_def");

	index->fields = (merge_index_field_t*) mem_heap_alloc(
		heap, n_fields * sizeof *index->fields);

	index->ind_type = 0;
	index->n_fields = n_fields;
	len = strlen(key->name) + 1;
	index->name = index_name = (char*) mem_heap_alloc(heap,
							  len + !new_primary);

	if (UNIV_LIKELY(!new_primary)) {
		*index_name++ = TEMP_INDEX_PREFIX;
	}

	memcpy(index_name, key->name, len);

	if (key->flags & HA_NOSAME) {
		index->ind_type |= DICT_UNIQUE;
	}

	if (key_primary) {
		index->ind_type |= DICT_CLUSTERED;
	}

	for (i = 0; i < n_fields; i++) {
		innobase_create_index_field_def(&key->key_part[i], heap,
						&index->fields[i]);
	}

	DBUG_VOID_RETURN;
}
/***********************************************************************
Copy index field definition */
static
void
innobase_copy_index_field_def(
/*==========================*/
	const dict_field_t*	field,		/* in: definition to copy */
	merge_index_field_t*	index_field)	/* out: copied definition */
{
	DBUG_ENTER("innobase_copy_index_field_def");
	DBUG_ASSERT(field != NULL);
	DBUG_ASSERT(index_field != NULL);

	index_field->field_name = field->name;
	index_field->prefix_len = field->prefix_len;

	DBUG_VOID_RETURN;
}
/***********************************************************************
Copy index definition for the index */
static
void
innobase_copy_index_def(
/*====================*/
	const dict_index_t*	index,	/* in: index definition to copy */
	merge_index_def_t*	new_index,/* out: Index definition */
	mem_heap_t*		heap)	/* in: heap where allocated */
{
	ulint	n_fields;
	ulint	i;

	DBUG_ENTER("innobase_copy_index_def");

	/* Note that we take only those fields that the user defined to be
	in the index.  In the internal representation more columns were
	added and those columns are not copied. */

	n_fields = index->n_user_defined_cols;

	new_index->fields = (merge_index_field_t*) mem_heap_alloc(
		heap, n_fields * sizeof *new_index->fields);

	/* When adding a PRIMARY KEY, we may convert a previous
	clustered index to a secondary index (UNIQUE NOT NULL). */
	new_index->ind_type = index->type & ~DICT_CLUSTERED;
	new_index->n_fields = n_fields;
	new_index->name = index->name;

	for (i = 0; i < n_fields; i++) {
		innobase_copy_index_field_def(&index->fields[i],
					      &new_index->fields[i]);
	}

	DBUG_VOID_RETURN;
}
/***********************************************************************
Create an index table where indexes are ordered as follows:

IF a new primary key is defined for the table THEN

	1) New primary key
	2) Original secondary indexes
	3) New secondary indexes

ELSE

	1) All new indexes in the order they arrive from MySQL

ENDIF
*/
static
merge_index_def_t*
innobase_create_key_def(
/*====================*/
					/* out: key definitions or NULL */
	trx_t*		trx,		/* in: trx */
	const dict_table_t*table,	/* in: table definition */
	mem_heap_t*	heap,		/* in: heap where space for key
					definitions are allocated */
	KEY*		key_info,	/* in: Indexes to be created */
	ulint&		n_keys)		/* in/out: Number of indexes to
					be created */
{
	ulint			i = 0;
	merge_index_def_t*	indexdef;
	merge_index_def_t*	indexdefs;
	bool			new_primary;

	DBUG_ENTER("innobase_create_key_def");

	indexdef = indexdefs = (merge_index_def_t*)
		mem_heap_alloc(heap, sizeof *indexdef
			       * (n_keys + UT_LIST_GET_LEN(table->indexes)));

	/* If there is a primary key, it is always the first index
	defined for the table. */

	new_primary = !my_strcasecmp(system_charset_info,
				     key_info->name, "PRIMARY");

	/* If there is a UNIQUE INDEX consisting entirely of NOT NULL
	columns, MySQL will treat it as a PRIMARY KEY unless the
	table already has one. */

	if (!new_primary && (key_info->flags & HA_NOSAME)
	    && row_table_got_default_clust_index(table)) {
		uint	key_part = key_info->key_parts;

		new_primary = TRUE;

		while (key_part--) {
			if (key_info->key_part[key_part].key_type
			    & FIELDFLAG_MAYBE_NULL) {
				new_primary = FALSE;
				break;
			}
		}
	}

	if (new_primary) {
		const dict_index_t*	index;

		/* Create the PRIMARY key index definition */
		innobase_create_index_def(&key_info[i++], TRUE, TRUE,
					  indexdef++, heap);

		row_mysql_lock_data_dictionary(trx);

		index = dict_table_get_first_index(table);

		/* Copy the index definitions of the old table.  Skip
		the old clustered index if it is a generated clustered
		index or a PRIMARY KEY.  If the clustered index is a
		UNIQUE INDEX, it must be converted to a secondary index. */

		if (dict_index_get_nth_col(index, 0)->mtype == DATA_SYS
		    || !my_strcasecmp(system_charset_info,
				      index->name, "PRIMARY")) {
			index = dict_table_get_next_index(index);
		}

		while (index) {
			innobase_copy_index_def(index, indexdef++, heap);
			index = dict_table_get_next_index(index);
		}

		row_mysql_unlock_data_dictionary(trx);
	}

	/* Create definitions for added secondary indexes. */

	while (i < n_keys) {
		innobase_create_index_def(&key_info[i++], new_primary, FALSE,
					  indexdef++, heap);
	}

	n_keys = indexdef - indexdefs;

	DBUG_RETURN(indexdefs);
}
/***********************************************************************
Create a temporary tablename using query id, thread id, and id */
static
char*
innobase_create_temporary_tablename(
/*================================*/
					/* out: temporary tablename */
	mem_heap_t*	heap,		/* in: memory heap */
	char		id,		/* in: identifier [0-9a-zA-Z] */
	const char*	table_name)	/* in: table name */
{
	char*			name;
	ulint			len;
	static const char	suffix[] = "@0023 ";	/* "# " */

	len = strlen(table_name);

	name = (char*) mem_heap_alloc(heap, len + sizeof suffix);
	memcpy(name, table_name, len);
	memcpy(name + len, suffix, sizeof suffix);
	name[len + (sizeof suffix - 2)] = id;

	return(name);
}
/***********************************************************************
Create indexes. */
int
ha_innobase::add_index(
/*===================*/
				/* out: 0 or error number */
	TABLE*	table,		/* in: Table where indexes are created */
	KEY*	key_info,	/* in: Indexes to be created */
	uint	num_of_keys)	/* in: Number of indexes to be created */
{
	dict_index_t**	index;		/* Index to be created */
	dict_table_t*	innodb_table;	/* InnoDB table in dictionary */
	dict_table_t*	indexed_table;	/* Table where indexes are created */
	merge_index_def_t* index_defs;	/* Index definitions */
	mem_heap_t*	heap;		/* Heap for index definitions */
	trx_t*		trx;		/* Transaction */
	ulint		num_of_idx;
	ulint		num_created	= 0;
	ibool		dict_locked	= FALSE;
	ulint		new_primary;
	ulint		error;

	DBUG_ENTER("ha_innobase::add_index");
	ut_a(table);
	ut_a(key_info);
	ut_a(num_of_keys);

	if (srv_created_new_raw || srv_force_recovery) {
		DBUG_RETURN(HA_ERR_WRONG_COMMAND);
	}

	update_thd();

	heap = mem_heap_create(1024);

	/* In case MySQL calls this in the middle of a SELECT query, release
	possible adaptive hash latch to avoid deadlocks of threads. */
	trx_search_latch_release_if_reserved(prebuilt->trx);

	trx = trx_allocate_for_mysql();
	trx_start_if_not_started(trx);
	trans_register_ha(user_thd, FALSE, ht);
	trx->mysql_thd = user_thd;
	trx->mysql_query_str = thd_query(user_thd);

	innodb_table = indexed_table
		= dict_table_get(prebuilt->table->name, FALSE);

	/* Check that index keys are sensible */

	error = innobase_check_index_keys(
		table, innodb_table, trx, key_info, num_of_keys);

	if (UNIV_UNLIKELY(error)) {
err_exit:
		mem_heap_free(heap);
		trx_general_rollback_for_mysql(trx, FALSE, NULL);
		trx_free_for_mysql(trx);
		DBUG_RETURN(error);
	}

	/* Create table containing all indexes to be built in this
	alter table add index so that they are in the correct order
	in the table. */

	num_of_idx = num_of_keys;

	index_defs = innobase_create_key_def(
		trx, innodb_table, heap, key_info, num_of_idx);

	new_primary = DICT_CLUSTERED & index_defs[0].ind_type;

	/* Allocate memory for dictionary index definitions */

	index = (dict_index_t**) mem_heap_alloc(
		heap, num_of_idx * sizeof *index);

	/* Latch the InnoDB data dictionary exclusively so that no deadlocks
	or lock waits can happen in it during an index create operation. */

	row_mysql_lock_data_dictionary(trx);
	dict_locked = TRUE;

	/* Flag this transaction as a dictionary operation, so that
	the data dictionary will be locked in crash recovery.  Prevent
	warnings if row_merge_lock_table() results in a lock wait. */
	trx_set_dict_operation(trx, TRX_DICT_OP_INDEX_MAY_WAIT);

	/* Acquire an exclusive lock on the table
	before creating any indexes. */
	error = row_merge_lock_table(trx, innodb_table);

	if (UNIV_UNLIKELY(error != DB_SUCCESS)) {
		goto error_handling;
	}

	trx_set_dict_operation(trx, TRX_DICT_OP_INDEX);

	/* If a new primary key is defined for the table we need
	to drop the original table and rebuild all indexes. */

	if (UNIV_UNLIKELY(new_primary)) {
		char*	new_table_name = innobase_create_temporary_tablename(
			heap, '1', innodb_table->name);

		/* Clone the table. */
		trx_set_dict_operation(trx, TRX_DICT_OP_TABLE);
		indexed_table = row_merge_create_temporary_table(
			new_table_name, index_defs, innodb_table, trx);

		if (!indexed_table) {

			switch (trx->error_state) {
			case DB_TABLESPACE_ALREADY_EXISTS:
			case DB_DUPLICATE_KEY:
				innobase_convert_tablename(new_table_name);
				my_error(HA_ERR_TABLE_EXIST, MYF(0),
					 new_table_name);
				error = HA_ERR_TABLE_EXIST;
				break;
			default:
  580. error = convert_error_code_to_mysql(
  581. trx->error_state, user_thd);
  582. }
  583. row_mysql_unlock_data_dictionary(trx);
  584. goto err_exit;
  585. }
  586. trx->table_id = indexed_table->id;
  587. }
  588. /* Create the indexes in SYS_INDEXES and load into dictionary. */
  589. for (ulint i = 0; i < num_of_idx; i++) {
  590. index[i] = row_merge_create_index(trx, indexed_table,
  591. &index_defs[i]);
  592. if (!index[i]) {
  593. error = trx->error_state;
  594. goto error_handling;
  595. }
  596. num_created++;
  597. }
  598. ut_ad(error == DB_SUCCESS);
  599. /* Raise version number of the table to track this table's
  600. definition changes. */
  601. indexed_table->version_number++;
  602. row_mysql_unlock_data_dictionary(trx);
  603. dict_locked = FALSE;
  604. ut_a(trx->n_active_thrs == 0);
  605. ut_a(UT_LIST_GET_LEN(trx->signals) == 0);
  606. if (UNIV_UNLIKELY(new_primary)) {
  607. /* A primary key is to be built. Acquire an exclusive
  608. table lock also on the table that is being created. */
  609. ut_ad(indexed_table != innodb_table);
  610. error = row_merge_lock_table(trx, indexed_table);
  611. if (UNIV_UNLIKELY(error != DB_SUCCESS)) {
  612. goto error_handling;
  613. }
  614. }
  615. /* Read the clustered index of the table and build indexes
  616. based on this information using temporary files and merge sort. */
  617. error = row_merge_build_indexes(trx, innodb_table, indexed_table,
  618. index, num_of_idx, table);
  619. error_handling:
  620. #ifdef UNIV_DEBUG
  621. /* TODO: At the moment we can't handle the following statement
  622. in our debugging code below:
  623. alter table t drop index b, add index (b);
  624. The fix will have to parse the SQL and note that the index
  625. being added has the same name as the the one being dropped and
  626. ignore that in the dup index check.*/
  627. //dict_table_check_for_dup_indexes(prebuilt->table);
  628. #endif
  629. /* After an error, remove all those index definitions from the
  630. dictionary which were defined. */
  631. switch (error) {
  632. const char* old_name;
  633. char* tmp_name;
  634. case DB_SUCCESS:
  635. ut_ad(!dict_locked);
  636. if (!new_primary) {
  637. error = row_merge_rename_indexes(trx, indexed_table);
  638. if (error != DB_SUCCESS) {
  639. row_merge_drop_indexes(trx, indexed_table,
  640. index, num_created);
  641. }
  642. goto convert_error;
  643. }
  644. /* If a new primary key was defined for the table and
  645. there was no error at this point, we can now rename
  646. the old table as a temporary table, rename the new
  647. temporary table as the old table and drop the old table. */
  648. old_name = innodb_table->name;
  649. tmp_name = innobase_create_temporary_tablename(heap, '2',
  650. old_name);
  651. row_mysql_lock_data_dictionary(trx);
  652. dict_locked = TRUE;
  653. error = row_merge_rename_tables(innodb_table, indexed_table,
  654. tmp_name, trx);
  655. if (error != DB_SUCCESS) {
  656. row_merge_drop_table(trx, indexed_table);
  657. switch (error) {
  658. case DB_TABLESPACE_ALREADY_EXISTS:
  659. case DB_DUPLICATE_KEY:
  660. innobase_convert_tablename(tmp_name);
  661. my_error(HA_ERR_TABLE_EXIST, MYF(0), tmp_name);
  662. error = HA_ERR_TABLE_EXIST;
  663. break;
  664. default:
  665. error = convert_error_code_to_mysql(
  666. trx->error_state, user_thd);
  667. }
  668. break;
  669. }
  670. row_prebuilt_table_obsolete(innodb_table);
  671. row_prebuilt_free(prebuilt, TRUE);
  672. prebuilt = row_create_prebuilt(indexed_table);
  673. prebuilt->table->n_mysql_handles_opened++;
  674. /* Drop the old table if there are no open views
  675. referring to it. If there are such views, we will
  676. drop the table when we free the prebuilts and there
  677. are no more references to it. */
  678. error = row_merge_drop_table(trx, innodb_table);
  679. goto convert_error;
  680. case DB_TOO_BIG_RECORD:
  681. my_error(HA_ERR_TO_BIG_ROW, MYF(0));
  682. goto error;
  683. case DB_PRIMARY_KEY_IS_NULL:
  684. my_error(ER_PRIMARY_CANT_HAVE_NULL, MYF(0));
  685. /* fall through */
  686. case DB_DUPLICATE_KEY:
  687. error:
  688. prebuilt->trx->error_info = NULL;
  689. prebuilt->trx->error_key_num = trx->error_key_num;
  690. /* fall through */
  691. default:
  692. if (new_primary) {
  693. row_merge_drop_table(trx, indexed_table);
  694. } else {
  695. row_merge_drop_indexes(trx, indexed_table,
  696. index, num_created);
  697. }
  698. convert_error:
  699. error = convert_error_code_to_mysql(error, user_thd);
  700. }
  701. mem_heap_free(heap);
  702. trx_commit_for_mysql(trx);
  703. if (dict_locked) {
  704. row_mysql_unlock_data_dictionary(trx);
  705. }
  706. trx_free_for_mysql(trx);
  707. /* There might be work for utility threads.*/
  708. srv_active_wake_master_thread();
  709. DBUG_RETURN(error);
  710. }
/***********************************************************************
Prepare to drop some indexes of a table. */

int
ha_innobase::prepare_drop_index(
/*============================*/
				/* out: 0 or error number */
	TABLE*	table,		/* in: Table where indexes are dropped */
	uint*	key_num,	/* in: Key nums to be dropped */
	uint	num_of_keys)	/* in: Number of keys to be dropped */
{
	trx_t*	trx;
	int	err	= 0;
	uint	n_key;

	DBUG_ENTER("ha_innobase::prepare_drop_index");
	ut_ad(table);
	ut_ad(key_num);
	ut_ad(num_of_keys);

	if (srv_created_new_raw || srv_force_recovery) {
		DBUG_RETURN(HA_ERR_WRONG_COMMAND);
	}

	update_thd();

	trx_search_latch_release_if_reserved(prebuilt->trx);
	trx = prebuilt->trx;

	/* Test and mark all the indexes to be dropped */

	row_mysql_lock_data_dictionary(trx);

	for (n_key = 0; n_key < num_of_keys; n_key++) {
		const KEY*	key;
		dict_index_t*	index;

		key = table->key_info + key_num[n_key];
		index = dict_table_get_index_on_name_and_min_id(
			prebuilt->table, key->name);

		if (!index) {
			sql_print_error("InnoDB could not find key n:o %u "
					"with name %s for table %s",
					key_num[n_key],
					key ? key->name : "NULL",
					prebuilt->table->name);

			err = HA_ERR_KEY_NOT_FOUND;
			goto func_exit;
		}

		/* Refuse to drop the clustered index.  It would be
		better to automatically generate a clustered index,
		but mysql_alter_table() will call this method only
		after ha_innobase::add_index(). */

		if (dict_index_is_clust(index)) {
			my_error(ER_REQUIRES_PRIMARY_KEY, MYF(0));
			err = -1;
			goto func_exit;
		}

		index->to_be_dropped = TRUE;
	}

	/* If FOREIGN_KEY_CHECKS = 1 you may not drop an index defined
	for a foreign key constraint because InnoDB requires that both
	tables contain indexes for the constraint.  Note that CREATE
	INDEX id ON table does a CREATE INDEX and DROP INDEX, and we
	can ignore foreign keys here because a new index for the
	foreign key has already been created.

	We check for the foreign key constraints after marking the
	candidate indexes for deletion, because when we check for an
	equivalent foreign index we don't want to select an index that
	is later deleted. */

	if (trx->check_foreigns
	    && thd_sql_command(user_thd) != SQLCOM_CREATE_INDEX) {

		for (n_key = 0; n_key < num_of_keys; n_key++) {
			KEY*		key;
			dict_index_t*	index;
			dict_foreign_t*	foreign;

			key = table->key_info + key_num[n_key];
			index = dict_table_get_index_on_name_and_min_id(
				prebuilt->table, key->name);

			ut_a(index);
			ut_a(index->to_be_dropped);

			/* Check if the index is referenced. */
			foreign = dict_table_get_referenced_constraint(
				prebuilt->table, index);

			if (foreign) {
index_needed:
				trx_set_detailed_error(
					trx,
					"Index needed in foreign key "
					"constraint");

				trx->error_info = index;

				err = HA_ERR_DROP_INDEX_FK;
				break;
			} else {
				/* Check if this index references some
				other table */
				foreign = dict_table_get_foreign_constraint(
					prebuilt->table, index);

				if (foreign) {
					ut_a(foreign->foreign_index == index);

					/* Search for an equivalent index that
					the foreign key constraint could use
					if this index were to be deleted. */
					if (!dict_table_find_equivalent_index(
						    prebuilt->table,
						    foreign->foreign_index)) {

						goto index_needed;
					}
				}
			}
		}
	}

func_exit:
	if (err) {
		/* Undo our changes since there was some sort of error */
		for (n_key = 0; n_key < num_of_keys; n_key++) {
			const KEY*	key;
			dict_index_t*	index;

			key = table->key_info + key_num[n_key];
			index = dict_table_get_index_on_name_and_min_id(
				prebuilt->table, key->name);

			if (index) {
				index->to_be_dropped = FALSE;
			}
		}
	}

	row_mysql_unlock_data_dictionary(trx);

	DBUG_RETURN(err);
}
/***********************************************************************
Drop the indexes that were passed to a successful prepare_drop_index(). */

int
ha_innobase::final_drop_index(
/*==========================*/
				/* out: 0 or error number */
	TABLE*	table)		/* in: Table where indexes are dropped */
{
	dict_index_t*	index;	/* Index to be dropped */
	trx_t*		trx;	/* Transaction */

	DBUG_ENTER("ha_innobase::final_drop_index");
	ut_ad(table);

	if (srv_created_new_raw || srv_force_recovery) {
		DBUG_RETURN(HA_ERR_WRONG_COMMAND);
	}

	update_thd();

	trx_search_latch_release_if_reserved(prebuilt->trx);
	trx = prebuilt->trx;

	/* Drop indexes marked to be dropped */

	row_mysql_lock_data_dictionary(trx);

	index = dict_table_get_first_index(prebuilt->table);

	while (index) {
		dict_index_t*	next_index;

		next_index = dict_table_get_next_index(index);

		if (index->to_be_dropped) {

			row_merge_drop_index(index, prebuilt->table, trx);
		}

		index = next_index;
	}

	prebuilt->table->version_number++;

#ifdef UNIV_DEBUG
	dict_table_check_for_dup_indexes(prebuilt->table);
#endif

	row_mysql_unlock_data_dictionary(trx);

	/* Flush the log to reduce the probability that the .frm files and
	the InnoDB data dictionary get out-of-sync if the user runs
	with innodb_flush_log_at_trx_commit = 0 */

	log_buffer_flush_to_disk();

	/* Tell the InnoDB server that there might be work for
	utility threads: */

	srv_active_wake_master_thread();

	trx_commit_for_mysql(trx);

	DBUG_RETURN(0);
}