 Applying InnoDB Plugin 1.0.5 snapshot, part 2
From r5639 to r5685
Detailed revision comments:
r5639 | marko | 2009-08-06 05:39:34 -0500 (Thu, 06 Aug 2009) | 3 lines
branches/zip: mem_heap_block_free(): If innodb_use_sys_malloc is set,
do not tell Valgrind that the memory is free, to avoid
a bogus warning in Valgrind's built-in free() hook.
r5642 | calvin | 2009-08-06 18:04:03 -0500 (Thu, 06 Aug 2009) | 2 lines
branches/zip: remove duplicate "the" in comments.
r5662 | marko | 2009-08-11 04:54:16 -0500 (Tue, 11 Aug 2009) | 1 line
branches/zip: Bump the version number to 1.0.5 after releasing 1.0.4.
r5663 | marko | 2009-08-11 06:42:37 -0500 (Tue, 11 Aug 2009) | 2 lines
branches/zip: trx_general_rollback_for_mysql(): Remove the redundant
parameter partial. If savept==NULL, partial==FALSE.
r5670 | marko | 2009-08-12 08:16:37 -0500 (Wed, 12 Aug 2009) | 2 lines
branches/zip: trx_undo_rec_copy(): Add const qualifier to undo_rec.
This is a non-functional change.
r5671 | marko | 2009-08-13 03:46:33 -0500 (Thu, 13 Aug 2009) | 5 lines
branches/zip: ha_innobase::add_index(): Fix Bug #46557:
after a successful operation, read innodb_table->flags from
the newly created table object, not from the old one that was just freed.
Approved by Sunny.
r5681 | sunny | 2009-08-14 01:16:24 -0500 (Fri, 14 Aug 2009) | 3 lines
branches/zip: When building HotBackup, srv_use_sys_malloc is #ifdef'ed out. We
move access to this variable into a !UNIV_HOTBACKUP block.
r5684 | sunny | 2009-08-20 03:05:30 -0500 (Thu, 20 Aug 2009) | 10 lines
branches/zip: Fix bug# 46650: Innodb assertion autoinc_lock == lock in lock_table_remove_low on INSERT SELECT
We only store the autoinc locks that are granted in the transaction's autoinc
lock vector. A transaction that has been rolled back due to a deadlock because
of an AUTOINC lock attempt will not have added that lock to the vector. We
need to check for that when we remove that lock.
rb://145
Approved by Marko.
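The fix in r5684 above can be pictured with a toy sketch (hypothetical names and types, not InnoDB's actual lock structures): before removing a lock from a transaction's vector, check whether it was ever granted and recorded there, and tolerate its absence.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a transaction's autoinc lock vector
   (hypothetical names, not InnoDB's real types). */
#define MAX_AUTOINC_LOCKS 8

struct trx_sketch {
	const void*	autoinc_locks[MAX_AUTOINC_LOCKS];
	size_t		n_locks;
};

/* Record a granted lock in the vector. */
static void
lock_vector_add(struct trx_sketch* trx, const void* lock)
{
	assert(trx->n_locks < MAX_AUTOINC_LOCKS);
	trx->autoinc_locks[trx->n_locks++] = lock;
}

/* Remove a lock only if it is actually present: a transaction that was
   rolled back while still *waiting* for an AUTOINC lock never added the
   lock to its vector, so removal must not assert that it is there.
   Returns 1 if the lock was found and removed, 0 otherwise. */
static int
lock_vector_remove_if_present(struct trx_sketch* trx, const void* lock)
{
	size_t	i;

	for (i = 0; i < trx->n_locks; i++) {
		if (trx->autoinc_locks[i] == lock) {
			/* Swap-remove: order does not matter here. */
			trx->autoinc_locks[i]
				= trx->autoinc_locks[--trx->n_locks];
			return(1);
		}
	}

	return(0);	/* never granted: nothing to do, no assertion */
}
```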
r5685 | sunny | 2009-08-20 03:18:29 -0500 (Thu, 20 Aug 2009) | 2 lines
branches/zip: Update the ChangeLog with r5684 change.
Implement UNIV_BLOB_DEBUG. An early version of this caught Bug #55284.
This option is known to be broken when tablespaces contain off-page
columns after crash recovery. It has only been tested when creating
the data files from scratch.
btr_blob_dbg_t: A map from page_no:heap_no:field_no to first_blob_page_no.
This map is instantiated for every clustered index in index->blobs.
It is protected by index->blobs_mutex.
btr_blob_dbg_msg_issue(): Issue a diagnostic message.
Invoked when btr_blob_dbg_msg is set.
btr_blob_dbg_rbt_insert(): Insert a btr_blob_dbg_t into index->blobs.
btr_blob_dbg_rbt_delete(): Remove a btr_blob_dbg_t from index->blobs.
btr_blob_dbg_cmp(): Comparator for btr_blob_dbg_t.
btr_blob_dbg_add_blob(): Add a BLOB reference to the map.
btr_blob_dbg_add_rec(): Add all BLOB references from a record to the map.
btr_blob_dbg_print(): Display the map of BLOB references in an index.
btr_blob_dbg_remove_rec(): Remove all BLOB references of a record from
the map.
btr_blob_dbg_is_empty(): Check that no BLOB references exist to or
from a page. Disowned references from delete-marked records are
tolerated.
btr_blob_dbg_op(): Perform an operation on all BLOB references on a
B-tree page.
btr_blob_dbg_add(): Add all BLOB references from a B-tree page to the
map.
btr_blob_dbg_remove(): Remove all BLOB references from a B-tree page
from the map.
btr_blob_dbg_restore(): Restore the BLOB references after a failed
page reorganize.
btr_blob_dbg_set_deleted_flag(): Modify the 'deleted' flag in the BLOB
references of a record.
btr_blob_dbg_owner(): Own or disown a BLOB reference.
btr_page_create(), btr_page_free_low(): Assert that no BLOB references exist.
btr_create(): Create index->blobs for clustered indexes.
btr_page_reorganize_low(): Invoke btr_blob_dbg_remove() before copying
the records. Invoke btr_blob_dbg_restore() if the operation fails.
btr_page_empty(), btr_lift_page_up(), btr_compress(), btr_discard_page():
Invoke btr_blob_dbg_remove().
btr_cur_del_mark_set_clust_rec(): Invoke btr_blob_dbg_set_deleted_flag().
Other cases of modifying the delete mark are either in the secondary
index or during crash recovery, which we do not promise to support.
btr_cur_set_ownership_of_extern_field(): Invoke btr_blob_dbg_owner().
btr_store_big_rec_extern_fields(): Invoke btr_blob_dbg_add_blob().
btr_free_externally_stored_field(): Invoke btr_blob_dbg_assert_empty()
on the first BLOB page.
page_cur_insert_rec_low(), page_cur_insert_rec_zip(),
page_copy_rec_list_end_to_created_page(): Invoke btr_blob_dbg_add_rec().
page_cur_insert_rec_zip_reorg(), page_copy_rec_list_end(),
page_copy_rec_list_start(): After failure, invoke
btr_blob_dbg_remove() and btr_blob_dbg_add().
page_cur_delete_rec(): Invoke btr_blob_dbg_remove_rec().
page_delete_rec_list_end(): Invoke btr_blob_dbg_op(btr_blob_dbg_remove_rec).
page_zip_reorganize(): Invoke btr_blob_dbg_remove() before copying the records.
page_zip_copy_recs(): Invoke btr_blob_dbg_add().
row_upd_rec_in_place(): Invoke btr_blob_dbg_rbt_delete() and
btr_blob_dbg_rbt_insert().
innobase_start_or_create_for_mysql(): Warn when UNIV_BLOB_DEBUG is enabled.
rb://550 approved by Jimmy Yang

Applying InnoDB Plugin 1.0.5 snapshot, part 12
From r5995 to r6043
Detailed revision comments:
r5995 | marko | 2009-09-28 03:52:25 -0500 (Mon, 28 Sep 2009) | 17 lines
branches/zip: Do not write to PAGE_INDEX_ID after page creation,
not even when restoring an uncompressed page after a compression failure.
btr_page_reorganize_low(): On compression failure, do not restore
those page header fields that should not be affected by the
reorganization. Instead, compare the fields.
page_zip_decompress(): Add the parameter ibool all, for copying all
page header fields. Pass the parameter all=TRUE on block read
completion, redo log application, and page_zip_validate(); pass
all=FALSE in all other cases.
page_zip_reorganize(): Do not restore the uncompressed page on
failure. It will be restored (to pre-modification state) by the
caller anyway.
rb://167, Issue #346
r5996 | marko | 2009-09-28 07:46:02 -0500 (Mon, 28 Sep 2009) | 4 lines
branches/zip: Address Issue #350 in comments.
lock_rec_queue_validate(): Note that this debug code may violate
the latching order and cause deadlocks.
r5997 | marko | 2009-09-28 08:03:58 -0500 (Mon, 28 Sep 2009) | 12 lines
branches/zip: Remove an assertion failure when the InnoDB data dictionary
is inconsistent with the MySQL .frm file.
ha_innobase::index_read(): When the index cannot be found,
return an error.
ha_innobase::change_active_index(): When prebuilt->index == NULL,
set also prebuilt->index_usable = FALSE. This is not needed for
correctness, because prebuilt->index_usable is only checked by
row_search_for_mysql(), which requires prebuilt->index != NULL.
This addresses Issue #349. Approved by Heikki Tuuri over IM.
r6005 | vasil | 2009-09-29 03:09:52 -0500 (Tue, 29 Sep 2009) | 4 lines
branches/zip:
ChangeLog: wrap around 78th column, not earlier.
r6006 | vasil | 2009-09-29 05:15:25 -0500 (Tue, 29 Sep 2009) | 4 lines
branches/zip:
Add ChangeLog entry for the release of 1.0.4.
r6007 | vasil | 2009-09-29 08:19:59 -0500 (Tue, 29 Sep 2009) | 6 lines
branches/zip:
Fix the year, should be 2009.
Pointed by: Calvin
r6026 | marko | 2009-09-30 02:18:24 -0500 (Wed, 30 Sep 2009) | 1 line
branches/zip: Add some debug assertions for checking FSEG_MAGIC_N.
r6028 | marko | 2009-09-30 08:55:23 -0500 (Wed, 30 Sep 2009) | 3 lines
branches/zip: recv_no_log_write: New debug flag for tracking down
Mantis Issue #347. No modifications should be made to the database
while recv_apply_hashed_log_recs() is about to complete.
r6029 | calvin | 2009-09-30 15:32:02 -0500 (Wed, 30 Sep 2009) | 4 lines
branches/zip: non-functional changes
Fix typo.
r6031 | marko | 2009-10-01 06:24:33 -0500 (Thu, 01 Oct 2009) | 49 lines
branches/zip: Clean up after a crash during DROP INDEX.
When InnoDB crashes while dropping an index, ensure that
the index will be completely dropped during crash recovery.
row_merge_drop_index(): Before dropping an index, rename the index to
start with TEMP_INDEX_PREFIX_STR and commit the change, so that
row_merge_drop_temp_indexes() will drop the index after crash
recovery if the server crashes while dropping the index.
fseg_inode_try_get(): New function, forked from fseg_inode_get().
Return NULL if the file segment index node is free.
fseg_inode_get(): Assert that the file segment index node is not free.
fseg_free_step(): If the file segment index node is already free,
print a diagnostic message and return TRUE.
fsp_free_seg_inode(): Write a nonzero number to FSEG_MAGIC_N, so that
allocated-and-freed file segment index nodes can be better
distinguished from uninitialized ones.
This is rb://174, addressing Issue #348.
Tested by restarting mysqld upon the completion of the added
log_write_up_to() invocation below, during DROP INDEX. The index was
dropped after crash recovery, and re-issuing the DROP INDEX did not
crash the server.
Index: btr/btr0btr.c
===================================================================
--- btr/btr0btr.c (revision 6026)
+++ btr/btr0btr.c (working copy)
@@ -42,6 +42,7 @@ Created 6/2/1994 Heikki Tuuri
#include "ibuf0ibuf.h"
#include "trx0trx.h"
+#include "log0log.h"
/*
Latching strategy of the InnoDB B-tree
--------------------------------------
@@ -873,6 +874,8 @@ leaf_loop:
goto leaf_loop;
}
+
+ log_write_up_to(mtr.end_lsn, LOG_WAIT_ALL_GROUPS, TRUE);
top_loop:
mtr_start(&mtr);
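The FSEG_MAGIC_N change in r6031 above can be sketched outside InnoDB (magic values and names here are made up for illustration): tag a structure with one magic value while allocated and a different nonzero value when freed, so a debug check can tell "allocated", "freed", and "never initialized" (zero-filled) apart.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical magic values, not InnoDB's actual constants. */
#define SEG_MAGIC_ALLOCATED	0xA110CU
#define SEG_MAGIC_FREED		0xDEADU

struct seg_inode_sketch {
	uint32_t	magic_n;
};

/* Mark the node as allocated; a freed or zero-filled node may be reused. */
static void
seg_alloc(struct seg_inode_sketch* inode)
{
	assert(inode->magic_n != SEG_MAGIC_ALLOCATED);
	inode->magic_n = SEG_MAGIC_ALLOCATED;
}

/* Free the node, writing a *nonzero* magic so an allocated-and-freed
   node is distinguishable from an uninitialized (zeroed) one. */
static void
seg_free(struct seg_inode_sketch* inode)
{
	assert(inode->magic_n == SEG_MAGIC_ALLOCATED);
	inode->magic_n = SEG_MAGIC_FREED;
}

/* Debug check: anything not carrying the allocated magic counts as free. */
static int
seg_is_free(const struct seg_inode_sketch* inode)
{
	return(inode->magic_n != SEG_MAGIC_ALLOCATED);
}
```

A freeing routine in the spirit of fseg_free_step() can then detect an already-free node, print a diagnostic, and bail out instead of asserting.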
r6033 | calvin | 2009-10-01 15:19:46 -0500 (Thu, 01 Oct 2009) | 4 lines
branches/zip: fix a typo in error message
Reported as bug#47763.
r6043 | inaam | 2009-10-05 09:45:35 -0500 (Mon, 05 Oct 2009) | 12 lines
branches/zip rb://176
Do not invalidate buffer pool while an LRU batch is active. Added
code to buf_pool_invalidate() to wait for the running batches to finish.
This patch also resets the state of buf_pool struct at invalidation. This
addresses the concern where buf_pool->freed_page_clock becomes non-zero
because we read in a system tablespace page for file format info at
startup.
Approved by: Marko
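The "wait for the running batches to finish" logic in r6043 above can be sketched with standard pthreads (hypothetical names; InnoDB's buf_pool synchronization differs): a counter of active batches guarded by a mutex, and a condition variable signalled when the last batch ends.

```c
#include <assert.h>
#include <pthread.h>

/* Toy model of waiting out in-flight LRU batches before invalidation. */
static pthread_mutex_t	pool_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t	no_batch = PTHREAD_COND_INITIALIZER;
static int		n_active_batches = 0;

/* A flush/LRU batch announces itself before touching the pool. */
static void
batch_begin(void)
{
	pthread_mutex_lock(&pool_mutex);
	n_active_batches++;
	pthread_mutex_unlock(&pool_mutex);
}

/* When the last batch finishes, wake any waiting invalidator. */
static void
batch_end(void)
{
	pthread_mutex_lock(&pool_mutex);
	if (--n_active_batches == 0) {
		pthread_cond_broadcast(&no_batch);
	}
	pthread_mutex_unlock(&pool_mutex);
}

/* Block until no batch is active, then reset the pool state
   (e.g. a freed_page_clock-style counter) while holding the mutex. */
static void
pool_invalidate(void)
{
	pthread_mutex_lock(&pool_mutex);
	while (n_active_batches > 0) {
		pthread_cond_wait(&no_batch, &pool_mutex);
	}
	/* ... reset pool state here ... */
	pthread_mutex_unlock(&pool_mutex);
}
```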
/*****************************************************************************
Copyright (c) 1994, 2010, Innobase Oy. All Rights Reserved.
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place, Suite 330, Boston, MA 02111-1307 USA
*****************************************************************************/
/**************************************************//**
@file page/page0page.c
Index page routines

Created 2/2/1994 Heikki Tuuri
*******************************************************/
#define THIS_MODULE
#include "page0page.h"
#ifdef UNIV_NONINL
#include "page0page.ic"
#endif
#undef THIS_MODULE
#include "page0cur.h"
#include "page0zip.h"
#include "buf0buf.h"
#include "btr0btr.h"
#ifndef UNIV_HOTBACKUP
# include "srv0srv.h"
# include "lock0lock.h"
# include "fut0lst.h"
# include "btr0sea.h"
#endif /* !UNIV_HOTBACKUP */
/* THE INDEX PAGE
   ==============

The index page consists of a page header which contains the page's
id and other information. On top of it are the index records
in a heap linked into a one way linear list according to alphabetic order.

Just below page end is an array of pointers which we call page directory,
to about every sixth record in the list. The pointers are placed in
the directory in the alphabetical order of the records pointed to,
enabling us to make binary search using the array. Each slot n:o I
in the directory points to a record, where a 4-bit field contains a count
of those records which are in the linear list between pointer I and
the pointer I - 1 in the directory, including the record
pointed to by pointer I and not including the record pointed to by I - 1.
We say that the record pointed to by slot I, or that slot I, owns
these records. The count is always kept in the range 4 to 8, with
the exception that it is 1 for the first slot, and 1--8 for the second slot.

An essentially binary search can be performed in the list of index
records, like we could do if we had a pointer to every record in the
page directory. The data structure is, however, more efficient when
we are doing inserts, because most inserts are just pushed on a heap.
Only every 8th insert requires a block move in the directory pointer
table, which itself is quite small. A record is deleted from the page
by just taking it off the linear list and updating the number of owned
records field of the record which owns it, and updating the page directory,
if necessary. A special case is the one when the record owns itself.
Because the overhead of inserts is so small, we may also increase the
page size from the projected default of 8 kB to 64 kB without too
much loss of efficiency in inserts. A bigger page becomes attractive
when the disk transfer rate rises relative to seek and latency time.
On the present system, the page size is set so that the page transfer
time (3 ms) is 20 % of the disk random access time (15 ms).

When the page is split, merged, or becomes full but contains deleted
records, we have to reorganize the page.

Assuming a page size of 8 kB, a typical index page of a secondary
index contains 300 index entries, and the size of the page directory
is 50 x 4 bytes = 200 bytes. */
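The directory scheme described in the comment above can be sketched outside InnoDB as a plain array (a deliberate simplification, not the real page format): a sorted record list plus a sparse directory pointing at roughly every sixth record, searched by binary search over the directory followed by a short linear scan inside the owned group.

```c
#include <assert.h>
#include <stddef.h>

/* Sparse-directory search sketch: keys[] is sorted; dir[] holds the
   index of the first key of each "owned" group, like a page directory
   slot. Binary-search the slots, then scan the few records in the
   group. Hypothetical simplification of the on-page structure. */
static size_t
sparse_dir_search(const int* keys, size_t n_keys,
		  const size_t* dir, size_t n_slots, int key)
{
	size_t	lo = 0;
	size_t	hi = n_slots;
	size_t	end;
	size_t	i;

	/* Find the last slot whose first key is <= the search key. */
	while (hi - lo > 1) {
		size_t	mid = (lo + hi) / 2;

		if (keys[dir[mid]] <= key) {
			lo = mid;
		} else {
			hi = mid;
		}
	}

	/* Linear scan within the owned group (a handful of records). */
	end = (hi < n_slots) ? dir[hi] : n_keys;
	for (i = dir[lo]; i < end; i++) {
		if (keys[i] == key) {
			return(i);
		}
	}

	return(n_keys);	/* not found */
}
```

The on-page structure replaces the sorted array with the singly linked record list and stores the group size in the 4-bit n_owned field of each slot's record, which is exactly what page_dir_find_owner_slot() below walks in reverse.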
/***************************************************************//**
Looks for the directory slot which owns the given record.
@return the directory slot number */
UNIV_INTERN
ulint
page_dir_find_owner_slot(
/*=====================*/
	const rec_t*	rec)	/*!< in: the physical record */
{
	const page_t*			page;
	register uint16			rec_offs_bytes;
	register const page_dir_slot_t*	slot;
	register const page_dir_slot_t*	first_slot;
	register const rec_t*		r = rec;

	ut_ad(page_rec_check(rec));

	page = page_align(rec);
	first_slot = page_dir_get_nth_slot(page, 0);
	slot = page_dir_get_nth_slot(page, page_dir_get_n_slots(page) - 1);

	if (page_is_comp(page)) {
		while (rec_get_n_owned_new(r) == 0) {
			r = rec_get_next_ptr_const(r, TRUE);
			ut_ad(r >= page + PAGE_NEW_SUPREMUM);
			ut_ad(r < page + (UNIV_PAGE_SIZE - PAGE_DIR));
		}
	} else {
		while (rec_get_n_owned_old(r) == 0) {
			r = rec_get_next_ptr_const(r, FALSE);
			ut_ad(r >= page + PAGE_OLD_SUPREMUM);
			ut_ad(r < page + (UNIV_PAGE_SIZE - PAGE_DIR));
		}
	}

	rec_offs_bytes = mach_encode_2(r - page);

	while (UNIV_LIKELY(*(uint16*) slot != rec_offs_bytes)) {

		if (UNIV_UNLIKELY(slot == first_slot)) {
			fprintf(stderr,
				"InnoDB: Probable data corruption on"
				" page %lu\n"
				"InnoDB: Original record ",
				(ulong) page_get_page_no(page));

			if (page_is_comp(page)) {
				fputs("(compact record)", stderr);
			} else {
				rec_print_old(stderr, rec);
			}

			fputs("\n"
			      "InnoDB: on that page.\n"
			      "InnoDB: Cannot find the dir slot for record ",
			      stderr);
			if (page_is_comp(page)) {
				fputs("(compact record)", stderr);
			} else {
				rec_print_old(stderr, page
					      + mach_decode_2(rec_offs_bytes));
			}
			fputs("\n"
			      "InnoDB: on that page!\n", stderr);

			buf_page_print(page, 0);

			ut_error;
		}

		slot += PAGE_DIR_SLOT_SIZE;
	}

	return(((ulint) (first_slot - slot)) / PAGE_DIR_SLOT_SIZE);
}
/**************************************************************//**
Used to check the consistency of a directory slot.
@return TRUE if succeed */
static
ibool
page_dir_slot_check(
/*================*/
    page_dir_slot_t*    slot)   /*!< in: slot */
{
    page_t* page;
    ulint   n_slots;
    ulint   n_owned;

    ut_a(slot);

    page = page_align(slot);

    n_slots = page_dir_get_n_slots(page);

    ut_a(slot <= page_dir_get_nth_slot(page, 0));
    ut_a(slot >= page_dir_get_nth_slot(page, n_slots - 1));

    ut_a(page_rec_check(page_dir_slot_get_rec(slot)));

    if (page_is_comp(page)) {
        n_owned = rec_get_n_owned_new(page_dir_slot_get_rec(slot));
    } else {
        n_owned = rec_get_n_owned_old(page_dir_slot_get_rec(slot));
    }

    if (slot == page_dir_get_nth_slot(page, 0)) {
        ut_a(n_owned == 1);
    } else if (slot == page_dir_get_nth_slot(page, n_slots - 1)) {
        ut_a(n_owned >= 1);
        ut_a(n_owned <= PAGE_DIR_SLOT_MAX_N_OWNED);
    } else {
        ut_a(n_owned >= PAGE_DIR_SLOT_MIN_N_OWNED);
        ut_a(n_owned <= PAGE_DIR_SLOT_MAX_N_OWNED);
    }

    return(TRUE);
}
/*************************************************************//**
Sets the max trx id field value. */
UNIV_INTERN
void
page_set_max_trx_id(
/*================*/
    buf_block_t*    block,  /*!< in/out: page */
    page_zip_des_t* page_zip,/*!< in/out: compressed page, or NULL */
    trx_id_t        trx_id, /*!< in: transaction id */
    mtr_t*          mtr)    /*!< in/out: mini-transaction, or NULL */
{
    page_t*     page        = buf_block_get_frame(block);
#ifndef UNIV_HOTBACKUP
    const ibool is_hashed   = block->is_hashed;

    if (is_hashed) {
        rw_lock_x_lock(&btr_search_latch);
    }

    ut_ad(!mtr || mtr_memo_contains(mtr, block, MTR_MEMO_PAGE_X_FIX));
#endif /* !UNIV_HOTBACKUP */

    /* It is not necessary to write this change to the redo log, as
    during a database recovery we assume that the max trx id of every
    page is the maximum trx id assigned before the crash. */

    if (UNIV_LIKELY_NULL(page_zip)) {
        mach_write_to_8(page + (PAGE_HEADER + PAGE_MAX_TRX_ID), trx_id);
        page_zip_write_header(page_zip,
                              page + (PAGE_HEADER + PAGE_MAX_TRX_ID),
                              8, mtr);
#ifndef UNIV_HOTBACKUP
    } else if (mtr) {
        mlog_write_dulint(page + (PAGE_HEADER + PAGE_MAX_TRX_ID),
                          trx_id, mtr);
#endif /* !UNIV_HOTBACKUP */
    } else {
        mach_write_to_8(page + (PAGE_HEADER + PAGE_MAX_TRX_ID), trx_id);
    }

#ifndef UNIV_HOTBACKUP
    if (is_hashed) {
        rw_lock_x_unlock(&btr_search_latch);
    }
#endif /* !UNIV_HOTBACKUP */
}
/************************************************************//**
Allocates a block of memory from the heap of an index page.
@return pointer to start of allocated buffer, or NULL if allocation fails */
UNIV_INTERN
byte*
page_mem_alloc_heap(
/*================*/
    page_t*         page,   /*!< in/out: index page */
    page_zip_des_t* page_zip,/*!< in/out: compressed page with enough
                            space available for inserting the record,
                            or NULL */
    ulint           need,   /*!< in: total number of bytes needed */
    ulint*          heap_no)/*!< out: this contains the heap number
                            of the allocated record
                            if allocation succeeds */
{
    byte*   block;
    ulint   avl_space;

    ut_ad(page && heap_no);

    avl_space = page_get_max_insert_size(page, 1);

    if (avl_space >= need) {
        block = page_header_get_ptr(page, PAGE_HEAP_TOP);

        page_header_set_ptr(page, page_zip, PAGE_HEAP_TOP,
                            block + need);
        *heap_no = page_dir_get_n_heap(page);

        page_dir_set_n_heap(page, page_zip, 1 + *heap_no);

        return(block);
    }

    return(NULL);
}
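/* page_mem_alloc_heap() above is essentially a bump allocator: the heap
top advances toward the page directory, and allocation fails when the
bump would not fit. A minimal standalone sketch of the same idea (toy
struct and names, not the InnoDB API): */

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the page heap: heap_top grows toward the page
   directory at the end of the frame; an allocation succeeds only if the
   bump leaves the reserved directory space intact. */
struct toy_page {
    unsigned char   frame[16384];   /* page frame */
    size_t          heap_top;       /* offset of the first free byte */
    size_t          dir_start;      /* offset where the directory begins */
    size_t          n_heap;         /* number of heap records so far */
};

static unsigned char*
toy_mem_alloc_heap(struct toy_page *page, size_t need, size_t *heap_no)
{
    if (page->dir_start - page->heap_top >= need) {
        unsigned char*  block = page->frame + page->heap_top;

        page->heap_top += need;     /* bump the heap top */
        *heap_no = page->n_heap++;  /* hand out the next heap number */
        return(block);
    }

    return(NULL);   /* page is full */
}
```

/* Freed records are not returned to this heap; as in the real code they
go on the PAGE_FREE list, and the page is reorganized when fragmentation
accumulates. */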
#ifndef UNIV_HOTBACKUP
/**********************************************************//**
Writes a log record of page creation. */
UNIV_INLINE
void
page_create_write_log(
/*==================*/
    buf_frame_t*    frame,  /*!< in: a buffer frame where the page is
                            created */
    mtr_t*          mtr,    /*!< in: mini-transaction handle */
    ibool           comp)   /*!< in: TRUE=compact page format */
{
    mlog_write_initial_log_record(frame, comp
                                  ? MLOG_COMP_PAGE_CREATE
                                  : MLOG_PAGE_CREATE, mtr);
}
#else /* !UNIV_HOTBACKUP */
# define page_create_write_log(frame,mtr,comp) ((void) 0)
#endif /* !UNIV_HOTBACKUP */
/***********************************************************//**
Parses a redo log record of creating a page.
@return end of log record or NULL */
UNIV_INTERN
byte*
page_parse_create(
/*==============*/
    byte*           ptr,    /*!< in: buffer */
    byte*           end_ptr __attribute__((unused)), /*!< in: buffer end */
    ulint           comp,   /*!< in: nonzero=compact page format */
    buf_block_t*    block,  /*!< in: block or NULL */
    mtr_t*          mtr)    /*!< in: mtr or NULL */
{
    ut_ad(ptr && end_ptr);

    /* The record is empty, except for the record initial part */

    if (block) {
        page_create(block, mtr, comp);
    }

    return(ptr);
}
/**********************************************************//**
The index page creation function.
@return pointer to the page */
static
page_t*
page_create_low(
/*============*/
    buf_block_t*    block,      /*!< in: a buffer block where the
                                page is created */
    ulint           comp)       /*!< in: nonzero=compact page format */
{
    page_dir_slot_t* slot;
    mem_heap_t*     heap;
    dtuple_t*       tuple;
    dfield_t*       field;
    byte*           heap_top;
    rec_t*          infimum_rec;
    rec_t*          supremum_rec;
    page_t*         page;
    dict_index_t*   index;
    ulint*          offsets;

    ut_ad(block);
#if PAGE_BTR_IBUF_FREE_LIST + FLST_BASE_NODE_SIZE > PAGE_DATA
# error "PAGE_BTR_IBUF_FREE_LIST + FLST_BASE_NODE_SIZE > PAGE_DATA"
#endif
#if PAGE_BTR_IBUF_FREE_LIST_NODE + FLST_NODE_SIZE > PAGE_DATA
# error "PAGE_BTR_IBUF_FREE_LIST_NODE + FLST_NODE_SIZE > PAGE_DATA"
#endif

    /* The infimum and supremum records use a dummy index. */
    if (UNIV_LIKELY(comp)) {
        index = dict_ind_compact;
    } else {
        index = dict_ind_redundant;
    }

    /* 1. INCREMENT MODIFY CLOCK */
    buf_block_modify_clock_inc(block);

    page = buf_block_get_frame(block);

    fil_page_set_type(page, FIL_PAGE_INDEX);

    heap = mem_heap_create(200);

    /* 3. CREATE THE INFIMUM AND SUPREMUM RECORDS */

    /* Create first a data tuple for infimum record */
    tuple = dtuple_create(heap, 1);
    dtuple_set_info_bits(tuple, REC_STATUS_INFIMUM);
    field = dtuple_get_nth_field(tuple, 0);

    dfield_set_data(field, "infimum", 8);
    dtype_set(dfield_get_type(field),
              DATA_VARCHAR, DATA_ENGLISH | DATA_NOT_NULL, 8);
    /* Set the corresponding physical record to its place in the page
    record heap */

    heap_top = page + PAGE_DATA;

    infimum_rec = rec_convert_dtuple_to_rec(heap_top, index, tuple, 0);

    if (UNIV_LIKELY(comp)) {
        ut_a(infimum_rec == page + PAGE_NEW_INFIMUM);

        rec_set_n_owned_new(infimum_rec, NULL, 1);
        rec_set_heap_no_new(infimum_rec, 0);
    } else {
        ut_a(infimum_rec == page + PAGE_OLD_INFIMUM);

        rec_set_n_owned_old(infimum_rec, 1);
        rec_set_heap_no_old(infimum_rec, 0);
    }

    offsets = rec_get_offsets(infimum_rec, index, NULL,
                              ULINT_UNDEFINED, &heap);

    heap_top = rec_get_end(infimum_rec, offsets);

    /* Create then a tuple for supremum */

    tuple = dtuple_create(heap, 1);
    dtuple_set_info_bits(tuple, REC_STATUS_SUPREMUM);
    field = dtuple_get_nth_field(tuple, 0);

    dfield_set_data(field, "supremum", comp ? 8 : 9);
    dtype_set(dfield_get_type(field),
              DATA_VARCHAR, DATA_ENGLISH | DATA_NOT_NULL, comp ? 8 : 9);

    supremum_rec = rec_convert_dtuple_to_rec(heap_top, index, tuple, 0);

    if (UNIV_LIKELY(comp)) {
        ut_a(supremum_rec == page + PAGE_NEW_SUPREMUM);

        rec_set_n_owned_new(supremum_rec, NULL, 1);
        rec_set_heap_no_new(supremum_rec, 1);
    } else {
        ut_a(supremum_rec == page + PAGE_OLD_SUPREMUM);

        rec_set_n_owned_old(supremum_rec, 1);
        rec_set_heap_no_old(supremum_rec, 1);
    }

    offsets = rec_get_offsets(supremum_rec, index, offsets,
                              ULINT_UNDEFINED, &heap);
    heap_top = rec_get_end(supremum_rec, offsets);

    ut_ad(heap_top == page
          + (comp ? PAGE_NEW_SUPREMUM_END : PAGE_OLD_SUPREMUM_END));

    mem_heap_free(heap);

    /* 4. INITIALIZE THE PAGE */

    page_header_set_field(page, NULL, PAGE_N_DIR_SLOTS, 2);
    page_header_set_ptr(page, NULL, PAGE_HEAP_TOP, heap_top);
    page_header_set_field(page, NULL, PAGE_N_HEAP,
                          comp
                          ? 0x8000 | PAGE_HEAP_NO_USER_LOW
                          : PAGE_HEAP_NO_USER_LOW);
    page_header_set_ptr(page, NULL, PAGE_FREE, NULL);
    page_header_set_field(page, NULL, PAGE_GARBAGE, 0);
    page_header_set_ptr(page, NULL, PAGE_LAST_INSERT, NULL);
    page_header_set_field(page, NULL, PAGE_DIRECTION, PAGE_NO_DIRECTION);
    page_header_set_field(page, NULL, PAGE_N_DIRECTION, 0);
    page_header_set_field(page, NULL, PAGE_N_RECS, 0);
    page_set_max_trx_id(block, NULL, ut_dulint_zero, NULL);
    memset(heap_top, 0, UNIV_PAGE_SIZE - PAGE_EMPTY_DIR_START
           - page_offset(heap_top));

    /* 5. SET POINTERS IN RECORDS AND DIR SLOTS */

    /* Set the slots to point to infimum and supremum. */

    slot = page_dir_get_nth_slot(page, 0);
    page_dir_slot_set_rec(slot, infimum_rec);

    slot = page_dir_get_nth_slot(page, 1);
    page_dir_slot_set_rec(slot, supremum_rec);

    /* Set the next pointers in infimum and supremum */

    if (UNIV_LIKELY(comp)) {
        rec_set_next_offs_new(infimum_rec, PAGE_NEW_SUPREMUM);
        rec_set_next_offs_new(supremum_rec, 0);
    } else {
        rec_set_next_offs_old(infimum_rec, PAGE_OLD_SUPREMUM);
        rec_set_next_offs_old(supremum_rec, 0);
    }

    return(page);
}
/**********************************************************//**
Create an uncompressed B-tree index page.
@return pointer to the page */
UNIV_INTERN
page_t*
page_create(
/*========*/
    buf_block_t*    block,      /*!< in: a buffer block where the
                                page is created */
    mtr_t*          mtr,        /*!< in: mini-transaction handle */
    ulint           comp)       /*!< in: nonzero=compact page format */
{
    page_create_write_log(buf_block_get_frame(block), mtr, comp);
    return(page_create_low(block, comp));
}
/**********************************************************//**
Create a compressed B-tree index page.
@return pointer to the page */
UNIV_INTERN
page_t*
page_create_zip(
/*============*/
    buf_block_t*    block,      /*!< in/out: a buffer frame where the
                                page is created */
    dict_index_t*   index,      /*!< in: the index of the page */
    ulint           level,      /*!< in: the B-tree level of the page */
    mtr_t*          mtr)        /*!< in: mini-transaction handle */
{
    page_t*         page;
    page_zip_des_t* page_zip = buf_block_get_page_zip(block);

    ut_ad(block);
    ut_ad(page_zip);
    ut_ad(index);
    ut_ad(dict_table_is_comp(index->table));

    page = page_create_low(block, TRUE);
    mach_write_to_2(page + PAGE_HEADER + PAGE_LEVEL, level);

    if (UNIV_UNLIKELY(!page_zip_compress(page_zip, page, index, mtr))) {
        /* The compression of a newly created page
        should always succeed. */
        ut_error;
    }

    return(page);
}
/*************************************************************//**
Differs from page_copy_rec_list_end, because this function does not
touch the lock table and max trx id on page or compress the page. */
UNIV_INTERN
void
page_copy_rec_list_end_no_locks(
/*============================*/
    buf_block_t*    new_block,  /*!< in: index page to copy to */
    buf_block_t*    block,      /*!< in: index page of rec */
    rec_t*          rec,        /*!< in: record on page */
    dict_index_t*   index,      /*!< in: record descriptor */
    mtr_t*          mtr)        /*!< in: mtr */
{
    page_t*     new_page    = buf_block_get_frame(new_block);
    page_cur_t  cur1;
    rec_t*      cur2;
    mem_heap_t* heap        = NULL;
    ulint       offsets_[REC_OFFS_NORMAL_SIZE];
    ulint*      offsets     = offsets_;
    rec_offs_init(offsets_);

    page_cur_position(rec, block, &cur1);

    if (page_cur_is_before_first(&cur1)) {

        page_cur_move_to_next(&cur1);
    }

    ut_a((ibool) !!page_is_comp(new_page)
         == dict_table_is_comp(index->table));
    ut_a(page_is_comp(new_page) == page_rec_is_comp(rec));
    ut_a(mach_read_from_2(new_page + UNIV_PAGE_SIZE - 10) == (ulint)
         (page_is_comp(new_page) ? PAGE_NEW_INFIMUM : PAGE_OLD_INFIMUM));

    cur2 = page_get_infimum_rec(buf_block_get_frame(new_block));

    /* Copy records from the original page to the new page */

    while (!page_cur_is_after_last(&cur1)) {
        rec_t*  cur1_rec = page_cur_get_rec(&cur1);
        rec_t*  ins_rec;
        offsets = rec_get_offsets(cur1_rec, index, offsets,
                                  ULINT_UNDEFINED, &heap);
        ins_rec = page_cur_insert_rec_low(cur2, index,
                                          cur1_rec, offsets, mtr);
        if (UNIV_UNLIKELY(!ins_rec)) {
            /* Track an assertion failure reported on the mailing
            list on June 18th, 2003 */

            buf_page_print(new_page, 0);
            buf_page_print(page_align(rec), 0);
            ut_print_timestamp(stderr);

            fprintf(stderr,
                "InnoDB: rec offset %lu, cur1 offset %lu,"
                " cur2 offset %lu\n",
                (ulong) page_offset(rec),
                (ulong) page_offset(page_cur_get_rec(&cur1)),
                (ulong) page_offset(cur2));
            ut_error;
        }

        page_cur_move_to_next(&cur1);
        cur2 = ins_rec;
    }

    if (UNIV_LIKELY_NULL(heap)) {
        mem_heap_free(heap);
    }
}
#ifndef UNIV_HOTBACKUP
/*************************************************************//**
Copies records from page to new_page, from a given record onward,
including that record. Infimum and supremum records are not copied.
The records are copied to the start of the record list on new_page.
@return pointer to the original successor of the infimum record on
new_page, or NULL on zip overflow (new_block will be decompressed) */
UNIV_INTERN
rec_t*
page_copy_rec_list_end(
/*===================*/
    buf_block_t*    new_block,  /*!< in/out: index page to copy to */
    buf_block_t*    block,      /*!< in: index page containing rec */
    rec_t*          rec,        /*!< in: record on page */
    dict_index_t*   index,      /*!< in: record descriptor */
    mtr_t*          mtr)        /*!< in: mtr */
{
    page_t*         new_page        = buf_block_get_frame(new_block);
    page_zip_des_t* new_page_zip    = buf_block_get_page_zip(new_block);
    page_t*         page            = page_align(rec);
    rec_t*          ret             = page_rec_get_next(
        page_get_infimum_rec(new_page));
    ulint           log_mode        = 0; /* remove warning */

#ifdef UNIV_ZIP_DEBUG
    if (new_page_zip) {
        page_zip_des_t* page_zip = buf_block_get_page_zip(block);
        ut_a(page_zip);

        /* Strict page_zip_validate() may fail here.
        Furthermore, btr_compress() may set FIL_PAGE_PREV to
        FIL_NULL on new_page while leaving it intact on
        new_page_zip.  So, we cannot validate new_page_zip. */
        ut_a(page_zip_validate_low(page_zip, page, TRUE));
    }
#endif /* UNIV_ZIP_DEBUG */
    ut_ad(buf_block_get_frame(block) == page);
    ut_ad(page_is_leaf(page) == page_is_leaf(new_page));
    ut_ad(page_is_comp(page) == page_is_comp(new_page));
    /* Here, "ret" may be pointing to a user record or the
    predefined supremum record. */

    if (UNIV_LIKELY_NULL(new_page_zip)) {
        log_mode = mtr_set_log_mode(mtr, MTR_LOG_NONE);
    }

    if (page_dir_get_n_heap(new_page) == PAGE_HEAP_NO_USER_LOW) {
        page_copy_rec_list_end_to_created_page(new_page, rec,
                                               index, mtr);
    } else {
        page_copy_rec_list_end_no_locks(new_block, block, rec,
                                        index, mtr);
    }

    /* Update PAGE_MAX_TRX_ID on the uncompressed page.
    Modifications will be redo logged and copied to the compressed
    page in page_zip_compress() or page_zip_reorganize() below. */
    if (dict_index_is_sec_or_ibuf(index) && page_is_leaf(page)) {
        page_update_max_trx_id(new_block, NULL,
                               page_get_max_trx_id(page), mtr);
    }

    if (UNIV_LIKELY_NULL(new_page_zip)) {
        mtr_set_log_mode(mtr, log_mode);

        if (UNIV_UNLIKELY
            (!page_zip_compress(new_page_zip, new_page, index, mtr))) {
            /* Before trying to reorganize the page,
            store the number of preceding records on the page. */
            ulint   ret_pos = page_rec_get_n_recs_before(ret);
            /* Before copying, "ret" was the successor of
            the predefined infimum record.  It must still
            have at least one predecessor (the predefined
            infimum record, or a freshly copied record
            that is smaller than "ret"). */
            ut_a(ret_pos > 0);

            if (UNIV_UNLIKELY
                (!page_zip_reorganize(new_block, index, mtr))) {

                btr_blob_dbg_remove(new_page, index,
                                    "copy_end_reorg_fail");
                if (UNIV_UNLIKELY
                    (!page_zip_decompress(new_page_zip,
                                          new_page, FALSE))) {
                    ut_error;
                }
                ut_ad(page_validate(new_page, index));
                btr_blob_dbg_add(new_page, index,
                                 "copy_end_reorg_fail");
                return(NULL);
            } else {
                /* The page was reorganized:
                Seek to ret_pos. */
                ret = new_page + PAGE_NEW_INFIMUM;

                do {
                    ret = rec_get_next_ptr(ret, TRUE);
                } while (--ret_pos);
            }
        }
    }

    /* Update the lock table and possible hash index */

    lock_move_rec_list_end(new_block, block, rec);

    btr_search_move_or_delete_hash_entries(new_block, block, index);

    return(ret);
}
/*************************************************************//**
Copies records from page to new_page, up to the given record,
NOT including that record. Infimum and supremum records are not copied.
The records are copied to the end of the record list on new_page.
@return pointer to the original predecessor of the supremum record on
new_page, or NULL on zip overflow (new_block will be decompressed) */
UNIV_INTERN
rec_t*
page_copy_rec_list_start(
/*=====================*/
    buf_block_t*    new_block,  /*!< in/out: index page to copy to */
    buf_block_t*    block,      /*!< in: index page containing rec */
    rec_t*          rec,        /*!< in: record on page */
    dict_index_t*   index,      /*!< in: record descriptor */
    mtr_t*          mtr)        /*!< in: mtr */
{
    page_t*         new_page        = buf_block_get_frame(new_block);
    page_zip_des_t* new_page_zip    = buf_block_get_page_zip(new_block);
    page_cur_t      cur1;
    rec_t*          cur2;
    ulint           log_mode        = 0 /* remove warning */;
    mem_heap_t*     heap            = NULL;
    rec_t*          ret
        = page_rec_get_prev(page_get_supremum_rec(new_page));
    ulint           offsets_[REC_OFFS_NORMAL_SIZE];
    ulint*          offsets         = offsets_;
    rec_offs_init(offsets_);

    /* Here, "ret" may be pointing to a user record or the
    predefined infimum record. */

    if (page_rec_is_infimum(rec)) {

        return(ret);
    }

    if (UNIV_LIKELY_NULL(new_page_zip)) {
        log_mode = mtr_set_log_mode(mtr, MTR_LOG_NONE);
    }

    page_cur_set_before_first(block, &cur1);
    page_cur_move_to_next(&cur1);

    cur2 = ret;

    /* Copy records from the original page to the new page */

    while (page_cur_get_rec(&cur1) != rec) {
        rec_t*  cur1_rec = page_cur_get_rec(&cur1);
        offsets = rec_get_offsets(cur1_rec, index, offsets,
                                  ULINT_UNDEFINED, &heap);
        cur2 = page_cur_insert_rec_low(cur2, index,
                                       cur1_rec, offsets, mtr);
        ut_a(cur2);

        page_cur_move_to_next(&cur1);
    }

    if (UNIV_LIKELY_NULL(heap)) {
        mem_heap_free(heap);
    }

    /* Update PAGE_MAX_TRX_ID on the uncompressed page.
    Modifications will be redo logged and copied to the compressed
    page in page_zip_compress() or page_zip_reorganize() below. */
    if (dict_index_is_sec_or_ibuf(index)
        && page_is_leaf(page_align(rec))) {
        page_update_max_trx_id(new_block, NULL,
                               page_get_max_trx_id(page_align(rec)),
                               mtr);
    }

    if (UNIV_LIKELY_NULL(new_page_zip)) {
        mtr_set_log_mode(mtr, log_mode);

        if (UNIV_UNLIKELY
            (!page_zip_compress(new_page_zip, new_page, index, mtr))) {
            /* Before trying to reorganize the page,
            store the number of preceding records on the page. */
            ulint   ret_pos = page_rec_get_n_recs_before(ret);
            /* Before copying, "ret" was the predecessor
            of the predefined supremum record.  If it was
            the predefined infimum record, then it would
            still be the infimum.  Thus, the assertion
            ut_a(ret_pos > 0) would fail here. */

            if (UNIV_UNLIKELY
                (!page_zip_reorganize(new_block, index, mtr))) {

                btr_blob_dbg_remove(new_page, index,
                                    "copy_start_reorg_fail");
                if (UNIV_UNLIKELY
                    (!page_zip_decompress(new_page_zip,
                                          new_page, FALSE))) {
                    ut_error;
                }
                ut_ad(page_validate(new_page, index));
                btr_blob_dbg_add(new_page, index,
                                 "copy_start_reorg_fail");
                return(NULL);
            } else {
                /* The page was reorganized:
                Seek to ret_pos. */
                ret = new_page + PAGE_NEW_INFIMUM;

                do {
                    ret = rec_get_next_ptr(ret, TRUE);
                } while (--ret_pos);
            }
        }
    }

    /* Update the lock table and possible hash index */

    lock_move_rec_list_start(new_block, block, rec, ret);

    btr_search_move_or_delete_hash_entries(new_block, block, index);

    return(ret);
}
/**********************************************************//**
Writes a log record of a record list end or start deletion. */
UNIV_INLINE
void
page_delete_rec_list_write_log(
/*===========================*/
    rec_t*          rec,    /*!< in: record on page */
    dict_index_t*   index,  /*!< in: record descriptor */
    byte            type,   /*!< in: operation type:
                            MLOG_LIST_END_DELETE, ... */
    mtr_t*          mtr)    /*!< in: mtr */
{
    byte*   log_ptr;
    ut_ad(type == MLOG_LIST_END_DELETE
          || type == MLOG_LIST_START_DELETE
          || type == MLOG_COMP_LIST_END_DELETE
          || type == MLOG_COMP_LIST_START_DELETE);

    log_ptr = mlog_open_and_write_index(mtr, rec, index, type, 2);
    if (log_ptr) {
        /* Write the parameter as a 2-byte ulint */
        mach_write_to_2(log_ptr, page_offset(rec));
        mlog_close(mtr, log_ptr + 2);
    }
}
#else /* !UNIV_HOTBACKUP */
# define page_delete_rec_list_write_log(rec,index,type,mtr) ((void) 0)
#endif /* !UNIV_HOTBACKUP */
/**********************************************************//**
Parses a log record of a record list end or start deletion.
@return end of log record or NULL */
UNIV_INTERN
byte*
page_parse_delete_rec_list(
/*=======================*/
    byte            type,   /*!< in: MLOG_LIST_END_DELETE,
                            MLOG_LIST_START_DELETE,
                            MLOG_COMP_LIST_END_DELETE or
                            MLOG_COMP_LIST_START_DELETE */
    byte*           ptr,    /*!< in: buffer */
    byte*           end_ptr,/*!< in: buffer end */
    buf_block_t*    block,  /*!< in/out: buffer block or NULL */
    dict_index_t*   index,  /*!< in: record descriptor */
    mtr_t*          mtr)    /*!< in: mtr or NULL */
{
    page_t* page;
    ulint   offset;

    ut_ad(type == MLOG_LIST_END_DELETE
          || type == MLOG_LIST_START_DELETE
          || type == MLOG_COMP_LIST_END_DELETE
          || type == MLOG_COMP_LIST_START_DELETE);

    /* Read the record offset as a 2-byte ulint */

    if (end_ptr < ptr + 2) {

        return(NULL);
    }

    offset = mach_read_from_2(ptr);
    ptr += 2;

    if (!block) {

        return(ptr);
    }

    page = buf_block_get_frame(block);

    ut_ad(!!page_is_comp(page) == dict_table_is_comp(index->table));

    if (type == MLOG_LIST_END_DELETE
        || type == MLOG_COMP_LIST_END_DELETE) {
        page_delete_rec_list_end(page + offset, block, index,
                                 ULINT_UNDEFINED, ULINT_UNDEFINED,
                                 mtr);
    } else {
        page_delete_rec_list_start(page + offset, block, index, mtr);
    }

    return(ptr);
}
/*************************************************************//**
Deletes records from a page from a given record onward, including that record.
The infimum and supremum records are not deleted. */
UNIV_INTERN
void
page_delete_rec_list_end(
/*=====================*/
    rec_t*          rec,    /*!< in: pointer to record on page */
    buf_block_t*    block,  /*!< in: buffer block of the page */
    dict_index_t*   index,  /*!< in: record descriptor */
    ulint           n_recs, /*!< in: number of records to delete,
                            or ULINT_UNDEFINED if not known */
    ulint           size,   /*!< in: the sum of the sizes of the
                            records in the end of the chain to
                            delete, or ULINT_UNDEFINED if not known */
    mtr_t*          mtr)    /*!< in: mtr */
{
    page_dir_slot_t*slot;
    ulint           slot_index;
    rec_t*          last_rec;
    rec_t*          prev_rec;
    ulint           n_owned;
    page_zip_des_t* page_zip    = buf_block_get_page_zip(block);
    page_t*         page        = page_align(rec);
    mem_heap_t*     heap        = NULL;
    ulint           offsets_[REC_OFFS_NORMAL_SIZE];
    ulint*          offsets     = offsets_;
    rec_offs_init(offsets_);

    ut_ad(size == ULINT_UNDEFINED || size < UNIV_PAGE_SIZE);
    ut_ad(!page_zip || page_rec_is_comp(rec));
#ifdef UNIV_ZIP_DEBUG
    ut_a(!page_zip || page_zip_validate(page_zip, page));
#endif /* UNIV_ZIP_DEBUG */

    if (page_rec_is_infimum(rec)) {
        rec = page_rec_get_next(rec);
    }

    if (page_rec_is_supremum(rec)) {

        return;
    }

    /* Reset the last insert info in the page header and increment
    the modify clock for the frame */

    page_header_set_ptr(page, page_zip, PAGE_LAST_INSERT, NULL);

    /* The page gets invalid for optimistic searches: increment the
    frame modify clock */

    buf_block_modify_clock_inc(block);

    page_delete_rec_list_write_log(rec, index, page_is_comp(page)
                                   ? MLOG_COMP_LIST_END_DELETE
                                   : MLOG_LIST_END_DELETE, mtr);

    if (UNIV_LIKELY_NULL(page_zip)) {
        ulint   log_mode;

        ut_a(page_is_comp(page));
        /* Individual deletes are not logged */

        log_mode = mtr_set_log_mode(mtr, MTR_LOG_NONE);

        do {
            page_cur_t  cur;
            page_cur_position(rec, block, &cur);

            offsets = rec_get_offsets(rec, index, offsets,
                                      ULINT_UNDEFINED, &heap);
            rec = rec_get_next_ptr(rec, TRUE);
#ifdef UNIV_ZIP_DEBUG
            ut_a(page_zip_validate(page_zip, page));
#endif /* UNIV_ZIP_DEBUG */
            page_cur_delete_rec(&cur, index, offsets, mtr);
        } while (page_offset(rec) != PAGE_NEW_SUPREMUM);

        if (UNIV_LIKELY_NULL(heap)) {
            mem_heap_free(heap);
        }

        /* Restore log mode */

        mtr_set_log_mode(mtr, log_mode);
        return;
    }

    prev_rec = page_rec_get_prev(rec);

    last_rec = page_rec_get_prev(page_get_supremum_rec(page));

    if ((size == ULINT_UNDEFINED) || (n_recs == ULINT_UNDEFINED)) {
        rec_t*  rec2    = rec;
        /* Calculate the sum of sizes and the number of records */
        size = 0;
        n_recs = 0;

        do {
            ulint   s;
            offsets = rec_get_offsets(rec2, index, offsets,
                                      ULINT_UNDEFINED, &heap);
            s = rec_offs_size(offsets);
            ut_ad(rec2 - page + s - rec_offs_extra_size(offsets)
                  < UNIV_PAGE_SIZE);
            ut_ad(size + s < UNIV_PAGE_SIZE);
            size += s;
            n_recs++;

            rec2 = page_rec_get_next(rec2);
        } while (!page_rec_is_supremum(rec2));

        if (UNIV_LIKELY_NULL(heap)) {
            mem_heap_free(heap);
        }
    }

    ut_ad(size < UNIV_PAGE_SIZE);

    /* Update the page directory; there is no need to balance the number
    of the records owned by the supremum record, as it is allowed to be
    less than PAGE_DIR_SLOT_MIN_N_OWNED */

    if (page_is_comp(page)) {
        rec_t*  rec2    = rec;
        ulint   count   = 0;

        while (rec_get_n_owned_new(rec2) == 0) {
            count++;

            rec2 = rec_get_next_ptr(rec2, TRUE);
        }

        ut_ad(rec_get_n_owned_new(rec2) > count);

        n_owned = rec_get_n_owned_new(rec2) - count;
        slot_index = page_dir_find_owner_slot(rec2);
        slot = page_dir_get_nth_slot(page, slot_index);
    } else {
        rec_t*  rec2    = rec;
        ulint   count   = 0;

        while (rec_get_n_owned_old(rec2) == 0) {
            count++;

            rec2 = rec_get_next_ptr(rec2, FALSE);
        }

        ut_ad(rec_get_n_owned_old(rec2) > count);

        n_owned = rec_get_n_owned_old(rec2) - count;
        slot_index = page_dir_find_owner_slot(rec2);
        slot = page_dir_get_nth_slot(page, slot_index);
    }

    page_dir_slot_set_rec(slot, page_get_supremum_rec(page));
    page_dir_slot_set_n_owned(slot, NULL, n_owned);

    page_dir_set_n_slots(page, NULL, slot_index + 1);

    /* Remove the record chain segment from the record chain */
    page_rec_set_next(prev_rec, page_get_supremum_rec(page));

    btr_blob_dbg_op(page, rec, index, "delete_end",
                    btr_blob_dbg_remove_rec);

    /* Catenate the deleted chain segment to the page free list */

    page_rec_set_next(last_rec, page_header_get_ptr(page, PAGE_FREE));
    page_header_set_ptr(page, NULL, PAGE_FREE, rec);

    page_header_set_field(page, NULL, PAGE_GARBAGE, size
                          + page_header_get_field(page, PAGE_GARBAGE));

    page_header_set_field(page, NULL, PAGE_N_RECS,
                          (ulint)(page_get_n_recs(page) - n_recs));
}
/*************************************************************//**
Deletes records from page, up to the given record, NOT including
that record. Infimum and supremum records are not deleted. */
UNIV_INTERN
void
page_delete_rec_list_start(
/*=======================*/
    rec_t*          rec,    /*!< in: record on page */
    buf_block_t*    block,  /*!< in: buffer block of the page */
    dict_index_t*   index,  /*!< in: record descriptor */
    mtr_t*          mtr)    /*!< in: mtr */
{
    page_cur_t  cur1;
    ulint       log_mode;
    ulint       offsets_[REC_OFFS_NORMAL_SIZE];
    ulint*      offsets     = offsets_;
    mem_heap_t* heap        = NULL;
    byte        type;

    rec_offs_init(offsets_);

    ut_ad((ibool) !!page_rec_is_comp(rec)
          == dict_table_is_comp(index->table));
#ifdef UNIV_ZIP_DEBUG
    {
        page_zip_des_t* page_zip= buf_block_get_page_zip(block);
        page_t*         page    = buf_block_get_frame(block);

        /* page_zip_validate() would detect a min_rec_mark mismatch
        in btr_page_split_and_insert()
        between btr_attach_half_pages() and insert_page = ...
        when btr_page_get_split_rec_to_left() holds
        (direction == FSP_DOWN). */
        ut_a(!page_zip
             || page_zip_validate_low(page_zip, page, TRUE));
    }
#endif /* UNIV_ZIP_DEBUG */

    if (page_rec_is_infimum(rec)) {

        return;
    }

    if (page_rec_is_comp(rec)) {
        type = MLOG_COMP_LIST_START_DELETE;
    } else {
        type = MLOG_LIST_START_DELETE;
    }

    page_delete_rec_list_write_log(rec, index, type, mtr);

    page_cur_set_before_first(block, &cur1);
    page_cur_move_to_next(&cur1);

    /* Individual deletes are not logged */

    log_mode = mtr_set_log_mode(mtr, MTR_LOG_NONE);

    while (page_cur_get_rec(&cur1) != rec) {
        offsets = rec_get_offsets(page_cur_get_rec(&cur1), index,
                                  offsets, ULINT_UNDEFINED, &heap);
        page_cur_delete_rec(&cur1, index, offsets, mtr);
    }

    if (UNIV_LIKELY_NULL(heap)) {
        mem_heap_free(heap);
    }

    /* Restore log mode */

    mtr_set_log_mode(mtr, log_mode);
}
#ifndef UNIV_HOTBACKUP
/*************************************************************//**
Moves record list end to another page. Moved records include
split_rec.
@return TRUE on success; FALSE on compression failure (new_block will
be decompressed) */
UNIV_INTERN
ibool
page_move_rec_list_end(
/*===================*/
    buf_block_t*    new_block,  /*!< in/out: index page where to move */
    buf_block_t*    block,      /*!< in: index page from where to move */
    rec_t*          split_rec,  /*!< in: first record to move */
    dict_index_t*   index,      /*!< in: record descriptor */
    mtr_t*          mtr)        /*!< in: mtr */
{
    page_t*     new_page        = buf_block_get_frame(new_block);
    ulint       old_data_size;
    ulint       new_data_size;
    ulint       old_n_recs;
    ulint       new_n_recs;

    old_data_size = page_get_data_size(new_page);
    old_n_recs = page_get_n_recs(new_page);
#ifdef UNIV_ZIP_DEBUG
    {
        page_zip_des_t* new_page_zip
            = buf_block_get_page_zip(new_block);
        page_zip_des_t* page_zip
            = buf_block_get_page_zip(block);
        ut_a(!new_page_zip == !page_zip);
        ut_a(!new_page_zip
             || page_zip_validate(new_page_zip, new_page));
        ut_a(!page_zip
             || page_zip_validate(page_zip, page_align(split_rec)));
    }
#endif /* UNIV_ZIP_DEBUG */

    if (UNIV_UNLIKELY(!page_copy_rec_list_end(new_block, block,
                                              split_rec, index, mtr))) {
        return(FALSE);
    }

    new_data_size = page_get_data_size(new_page);
    new_n_recs = page_get_n_recs(new_page);

    ut_ad(new_data_size >= old_data_size);

    page_delete_rec_list_end(split_rec, block, index,
                             new_n_recs - old_n_recs,
                             new_data_size - old_data_size, mtr);

    return(TRUE);
}
/*************************************************************//**
Moves record list start to another page. Moved records do not include
split_rec.
@return TRUE on success; FALSE on compression failure */
UNIV_INTERN
ibool
page_move_rec_list_start(
/*=====================*/
    buf_block_t*    new_block,  /*!< in/out: index page where to move */
    buf_block_t*    block,      /*!< in/out: page containing split_rec */
    rec_t*          split_rec,  /*!< in: first record not to move */
    dict_index_t*   index,      /*!< in: record descriptor */
    mtr_t*          mtr)        /*!< in: mtr */
{
    if (UNIV_UNLIKELY(!page_copy_rec_list_start(new_block, block,
                                                split_rec, index, mtr))) {
        return(FALSE);
    }

    page_delete_rec_list_start(split_rec, block, index, mtr);

    return(TRUE);
}
/***********************************************************************//**
This is a low-level operation which is used in a database index creation
to update the page number of a created B-tree to a data dictionary record. */
UNIV_INTERN
void
page_rec_write_index_page_no(
/*=========================*/
    rec_t*  rec,    /*!< in: record to update */
    ulint   i,      /*!< in: index of the field to update */
    ulint   page_no,/*!< in: value to write */
    mtr_t*  mtr)    /*!< in: mtr */
{
    byte*   data;
    ulint   len;

    data = rec_get_nth_field_old(rec, i, &len);

    ut_ad(len == 4);

    mlog_write_ulint(data, page_no, MLOG_4BYTES, mtr);
}
#endif /* !UNIV_HOTBACKUP */
/**************************************************************//**
Used to delete n slots from the directory. This function updates
also n_owned fields in the records, so that the first slot after
the deleted ones inherits the records of the deleted slots. */
UNIV_INLINE
void
page_dir_delete_slot(
/*=================*/
    page_t*         page,   /*!< in/out: the index page */
    page_zip_des_t* page_zip,/*!< in/out: compressed page, or NULL */
    ulint           slot_no)/*!< in: slot to be deleted */
{
    page_dir_slot_t*    slot;
    ulint               n_owned;
    ulint               i;
    ulint               n_slots;

    ut_ad(!page_zip || page_is_comp(page));
    ut_ad(slot_no > 0);
    ut_ad(slot_no + 1 < page_dir_get_n_slots(page));

    n_slots = page_dir_get_n_slots(page);

    /* 1. Reset the n_owned fields of the slots to be
    deleted */
    slot = page_dir_get_nth_slot(page, slot_no);
    n_owned = page_dir_slot_get_n_owned(slot);
    page_dir_slot_set_n_owned(slot, page_zip, 0);

    /* 2. Update the n_owned value of the first non-deleted slot */

    slot = page_dir_get_nth_slot(page, slot_no + 1);
    page_dir_slot_set_n_owned(slot, page_zip,
                              n_owned + page_dir_slot_get_n_owned(slot));

    /* 3. Destroy the slot by copying slots */
    for (i = slot_no + 1; i < n_slots; i++) {
        rec_t*  rec = (rec_t*)
            page_dir_slot_get_rec(page_dir_get_nth_slot(page, i));
        page_dir_slot_set_rec(page_dir_get_nth_slot(page, i - 1), rec);
    }

    /* 4. Zero out the last slot, which will be removed */
    mach_write_to_2(page_dir_get_nth_slot(page, n_slots - 1), 0);

    /* 5. Update the page header */
    page_header_set_field(page, page_zip, PAGE_N_DIR_SLOTS, n_slots - 1);
}
/**************************************************************//**
Used to add n slots to the directory. Does not set the record pointers
in the added slots or update n_owned values: this is the responsibility
of the caller. */
UNIV_INLINE
void
page_dir_add_slot(
/*==============*/
	page_t*		page,	/*!< in/out: the index page */
	page_zip_des_t*	page_zip,/*!< in/out: compressed page, or NULL */
	ulint		start)	/*!< in: the slot above which the new slots
				are added */
{
	page_dir_slot_t*	slot;
	ulint			n_slots;

	n_slots = page_dir_get_n_slots(page);

	ut_ad(start < n_slots - 1);

	/* Update the page header */
	page_dir_set_n_slots(page, page_zip, n_slots + 1);

	/* Move slots up */
	slot = page_dir_get_nth_slot(page, n_slots);
	memmove(slot, slot + PAGE_DIR_SLOT_SIZE,
		(n_slots - 1 - start) * PAGE_DIR_SLOT_SIZE);
}
/****************************************************************//**
Splits a directory slot which owns too many records. */
UNIV_INTERN
void
page_dir_split_slot(
/*================*/
	page_t*		page,	/*!< in/out: index page */
	page_zip_des_t*	page_zip,/*!< in/out: compressed page whose
				uncompressed part will be written, or NULL */
	ulint		slot_no)/*!< in: the directory slot */
{
	rec_t*			rec;
	page_dir_slot_t*	new_slot;
	page_dir_slot_t*	prev_slot;
	page_dir_slot_t*	slot;
	ulint			i;
	ulint			n_owned;

	ut_ad(page);
	ut_ad(!page_zip || page_is_comp(page));
	ut_ad(slot_no > 0);

	slot = page_dir_get_nth_slot(page, slot_no);

	n_owned = page_dir_slot_get_n_owned(slot);
	ut_ad(n_owned == PAGE_DIR_SLOT_MAX_N_OWNED + 1);

	/* 1. We loop to find a record approximately in the middle of the
	records owned by the slot. */

	prev_slot = page_dir_get_nth_slot(page, slot_no - 1);
	rec = (rec_t*) page_dir_slot_get_rec(prev_slot);

	for (i = 0; i < n_owned / 2; i++) {
		rec = page_rec_get_next(rec);
	}

	ut_ad(n_owned / 2 >= PAGE_DIR_SLOT_MIN_N_OWNED);

	/* 2. We add one directory slot immediately below the slot to be
	split. */

	page_dir_add_slot(page, page_zip, slot_no - 1);

	/* The added slot is now number slot_no, and the old slot is
	now number slot_no + 1 */

	new_slot = page_dir_get_nth_slot(page, slot_no);
	slot = page_dir_get_nth_slot(page, slot_no + 1);

	/* 3. We store the appropriate values to the new slot. */

	page_dir_slot_set_rec(new_slot, rec);
	page_dir_slot_set_n_owned(new_slot, page_zip, n_owned / 2);

	/* 4. Finally, we update the number of records field of the
	original slot */

	page_dir_slot_set_n_owned(slot, page_zip, n_owned - (n_owned / 2));
}
/*************************************************************//**
Tries to balance the given directory slot with too few records with the upper
neighbor, so that there are at least the minimum number of records owned by
the slot; this may result in the merging of two slots. */
UNIV_INTERN
void
page_dir_balance_slot(
/*==================*/
	page_t*		page,	/*!< in/out: index page */
	page_zip_des_t*	page_zip,/*!< in/out: compressed page, or NULL */
	ulint		slot_no)/*!< in: the directory slot */
{
	page_dir_slot_t*	slot;
	page_dir_slot_t*	up_slot;
	ulint			n_owned;
	ulint			up_n_owned;
	rec_t*			old_rec;
	rec_t*			new_rec;

	ut_ad(page);
	ut_ad(!page_zip || page_is_comp(page));
	ut_ad(slot_no > 0);

	slot = page_dir_get_nth_slot(page, slot_no);

	/* The last directory slot cannot be balanced with the upper
	neighbor, as there is none. */

	if (UNIV_UNLIKELY(slot_no == page_dir_get_n_slots(page) - 1)) {

		return;
	}

	up_slot = page_dir_get_nth_slot(page, slot_no + 1);

	n_owned = page_dir_slot_get_n_owned(slot);
	up_n_owned = page_dir_slot_get_n_owned(up_slot);

	ut_ad(n_owned == PAGE_DIR_SLOT_MIN_N_OWNED - 1);

	/* If the upper slot has the minimum value of n_owned, we will merge
	the two slots, therefore we assert: */
	ut_ad(2 * PAGE_DIR_SLOT_MIN_N_OWNED - 1 <= PAGE_DIR_SLOT_MAX_N_OWNED);

	if (up_n_owned > PAGE_DIR_SLOT_MIN_N_OWNED) {

		/* In this case we can just transfer one record owned
		by the upper slot to the property of the lower slot */
		old_rec = (rec_t*) page_dir_slot_get_rec(slot);

		if (page_is_comp(page)) {
			new_rec = rec_get_next_ptr(old_rec, TRUE);

			rec_set_n_owned_new(old_rec, page_zip, 0);
			rec_set_n_owned_new(new_rec, page_zip, n_owned + 1);
		} else {
			new_rec = rec_get_next_ptr(old_rec, FALSE);

			rec_set_n_owned_old(old_rec, 0);
			rec_set_n_owned_old(new_rec, n_owned + 1);
		}

		page_dir_slot_set_rec(slot, new_rec);

		page_dir_slot_set_n_owned(up_slot, page_zip, up_n_owned - 1);
	} else {
		/* In this case we may merge the two slots */
		page_dir_delete_slot(page, page_zip, slot_no);
	}
}
#ifndef UNIV_HOTBACKUP
/************************************************************//**
Returns the middle record of the record list. If there are an even number
of records in the list, returns the first record of the upper half-list.
@return	middle record */
UNIV_INTERN
rec_t*
page_get_middle_rec(
/*================*/
	page_t*	page)	/*!< in: page */
{
	page_dir_slot_t*	slot;
	ulint			middle;
	ulint			i;
	ulint			n_owned;
	ulint			count;
	rec_t*			rec;

	/* This many records we must leave behind */
	middle = (page_get_n_recs(page) + PAGE_HEAP_NO_USER_LOW) / 2;

	count = 0;

	for (i = 0;; i++) {

		slot = page_dir_get_nth_slot(page, i);
		n_owned = page_dir_slot_get_n_owned(slot);

		if (count + n_owned > middle) {
			break;
		} else {
			count += n_owned;
		}
	}

	ut_ad(i > 0);
	slot = page_dir_get_nth_slot(page, i - 1);
	rec = (rec_t*) page_dir_slot_get_rec(slot);
	rec = page_rec_get_next(rec);

	/* There are now count records behind rec */

	for (i = 0; i < middle - count; i++) {
		rec = page_rec_get_next(rec);
	}

	return(rec);
}
#endif /* !UNIV_HOTBACKUP */
/***************************************************************//**
Returns the number of records before the given record in chain.
The number includes infimum and supremum records.
@return	number of records */
UNIV_INTERN
ulint
page_rec_get_n_recs_before(
/*=======================*/
	const rec_t*	rec)	/*!< in: the physical record */
{
	const page_dir_slot_t*	slot;
	const rec_t*		slot_rec;
	const page_t*		page;
	ulint			i;
	lint			n	= 0;

	ut_ad(page_rec_check(rec));

	page = page_align(rec);
	if (page_is_comp(page)) {
		while (rec_get_n_owned_new(rec) == 0) {

			rec = rec_get_next_ptr_const(rec, TRUE);
			n--;
		}

		for (i = 0; ; i++) {
			slot = page_dir_get_nth_slot(page, i);
			slot_rec = page_dir_slot_get_rec(slot);

			n += rec_get_n_owned_new(slot_rec);

			if (rec == slot_rec) {

				break;
			}
		}
	} else {
		while (rec_get_n_owned_old(rec) == 0) {

			rec = rec_get_next_ptr_const(rec, FALSE);
			n--;
		}

		for (i = 0; ; i++) {
			slot = page_dir_get_nth_slot(page, i);
			slot_rec = page_dir_slot_get_rec(slot);

			n += rec_get_n_owned_old(slot_rec);

			if (rec == slot_rec) {

				break;
			}
		}
	}

	n--;

	ut_ad(n >= 0);

	return((ulint) n);
}
#ifndef UNIV_HOTBACKUP
/************************************************************//**
Prints record contents including the data relevant only in
the index page context. */
UNIV_INTERN
void
page_rec_print(
/*===========*/
	const rec_t*	rec,	/*!< in: physical record */
	const ulint*	offsets)/*!< in: record descriptor */
{
	ut_a(!page_rec_is_comp(rec) == !rec_offs_comp(offsets));
	rec_print_new(stderr, rec, offsets);
	if (page_rec_is_comp(rec)) {
		fprintf(stderr,
			" n_owned: %lu; heap_no: %lu; next rec: %lu\n",
			(ulong) rec_get_n_owned_new(rec),
			(ulong) rec_get_heap_no_new(rec),
			(ulong) rec_get_next_offs(rec, TRUE));
	} else {
		fprintf(stderr,
			" n_owned: %lu; heap_no: %lu; next rec: %lu\n",
			(ulong) rec_get_n_owned_old(rec),
			(ulong) rec_get_heap_no_old(rec),
			(ulong) rec_get_next_offs(rec, TRUE));
	}

	page_rec_check(rec);
	rec_validate(rec, offsets);
}
/***************************************************************//**
This is used to print the contents of the directory for
debugging purposes. */
UNIV_INTERN
void
page_dir_print(
/*===========*/
	page_t*	page,	/*!< in: index page */
	ulint	pr_n)	/*!< in: print n first and n last entries */
{
	ulint			n;
	ulint			i;
	page_dir_slot_t*	slot;

	n = page_dir_get_n_slots(page);

	fprintf(stderr, "--------------------------------\n"
		"PAGE DIRECTORY\n"
		"Page address %p\n"
		"Directory stack top at offs: %lu; number of slots: %lu\n",
		page, (ulong) page_offset(page_dir_get_nth_slot(page, n - 1)),
		(ulong) n);
	for (i = 0; i < n; i++) {
		slot = page_dir_get_nth_slot(page, i);
		if ((i == pr_n) && (i < n - pr_n)) {
			fputs("    ...   \n", stderr);
		}
		if ((i < pr_n) || (i >= n - pr_n)) {
			fprintf(stderr,
				"Contents of slot: %lu: n_owned: %lu,"
				" rec offs: %lu\n",
				(ulong) i,
				(ulong) page_dir_slot_get_n_owned(slot),
				(ulong)
				page_offset(page_dir_slot_get_rec(slot)));
		}
	}
	fprintf(stderr, "Total of %lu records\n"
		"--------------------------------\n",
		(ulong) (PAGE_HEAP_NO_USER_LOW + page_get_n_recs(page)));
}
/***************************************************************//**
This is used to print the contents of the page record list for
debugging purposes. */
UNIV_INTERN
void
page_print_list(
/*============*/
	buf_block_t*	block,	/*!< in: index page */
	dict_index_t*	index,	/*!< in: dictionary index of the page */
	ulint		pr_n)	/*!< in: print n first and n last entries */
{
	page_t*		page		= block->frame;
	page_cur_t	cur;
	ulint		count;
	ulint		n_recs;
	mem_heap_t*	heap		= NULL;
	ulint		offsets_[REC_OFFS_NORMAL_SIZE];
	ulint*		offsets		= offsets_;
	rec_offs_init(offsets_);

	ut_a((ibool)!!page_is_comp(page) == dict_table_is_comp(index->table));

	fprintf(stderr,
		"--------------------------------\n"
		"PAGE RECORD LIST\n"
		"Page address %p\n", page);

	n_recs = page_get_n_recs(page);

	page_cur_set_before_first(block, &cur);
	count = 0;
	for (;;) {
		offsets = rec_get_offsets(cur.rec, index, offsets,
					  ULINT_UNDEFINED, &heap);
		page_rec_print(cur.rec, offsets);

		if (count == pr_n) {
			break;
		}
		if (page_cur_is_after_last(&cur)) {
			break;
		}
		page_cur_move_to_next(&cur);
		count++;
	}

	if (n_recs > 2 * pr_n) {
		fputs(" ... \n", stderr);
	}

	while (!page_cur_is_after_last(&cur)) {
		page_cur_move_to_next(&cur);

		if (count + pr_n >= n_recs) {
			offsets = rec_get_offsets(cur.rec, index, offsets,
						  ULINT_UNDEFINED, &heap);
			page_rec_print(cur.rec, offsets);
		}
		count++;
	}

	fprintf(stderr,
		"Total of %lu records \n"
		"--------------------------------\n",
		(ulong) (count + 1));

	if (UNIV_LIKELY_NULL(heap)) {
		mem_heap_free(heap);
	}
}
/***************************************************************//**
Prints the info in a page header. */
UNIV_INTERN
void
page_header_print(
/*==============*/
	const page_t*	page)
{
	fprintf(stderr,
		"--------------------------------\n"
		"PAGE HEADER INFO\n"
		"Page address %p, n records %lu (%s)\n"
		"n dir slots %lu, heap top %lu\n"
		"Page n heap %lu, free %lu, garbage %lu\n"
		"Page last insert %lu, direction %lu, n direction %lu\n",
		page, (ulong) page_header_get_field(page, PAGE_N_RECS),
		page_is_comp(page) ? "compact format" : "original format",
		(ulong) page_header_get_field(page, PAGE_N_DIR_SLOTS),
		(ulong) page_header_get_field(page, PAGE_HEAP_TOP),
		(ulong) page_dir_get_n_heap(page),
		(ulong) page_header_get_field(page, PAGE_FREE),
		(ulong) page_header_get_field(page, PAGE_GARBAGE),
		(ulong) page_header_get_field(page, PAGE_LAST_INSERT),
		(ulong) page_header_get_field(page, PAGE_DIRECTION),
		(ulong) page_header_get_field(page, PAGE_N_DIRECTION));
}
/***************************************************************//**
This is used to print the contents of the page for
debugging purposes. */
UNIV_INTERN
void
page_print(
/*=======*/
	buf_block_t*	block,	/*!< in: index page */
	dict_index_t*	index,	/*!< in: dictionary index of the page */
	ulint		dn,	/*!< in: print dn first and last entries
				in directory */
	ulint		rn)	/*!< in: print rn first and last records
				in directory */
{
	page_t*	page = block->frame;

	page_header_print(page);
	page_dir_print(page, dn);
	page_print_list(block, index, rn);
}
#endif /* !UNIV_HOTBACKUP */
/***************************************************************//**
The following is used to validate a record on a page. This function
differs from rec_validate as it can also check the n_owned field and
the heap_no field.
@return	TRUE if ok */
UNIV_INTERN
ibool
page_rec_validate(
/*==============*/
	rec_t*		rec,	/*!< in: physical record */
	const ulint*	offsets)/*!< in: array returned by rec_get_offsets() */
{
	ulint	n_owned;
	ulint	heap_no;
	page_t*	page;

	page = page_align(rec);
	ut_a(!page_is_comp(page) == !rec_offs_comp(offsets));

	page_rec_check(rec);
	rec_validate(rec, offsets);

	if (page_rec_is_comp(rec)) {
		n_owned = rec_get_n_owned_new(rec);
		heap_no = rec_get_heap_no_new(rec);
	} else {
		n_owned = rec_get_n_owned_old(rec);
		heap_no = rec_get_heap_no_old(rec);
	}

	if (UNIV_UNLIKELY(!(n_owned <= PAGE_DIR_SLOT_MAX_N_OWNED))) {
		fprintf(stderr,
			"InnoDB: Dir slot of rec %lu, n owned too big %lu\n",
			(ulong) page_offset(rec), (ulong) n_owned);
		return(FALSE);
	}

	if (UNIV_UNLIKELY(!(heap_no < page_dir_get_n_heap(page)))) {
		fprintf(stderr,
			"InnoDB: Heap no of rec %lu too big %lu %lu\n",
			(ulong) page_offset(rec), (ulong) heap_no,
			(ulong) page_dir_get_n_heap(page));
		return(FALSE);
	}

	return(TRUE);
}
#ifndef UNIV_HOTBACKUP
/***************************************************************//**
Checks that the first directory slot points to the infimum record and
the last to the supremum. This function is intended to track if the
bug fixed in 4.0.14 has caused corruption to users' databases. */
UNIV_INTERN
void
page_check_dir(
/*===========*/
	const page_t*	page)	/*!< in: index page */
{
	ulint	n_slots;
	ulint	infimum_offs;
	ulint	supremum_offs;

	n_slots = page_dir_get_n_slots(page);
	infimum_offs = mach_read_from_2(page_dir_get_nth_slot(page, 0));
	supremum_offs = mach_read_from_2(page_dir_get_nth_slot(page,
							       n_slots - 1));

	if (UNIV_UNLIKELY(!page_rec_is_infimum_low(infimum_offs))) {

		fprintf(stderr,
			"InnoDB: Page directory corruption:"
			" infimum not pointed to\n");
		buf_page_print(page, 0);
	}

	if (UNIV_UNLIKELY(!page_rec_is_supremum_low(supremum_offs))) {

		fprintf(stderr,
			"InnoDB: Page directory corruption:"
			" supremum not pointed to\n");
		buf_page_print(page, 0);
	}
}
#endif /* !UNIV_HOTBACKUP */
/***************************************************************//**
This function checks the consistency of an index page when we do not
know the index. This is also resilient so that this should never crash
even if the page is total garbage.
@return	TRUE if ok */
UNIV_INTERN
ibool
page_simple_validate_old(
/*=====================*/
	page_t*	page)	/*!< in: old-style index page */
{
	page_dir_slot_t* slot;
	ulint		slot_no;
	ulint		n_slots;
	rec_t*		rec;
	byte*		rec_heap_top;
	ulint		count;
	ulint		own_count;
	ibool		ret	= FALSE;

	ut_a(!page_is_comp(page));

	/* Check first that the record heap and the directory do not
	overlap. */

	n_slots = page_dir_get_n_slots(page);

	if (UNIV_UNLIKELY(n_slots > UNIV_PAGE_SIZE / 4)) {
		fprintf(stderr,
			"InnoDB: Nonsensical number %lu of page dir slots\n",
			(ulong) n_slots);

		goto func_exit;
	}

	rec_heap_top = page_header_get_ptr(page, PAGE_HEAP_TOP);

	if (UNIV_UNLIKELY(rec_heap_top
			  > page_dir_get_nth_slot(page, n_slots - 1))) {

		fprintf(stderr,
			"InnoDB: Record heap and dir overlap on a page,"
			" heap top %lu, dir %lu\n",
			(ulong) page_header_get_field(page, PAGE_HEAP_TOP),
			(ulong)
			page_offset(page_dir_get_nth_slot(page, n_slots - 1)));

		goto func_exit;
	}

	/* Validate the record list in a loop checking also that it is
	consistent with the page record directory. */

	count = 0;
	own_count = 1;
	slot_no = 0;
	slot = page_dir_get_nth_slot(page, slot_no);

	rec = page_get_infimum_rec(page);

	for (;;) {
		if (UNIV_UNLIKELY(rec > rec_heap_top)) {
			fprintf(stderr,
				"InnoDB: Record %lu is above"
				" rec heap top %lu\n",
				(ulong)(rec - page),
				(ulong)(rec_heap_top - page));

			goto func_exit;
		}

		if (UNIV_UNLIKELY(rec_get_n_owned_old(rec))) {
			/* This is a record pointed to by a dir slot */
			if (UNIV_UNLIKELY(rec_get_n_owned_old(rec)
					  != own_count)) {

				fprintf(stderr,
					"InnoDB: Wrong owned count %lu, %lu,"
					" rec %lu\n",
					(ulong) rec_get_n_owned_old(rec),
					(ulong) own_count,
					(ulong)(rec - page));

				goto func_exit;
			}

			if (UNIV_UNLIKELY
			    (page_dir_slot_get_rec(slot) != rec)) {
				fprintf(stderr,
					"InnoDB: Dir slot does not point"
					" to right rec %lu\n",
					(ulong)(rec - page));

				goto func_exit;
			}

			own_count = 0;

			if (!page_rec_is_supremum(rec)) {
				slot_no++;
				slot = page_dir_get_nth_slot(page, slot_no);
			}
		}

		if (page_rec_is_supremum(rec)) {

			break;
		}

		if (UNIV_UNLIKELY
		    (rec_get_next_offs(rec, FALSE) < FIL_PAGE_DATA
		     || rec_get_next_offs(rec, FALSE) >= UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Next record offset"
				" nonsensical %lu for rec %lu\n",
				(ulong) rec_get_next_offs(rec, FALSE),
				(ulong) (rec - page));

			goto func_exit;
		}

		count++;

		if (UNIV_UNLIKELY(count > UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Page record list appears"
				" to be circular %lu\n",
				(ulong) count);
			goto func_exit;
		}

		rec = page_rec_get_next(rec);
		own_count++;
	}

	if (UNIV_UNLIKELY(rec_get_n_owned_old(rec) == 0)) {
		fprintf(stderr, "InnoDB: n owned is zero in a supremum rec\n");

		goto func_exit;
	}

	if (UNIV_UNLIKELY(slot_no != n_slots - 1)) {
		fprintf(stderr, "InnoDB: n slots wrong %lu, %lu\n",
			(ulong) slot_no, (ulong) (n_slots - 1));
		goto func_exit;
	}

	if (UNIV_UNLIKELY(page_header_get_field(page, PAGE_N_RECS)
			  + PAGE_HEAP_NO_USER_LOW
			  != count + 1)) {
		fprintf(stderr, "InnoDB: n recs wrong %lu %lu\n",
			(ulong) page_header_get_field(page, PAGE_N_RECS)
			+ PAGE_HEAP_NO_USER_LOW,
			(ulong) (count + 1));

		goto func_exit;
	}

	/* Check then the free list */
	rec = page_header_get_ptr(page, PAGE_FREE);

	while (rec != NULL) {
		if (UNIV_UNLIKELY(rec < page + FIL_PAGE_DATA
				  || rec >= page + UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Free list record has"
				" a nonsensical offset %lu\n",
				(ulong) (rec - page));

			goto func_exit;
		}

		if (UNIV_UNLIKELY(rec > rec_heap_top)) {
			fprintf(stderr,
				"InnoDB: Free list record %lu"
				" is above rec heap top %lu\n",
				(ulong) (rec - page),
				(ulong) (rec_heap_top - page));

			goto func_exit;
		}

		count++;

		if (UNIV_UNLIKELY(count > UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Page free list appears"
				" to be circular %lu\n",
				(ulong) count);
			goto func_exit;
		}

		rec = page_rec_get_next(rec);
	}

	if (UNIV_UNLIKELY(page_dir_get_n_heap(page) != count + 1)) {

		fprintf(stderr, "InnoDB: N heap is wrong %lu, %lu\n",
			(ulong) page_dir_get_n_heap(page),
			(ulong) (count + 1));

		goto func_exit;
	}

	ret = TRUE;

func_exit:
	return(ret);
}
/***************************************************************//**
This function checks the consistency of an index page when we do not
know the index. This is also resilient so that this should never crash
even if the page is total garbage.
@return	TRUE if ok */
UNIV_INTERN
ibool
page_simple_validate_new(
/*=====================*/
	page_t*	page)	/*!< in: new-style index page */
{
	page_dir_slot_t* slot;
	ulint		slot_no;
	ulint		n_slots;
	rec_t*		rec;
	byte*		rec_heap_top;
	ulint		count;
	ulint		own_count;
	ibool		ret	= FALSE;

	ut_a(page_is_comp(page));

	/* Check first that the record heap and the directory do not
	overlap. */

	n_slots = page_dir_get_n_slots(page);

	if (UNIV_UNLIKELY(n_slots > UNIV_PAGE_SIZE / 4)) {
		fprintf(stderr,
			"InnoDB: Nonsensical number %lu"
			" of page dir slots\n", (ulong) n_slots);

		goto func_exit;
	}

	rec_heap_top = page_header_get_ptr(page, PAGE_HEAP_TOP);

	if (UNIV_UNLIKELY(rec_heap_top
			  > page_dir_get_nth_slot(page, n_slots - 1))) {

		fprintf(stderr,
			"InnoDB: Record heap and dir overlap on a page,"
			" heap top %lu, dir %lu\n",
			(ulong) page_header_get_field(page, PAGE_HEAP_TOP),
			(ulong)
			page_offset(page_dir_get_nth_slot(page, n_slots - 1)));

		goto func_exit;
	}

	/* Validate the record list in a loop checking also that it is
	consistent with the page record directory. */

	count = 0;
	own_count = 1;
	slot_no = 0;
	slot = page_dir_get_nth_slot(page, slot_no);

	rec = page_get_infimum_rec(page);

	for (;;) {
		if (UNIV_UNLIKELY(rec > rec_heap_top)) {
			fprintf(stderr,
				"InnoDB: Record %lu is above rec"
				" heap top %lu\n",
				(ulong) page_offset(rec),
				(ulong) page_offset(rec_heap_top));

			goto func_exit;
		}

		if (UNIV_UNLIKELY(rec_get_n_owned_new(rec))) {
			/* This is a record pointed to by a dir slot */
			if (UNIV_UNLIKELY(rec_get_n_owned_new(rec)
					  != own_count)) {

				fprintf(stderr,
					"InnoDB: Wrong owned count %lu, %lu,"
					" rec %lu\n",
					(ulong) rec_get_n_owned_new(rec),
					(ulong) own_count,
					(ulong) page_offset(rec));

				goto func_exit;
			}

			if (UNIV_UNLIKELY
			    (page_dir_slot_get_rec(slot) != rec)) {
				fprintf(stderr,
					"InnoDB: Dir slot does not point"
					" to right rec %lu\n",
					(ulong) page_offset(rec));

				goto func_exit;
			}

			own_count = 0;

			if (!page_rec_is_supremum(rec)) {
				slot_no++;
				slot = page_dir_get_nth_slot(page, slot_no);
			}
		}

		if (page_rec_is_supremum(rec)) {

			break;
		}

		if (UNIV_UNLIKELY
		    (rec_get_next_offs(rec, TRUE) < FIL_PAGE_DATA
		     || rec_get_next_offs(rec, TRUE) >= UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Next record offset nonsensical %lu"
				" for rec %lu\n",
				(ulong) rec_get_next_offs(rec, TRUE),
				(ulong) page_offset(rec));

			goto func_exit;
		}

		count++;

		if (UNIV_UNLIKELY(count > UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Page record list appears"
				" to be circular %lu\n",
				(ulong) count);
			goto func_exit;
		}

		rec = page_rec_get_next(rec);
		own_count++;
	}

	if (UNIV_UNLIKELY(rec_get_n_owned_new(rec) == 0)) {
		fprintf(stderr, "InnoDB: n owned is zero"
			" in a supremum rec\n");

		goto func_exit;
	}

	if (UNIV_UNLIKELY(slot_no != n_slots - 1)) {
		fprintf(stderr, "InnoDB: n slots wrong %lu, %lu\n",
			(ulong) slot_no, (ulong) (n_slots - 1));
		goto func_exit;
	}

	if (UNIV_UNLIKELY(page_header_get_field(page, PAGE_N_RECS)
			  + PAGE_HEAP_NO_USER_LOW
			  != count + 1)) {
		fprintf(stderr, "InnoDB: n recs wrong %lu %lu\n",
			(ulong) page_header_get_field(page, PAGE_N_RECS)
			+ PAGE_HEAP_NO_USER_LOW,
			(ulong) (count + 1));

		goto func_exit;
	}

	/* Check then the free list */
	rec = page_header_get_ptr(page, PAGE_FREE);

	while (rec != NULL) {
		if (UNIV_UNLIKELY(rec < page + FIL_PAGE_DATA
				  || rec >= page + UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Free list record has"
				" a nonsensical offset %lu\n",
				(ulong) page_offset(rec));

			goto func_exit;
		}

		if (UNIV_UNLIKELY(rec > rec_heap_top)) {
			fprintf(stderr,
				"InnoDB: Free list record %lu"
				" is above rec heap top %lu\n",
				(ulong) page_offset(rec),
				(ulong) page_offset(rec_heap_top));

			goto func_exit;
		}

		count++;

		if (UNIV_UNLIKELY(count > UNIV_PAGE_SIZE)) {
			fprintf(stderr,
				"InnoDB: Page free list appears"
				" to be circular %lu\n",
				(ulong) count);
			goto func_exit;
		}

		rec = page_rec_get_next(rec);
	}

	if (UNIV_UNLIKELY(page_dir_get_n_heap(page) != count + 1)) {

		fprintf(stderr, "InnoDB: N heap is wrong %lu, %lu\n",
			(ulong) page_dir_get_n_heap(page),
			(ulong) (count + 1));

		goto func_exit;
	}

	ret = TRUE;

func_exit:
	return(ret);
}
/***************************************************************//**
This function checks the consistency of an index page.
@return	TRUE if ok */
UNIV_INTERN
ibool
page_validate(
/*==========*/
	page_t*		page,	/*!< in: index page */
	dict_index_t*	index)	/*!< in: data dictionary index containing
				the page record type definition */
{
	page_dir_slot_t*slot;
	mem_heap_t*	heap;
	byte*		buf;
	ulint		count;
	ulint		own_count;
	ulint		rec_own_count;
	ulint		slot_no;
	ulint		data_size;
	rec_t*		rec;
	rec_t*		old_rec		= NULL;
	ulint		offs;
	ulint		n_slots;
	ibool		ret		= FALSE;
	ulint		i;
	ulint*		offsets		= NULL;
	ulint*		old_offsets	= NULL;

	if (UNIV_UNLIKELY((ibool) !!page_is_comp(page)
			  != dict_table_is_comp(index->table))) {
		fputs("InnoDB: 'compact format' flag mismatch\n", stderr);
		goto func_exit2;
	}
	if (page_is_comp(page)) {
		if (UNIV_UNLIKELY(!page_simple_validate_new(page))) {
			goto func_exit2;
		}
	} else {
		if (UNIV_UNLIKELY(!page_simple_validate_old(page))) {
			goto func_exit2;
		}
	}

	heap = mem_heap_create(UNIV_PAGE_SIZE + 200);

	/* The following buffer is used to check that the
	records in the page record heap do not overlap */

	buf = mem_heap_zalloc(heap, UNIV_PAGE_SIZE);

	/* Check first that the record heap and the directory do not
	overlap. */

	n_slots = page_dir_get_n_slots(page);

	if (UNIV_UNLIKELY(!(page_header_get_ptr(page, PAGE_HEAP_TOP)
			    <= page_dir_get_nth_slot(page, n_slots - 1)))) {

		fprintf(stderr,
			"InnoDB: Record heap and dir overlap"
			" on space %lu page %lu index %s, %p, %p\n",
			(ulong) page_get_space_id(page),
			(ulong) page_get_page_no(page), index->name,
			page_header_get_ptr(page, PAGE_HEAP_TOP),
			page_dir_get_nth_slot(page, n_slots - 1));

		goto func_exit;
	}

	/* Validate the record list in a loop checking also that
	it is consistent with the directory. */
	count = 0;
	data_size = 0;
	own_count = 1;
	slot_no = 0;
	slot = page_dir_get_nth_slot(page, slot_no);

	rec = page_get_infimum_rec(page);

	for (;;) {
		offsets = rec_get_offsets(rec, index, offsets,
					  ULINT_UNDEFINED, &heap);

		if (page_is_comp(page) && page_rec_is_user_rec(rec)
		    && UNIV_UNLIKELY(rec_get_node_ptr_flag(rec)
				     == page_is_leaf(page))) {
			fputs("InnoDB: node_ptr flag mismatch\n", stderr);
			goto func_exit;
		}

		if (UNIV_UNLIKELY(!page_rec_validate(rec, offsets))) {
			goto func_exit;
		}

#ifndef UNIV_HOTBACKUP
		/* Check that the records are in the ascending order */
		if (UNIV_LIKELY(count >= PAGE_HEAP_NO_USER_LOW)
		    && !page_rec_is_supremum(rec)) {
			if (UNIV_UNLIKELY
			    (1 != cmp_rec_rec(rec, old_rec,
					      offsets, old_offsets, index))) {
				fprintf(stderr,
					"InnoDB: Records in wrong order"
					" on space %lu page %lu index %s\n",
					(ulong) page_get_space_id(page),
					(ulong) page_get_page_no(page),
					index->name);
				fputs("\nInnoDB: previous record ", stderr);
				rec_print_new(stderr, old_rec, old_offsets);
				fputs("\nInnoDB: record ", stderr);
				rec_print_new(stderr, rec, offsets);
				putc('\n', stderr);

				goto func_exit;
			}
		}
#endif /* !UNIV_HOTBACKUP */

		if (page_rec_is_user_rec(rec)) {

			data_size += rec_offs_size(offsets);
		}

		offs = page_offset(rec_get_start(rec, offsets));
		i = rec_offs_size(offsets);
		if (UNIV_UNLIKELY(offs + i >= UNIV_PAGE_SIZE)) {
			fputs("InnoDB: record offset out of bounds\n", stderr);
			goto func_exit;
		}

		while (i--) {
			if (UNIV_UNLIKELY(buf[offs + i])) {
				/* No other record may overlap this */

				fputs("InnoDB: Record overlaps another\n",
				      stderr);
				goto func_exit;
			}

			buf[offs + i] = 1;
		}

		if (page_is_comp(page)) {
			rec_own_count = rec_get_n_owned_new(rec);
		} else {
			rec_own_count = rec_get_n_owned_old(rec);
		}

		if (UNIV_UNLIKELY(rec_own_count)) {
			/* This is a record pointed to by a dir slot */
			if (UNIV_UNLIKELY(rec_own_count != own_count)) {
				fprintf(stderr,
					"InnoDB: Wrong owned count %lu, %lu\n",
					(ulong) rec_own_count,
					(ulong) own_count);
				goto func_exit;
			}

			if (page_dir_slot_get_rec(slot) != rec) {
				fputs("InnoDB: Dir slot does not"
				      " point to right rec\n",
				      stderr);
				goto func_exit;
			}

			page_dir_slot_check(slot);

			own_count = 0;
			if (!page_rec_is_supremum(rec)) {
				slot_no++;
				slot = page_dir_get_nth_slot(page, slot_no);
			}
		}

		if (page_rec_is_supremum(rec)) {
			break;
		}

		count++;
		own_count++;
		old_rec = rec;
		rec = page_rec_get_next(rec);

		/* set old_offsets to offsets; recycle offsets */
		{
			ulint*	offs = old_offsets;
			old_offsets = offsets;
			offsets = offs;
		}
	}

	if (page_is_comp(page)) {
		if (UNIV_UNLIKELY(rec_get_n_owned_new(rec) == 0)) {

			goto n_owned_zero;
		}
	} else if (UNIV_UNLIKELY(rec_get_n_owned_old(rec) == 0)) {
n_owned_zero:
		fputs("InnoDB: n owned is zero\n", stderr);
		goto func_exit;
	}

	if (UNIV_UNLIKELY(slot_no != n_slots - 1)) {
		fprintf(stderr, "InnoDB: n slots wrong %lu %lu\n",
			(ulong) slot_no, (ulong) (n_slots - 1));
		goto func_exit;
	}

	if (UNIV_UNLIKELY(page_header_get_field(page, PAGE_N_RECS)
			  + PAGE_HEAP_NO_USER_LOW
			  != count + 1)) {
		fprintf(stderr, "InnoDB: n recs wrong %lu %lu\n",
			(ulong) page_header_get_field(page, PAGE_N_RECS)
			+ PAGE_HEAP_NO_USER_LOW,
			(ulong) (count + 1));
		goto func_exit;
	}

	if (UNIV_UNLIKELY(data_size != page_get_data_size(page))) {
		fprintf(stderr,
			"InnoDB: Summed data size %lu, returned by func %lu\n",
			(ulong) data_size, (ulong) page_get_data_size(page));
		goto func_exit;
	}

	/* Check then the free list */
	rec = page_header_get_ptr(page, PAGE_FREE);

	while (rec != NULL) {
		offsets = rec_get_offsets(rec, index, offsets,
					  ULINT_UNDEFINED, &heap);
		if (UNIV_UNLIKELY(!page_rec_validate(rec, offsets))) {

			goto func_exit;
		}

		count++;
		offs = page_offset(rec_get_start(rec, offsets));
		i = rec_offs_size(offsets);
		if (UNIV_UNLIKELY(offs + i >= UNIV_PAGE_SIZE)) {
			fputs("InnoDB: record offset out of bounds\n", stderr);
			goto func_exit;
		}

		while (i--) {

			if (UNIV_UNLIKELY(buf[offs + i])) {
				fputs("InnoDB: Record overlaps another"
				      " in free list\n", stderr);
				goto func_exit;
			}

			buf[offs + i] = 1;
		}

		rec = page_rec_get_next(rec);
	}

	if (UNIV_UNLIKELY(page_dir_get_n_heap(page) != count + 1)) {
		fprintf(stderr, "InnoDB: N heap is wrong %lu %lu\n",
			(ulong) page_dir_get_n_heap(page),
			(ulong) count + 1);
		goto func_exit;
	}

	ret = TRUE;

func_exit:
	mem_heap_free(heap);

	if (UNIV_UNLIKELY(ret == FALSE)) {
func_exit2:
		fprintf(stderr,
			"InnoDB: Apparent corruption"
			" in space %lu page %lu index %s\n",
			(ulong) page_get_space_id(page),
			(ulong) page_get_page_no(page),
			index->name);
		buf_page_print(page, 0);
	}

	return(ret);
}
#ifndef UNIV_HOTBACKUP
/***************************************************************//**
Looks in the page record list for a record with the given heap number.
@return	record, NULL if not found */
UNIV_INTERN
const rec_t*
page_find_rec_with_heap_no(
/*=======================*/
	const page_t*	page,	/*!< in: index page */
	ulint		heap_no)/*!< in: heap number */
{
	const rec_t*	rec;

	if (page_is_comp(page)) {
		rec = page + PAGE_NEW_INFIMUM;

		for (;;) {
			ulint	rec_heap_no = rec_get_heap_no_new(rec);

			if (rec_heap_no == heap_no) {

				return(rec);
			} else if (rec_heap_no == PAGE_HEAP_NO_SUPREMUM) {

				return(NULL);
			}

			rec = page + rec_get_next_offs(rec, TRUE);
		}
	} else {
		rec = page + PAGE_OLD_INFIMUM;

		for (;;) {
			ulint	rec_heap_no = rec_get_heap_no_old(rec);

			if (rec_heap_no == heap_no) {

				return(rec);
			} else if (rec_heap_no == PAGE_HEAP_NO_SUPREMUM) {

				return(NULL);
			}

			rec = page + rec_get_next_offs(rec, FALSE);
		}
	}
}
#endif /* !UNIV_HOTBACKUP */