BUG#21842 (Cluster fails to replicate to innodb or myisam with err 134 using TPC-B):

Problem: An RBR event can contain incomplete row data (only the key value and the fields which have been changed). In that case, when the row is unpacked into a record and written to a table, the missing fields get incorrect NULL values, leading to master-slave inconsistency.

Solution: Use the values found in the slave's table for columns which are not given in the rows event.

The code for writing a single row uses the following algorithm:

1. unpack row_data into table->record[0],
2. try to insert the record,
3. if a duplicate record is found, fetch it into table->record[0],
4. unpack row_data into table->record[0],
5. write table->record[0] into the table,

where row_data is the row as stored in the data area of a rows event. Thus:

a) unpacking of row_data happens at the time when the row is written into the table,
b) when unpacking (in step 4), only columns present in row_data are overwritten - all other columns remain as they were found in the table.

Since all the data needed for the above algorithm is stored inside the Rows_log_event class, the functions which locate and write rows are turned into methods of that class:

  replace_record()     -> Rows_log_event::write_row()
  find_and_fetch_row() -> Rows_log_event::find_row()

Both methods take row data from the event's data buffer - the row being processed is pointed to by m_curr_row. They unpack the data as needed into the table's record buffers record[0] or record[1]. When a row is unpacked, m_curr_row_end is set to point at the next row in the data buffer.

Other changes introduced in this changeset:

- Changed the signature of unpack_row(): errors are no longer reported here and the table's rw_set is no longer set up here. Errors can happen only when setting default values in the prepare_record() function and are detected there.
- In Rows_log_event and derived classes, arguments are no longer passed to the execution primitives (the do_...() member functions); class members are used instead.
- Moved the old row-handling code into log_event_old.cc to be used by the *_rows_log_event_old classes.

Also, a new test rpl_ndb_2other is added, which tests basic replication from a master using ndb tables to a slave storing the same tables using a (possibly) different engine (myisam, innodb). The test is based on the existing tests rpl_ndb_2myisam and rpl_ndb_2innodb. However, those tests don't work for various reasons and are currently disabled (see BUG#19227).

The new test differs from the ones it is based on as follows:

1. A single test covers replication with different storage engines on the slave (myisam, innodb, ndb).
2. The include file extra/rpl_tests/rpl_ndb_2multi_eng.test containing the original tests is replaced by extra/rpl_tests/rpl_ndb_2multi_basic.test, which doesn't contain tests using partitioned tables, as these don't work currently. Instead, it tests replication to a slave which has more or fewer columns than the master.
3. The include file include/rpl_multi_engine3.inc is replaced with include/rpl_multi_engine2.inc. The latter differs by performing slightly different operations (updating more than one row in the table) and clearing the table with a "TRUNCATE TABLE" statement instead of "DELETE FROM", as replication of "DELETE" doesn't work well in this setting.
4. The slave must use the option --log-slave-updates=0, as otherwise execution of replication events generated by ndb fails if a table uses a different storage engine on the slave (see BUG#29569).
19 years ago
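The key behaviour of the fix - columns absent from the replicated row keep the slave's current values - can be modelled outside the server. The following is a minimal self-contained C++ sketch, not the actual server code: RowEvent, Record and unpack_into() are illustrative stand-ins for the event data area and table->record[0], chosen only to show the merge semantics of step 4 above.

    // Model of partial-row application: columns not present in the
    // replicated row keep the values already in the slave's record.
    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins: a rows-event row carries, per column,
    // either a value or "not present"; a record is the table buffer.
    using RowEvent = std::vector<std::optional<std::string>>;
    using Record   = std::vector<std::string>;

    // Step 4 of the algorithm: overwrite only columns the event carries.
    void unpack_into(const RowEvent &event, Record &record)
    {
        for (std::size_t i = 0; i < event.size(); ++i)
            if (event[i])                // column present in row_data
                record[i] = *event[i];   // overwrite with event value
            // else: keep the value found in the slave's table
    }

    int main()
    {
        Record   slave_row = {"42", "old-name", "old-extra"};  // slave copy
        RowEvent event     = {"42", "new-name", std::nullopt}; // no 3rd col

        unpack_into(event, slave_row);
        for (const auto &col : slave_row)
            std::cout << col << ' ';     // prints: 42 new-name old-extra
        std::cout << '\n';
    }

Before the fix, the missing third column would have been written as NULL instead of retaining "old-extra", which is exactly the master-slave inconsistency described above.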
WL#3817: Simplify string / memory area types and make things more consistent (first part)

The following type conversions were made:
- Changed byte to uchar.
- Changed gptr to uchar*.
- Changed my_string to char*.
- Changed my_size_t to size_t.
- Changed size_s to size_t.

Removed the declarations of byte, gptr, my_string, my_size_t and size_s.

The following function parameter changes were made:
- All string functions in mysys/strings were changed to use size_t instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to void*, as this requires fewer casts in the code and is more in line with how the standard functions work.
- Added an extra length argument to dirname_part() to return the length of the created string.
- Changed (at least) the following functions to take uchar* arguments: db_dump(), my_net_write(), net_write_command(), net_store_data(), DBUG_DUMP(), decimal2bin() and bin2decimal().
- Changed my_compress() and my_uncompress() to use size_t. Changed one argument of my_uncompress() from a pointer to a value, as we only return one value (this makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed the type of the 'frmdata' argument in readfrm(), writefrm(), ha_discover and handler::discover() to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (removed a lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.

Other changes:
- Removed a lot of unneeded casts and added a few new casts required by other changes.
- Added some casts to my_multi_malloc() arguments for safety (as string lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This had to be done explicitly, as the conflict was often hidden by casting the function to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE and unsigned char -> uchar.
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read()/my_write() to size_t. This was needed to properly detect errors, which are returned as (size_t) -1 (see the sketch after this entry).
- Removed some very old VMS code.
- Changed packfrm()/unpackfrm() to not depend on uint size (portability fix).
- Removed Windows-specific code to restore the cursor position, as it causes a slowdown on Windows, and we should not mix read() and pread() calls anyway, as this is not thread safe. Updated the function comment to reflect this. Changed the one function that depended on the original behaviour of my_pwrite() to restore the cursor position itself.
- Added some missing checks of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect that m_size is the number of elements in the array, not a string/memory length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD). Inlined max_row_length_blob() into this function.
- More function comments.
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root + memcpy with strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix).
  - Simpler loops in client-simple.c.
19 years ago
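The error-return convention this changeset standardizes on - size_t results with (size_t) -1 as the failure sentinel, analogous to MY_FILE_ERROR - is easy to get wrong if the result is stored in a narrower type. A small self-contained C++ illustration follows; read_bytes() and FILE_ERROR are generic stand-ins, not the mysys API.

    #include <cstddef>
    #include <cstdio>

    // Failure sentinel, analogous to MY_FILE_ERROR == (size_t) -1.
    static const std::size_t FILE_ERROR = static_cast<std::size_t>(-1);

    // Generic stand-in for a my_read()-style wrapper: returns the number
    // of bytes read, or FILE_ERROR on failure. The result must be kept in
    // a size_t variable: a uint could silently truncate the sentinel.
    std::size_t read_bytes(std::FILE *f, unsigned char *buf, std::size_t count)
    {
        std::size_t n = std::fread(buf, 1, count, f);
        if (n < count && std::ferror(f))
            return FILE_ERROR;
        return n;
    }

    int main()
    {
        unsigned char buf[16];
        std::FILE *f = std::fopen("/etc/hostname", "rb");  // any readable file
        if (!f)
            return 1;
        std::size_t n = read_bytes(f, buf, sizeof buf);    // size_t, not uint
        if (n == FILE_ERROR)
            std::puts("read failed");
        else
            std::printf("read %zu bytes\n", n);
        std::fclose(f);
    }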
19 years ago
WL#3817: Simplify string / memory area types and make things more consistent (first part)

The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char*
- Changed my_size_t to size_t
- Changed size_s to size_t

Removed the declarations of byte, gptr, my_string, my_size_t and size_s.

The following function parameter changes were made:
- All string functions in mysys/strings were changed to use size_t instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to void*, as this requires fewer casts in the code and is more in line with how the standard functions work.
- Added an extra length argument to dirname_part() to return the length of the created string.
- Changed (at least) the following functions to take uchar* as argument: db_dump(), my_net_write(), net_write_command(), net_store_data(), DBUG_DUMP(), decimal2bin() & bin2decimal().
- Changed my_compress() and my_uncompress() to use size_t. Changed one argument of my_uncompress() from a pointer to a value, as we only return one value (this makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed the type of the 'frmdata' argument in readfrm(), writefrm(), ha_discover() and handler::discover() to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (removed a lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.

Other changes:
- Removed a lot of unneeded casts and added a few new casts required by other changes.
- Added some casts to my_multi_malloc() arguments for safety (as string lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This had to be done explicitly, as the conflict was often hidden by casting the function to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*; this allowed us to get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar.
- Included zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables holding the result of my_read()/my_write() to be size_t. This was needed to properly detect errors (which are returned as (size_t) -1).
- Removed some very old VMS code.
- Changed packfrm()/unpackfrm() to not depend on uint size (portability fix).
- Removed Windows-specific code to restore the cursor position, as this causes a slowdown on Windows, and we should not mix read() and pread() calls anyway since that is not thread safe. Updated the function comment to reflect this. Changed the one function that depended on the original behavior of my_pwrite() to restore the cursor position itself.
- Added some missing checks of the return value of malloc().
- Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong, to reflect that m_size is the number of elements in the array, not a string/memory length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD) and inlined max_row_length_blob() into it.
- More function comments.
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declarations (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root + memcpy with strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix).
  - Simpler loops in client-simple.c.
19 years ago
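One detail in the list above deserves a concrete illustration: once lengths become size_t (an unsigned type), errors can no longer be signalled by a negative return value, which is why MY_FILE_ERROR becomes (size_t) -1 and why the variables holding the result must themselves be size_t for the error check to be exact. A minimal standalone sketch of that convention follows; the names MY_FILE_ERROR_DEMO and read_chunk() are hypothetical, not MySQL code — only the convention itself mirrors the changeset.

#include <stdio.h>
#include <stddef.h>

/* Error sentinel in the style of the changeset: all bits set in a size_t. */
#define MY_FILE_ERROR_DEMO ((size_t) -1)

static size_t read_chunk(FILE *f, char *buf, size_t want)
{
  size_t got= fread(buf, 1, want, f);
  if (got != want && ferror(f))
    return MY_FILE_ERROR_DEMO;        /* error sentinel, not a length */
  return got;                         /* a short read at EOF is still a length */
}

int main(void)
{
  char buf[32];
  size_t n= read_chunk(stdin, buf, sizeof(buf));
  if (n == MY_FILE_ERROR_DEMO)        /* exact test: n is size_t as well */
    fprintf(stderr, "read failed\n");
  else
    printf("read %u bytes\n", (unsigned) n);
  return 0;
}

Had n been declared as int or uint, the comparison against (size_t) -1 could silently fail; that is the portability pitfall the changeset's "changed many variables ... to be size_t" item addresses.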
/* Copyright 2007 MySQL AB. All rights reserved.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; version 2 of the License.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */

#ifndef LOG_EVENT_OLD_H
#define LOG_EVENT_OLD_H

/*
  Need to include this file at the proper position of log_event.h
*/

/**
  @file
  @brief This file contains classes handling old formats of row-based
  binlog events.
*/
/*
  Around 2007-10-31, I made these classes completely separated from
  the new classes (before, there was a complex class hierarchy
  involving multiple inheritance; see BUG#31581), by simply copying
  and pasting the entire contents of Rows_log_event into
  Old_rows_log_event and the entire contents of
  {Write|Update|Delete}_rows_log_event into
  {Write|Update|Delete}_rows_log_event_old.  For clarity, I will keep
  the comments marking which code was cut-and-pasted for some time.
  With the classes collapsed into one, there is probably some
  redundancy (maybe some methods can be simplified and/or removed),
  but we keep them this way for now.  /Sven
*/
/**
  @class Old_rows_log_event

  Base class for the three types of row-based events
  {Write|Update|Delete}_rows_log_event_old, with event type codes
  PRE_GA_{WRITE|UPDATE|DELETE}_ROWS_EVENT.  These events are never
  created any more, except when reading a relay log created by an old
  server.
*/
class Old_rows_log_event : public Log_event
{
  /********** BEGIN CUT & PASTE FROM Rows_log_event **********/
public:
  /**
    Enumeration of the errors that can be returned.
  */
  enum enum_error
  {
    ERR_OPEN_FAILURE = -1,         /**< Failure to open table */
    ERR_OK = 0,                    /**< No error */
    ERR_TABLE_LIMIT_EXCEEDED = 1,  /**< No more room for tables */
    ERR_OUT_OF_MEM = 2,            /**< Out of memory */
    ERR_BAD_TABLE_DEF = 3,         /**< Table definition does not match */
    ERR_RBR_TO_SBR = 4             /**< daisy-chaining RBR to SBR not allowed */
  };
  /*
    These definitions allow you to combine the flags into an
    appropriate flag set using the normal bitwise operators.  The
    implicit conversion from an enum-constant to an integer is
    accepted by the compiler, which is then used to set the real set
    of flags.
  */
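  /*
    Illustrative example only (not part of the original header; 'ev' is
    a hypothetical event pointer):

      ev->set_flags(STMT_END_F | NO_FOREIGN_KEY_CHECKS_F);
      if (ev->get_flags(STMT_END_F))
        ...   // this is the last event of its statement
  */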
  enum enum_flag
  {
    /* Last event of a statement */
    STMT_END_F = (1U << 0),

    /* Value of the OPTION_NO_FOREIGN_KEY_CHECKS flag in thd->options */
    NO_FOREIGN_KEY_CHECKS_F = (1U << 1),

    /* Value of the OPTION_RELAXED_UNIQUE_CHECKS flag in thd->options */
    RELAXED_UNIQUE_CHECKS_F = (1U << 2),

    /**
      Indicates that rows in this event are complete, that is contain
      values for all columns of the table.
    */
    COMPLETE_ROWS_F = (1U << 3)
  };

  typedef uint16 flag_set;

  /* Special constants representing sets of flags */
  enum
  {
    RLE_NO_FLAGS = 0U
  };
  virtual ~Old_rows_log_event();

  void set_flags(flag_set flags_arg)   { m_flags |= flags_arg; }
  void clear_flags(flag_set flags_arg) { m_flags &= ~flags_arg; }
  flag_set get_flags(flag_set flags_arg) const { return m_flags & flags_arg; }

#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  virtual void pack_info(Protocol *protocol);
#endif

#ifdef MYSQL_CLIENT
  /* not for direct call, each derived has its own ::print() */
  virtual void print(FILE *file, PRINT_EVENT_INFO *print_event_info)= 0;
#endif

#ifndef MYSQL_CLIENT
  int add_row_data(uchar *data, size_t length)
  {
    return do_add_row_data(data, length);
  }
#endif

  /* Member functions to implement superclass interface */
  virtual int get_data_size();

  MY_BITMAP const *get_cols() const { return &m_cols; }
  size_t get_width() const          { return m_width; }
  ulong get_table_id() const        { return m_table_id; }

#ifndef MYSQL_CLIENT
  virtual bool write_data_header(IO_CACHE *file);
  virtual bool write_data_body(IO_CACHE *file);
  virtual const char *get_db() { return m_table->s->db.str; }
#endif
  /*
    Check that malloc() succeeded in allocating memory for the rows
    buffer and the COLS vector.  Checking that an Update_rows_log_event_old
    is valid is done in the Update_rows_log_event_old::is_valid()
    function.
  */
  virtual bool is_valid() const
  {
    return m_rows_buf && m_cols.bitmap;
  }

  uint m_row_count;   /* The number of rows added to the event */
protected:
  /*
    The constructors are protected since you're supposed to inherit
    this class, not create instances of this class.
  */
#ifndef MYSQL_CLIENT
  Old_rows_log_event(THD*, TABLE*, ulong table_id,
                     MY_BITMAP const *cols, bool is_transactional);
#endif
  Old_rows_log_event(const char *row_data, uint event_len,
                     Log_event_type event_type,
                     const Format_description_log_event *description_event);

#ifdef MYSQL_CLIENT
  void print_helper(FILE *, PRINT_EVENT_INFO *, char const *const name);
#endif

#ifndef MYSQL_CLIENT
  virtual int do_add_row_data(uchar *data, size_t length);
#endif

#ifndef MYSQL_CLIENT
  TABLE *m_table;               /* The table the rows belong to */
#endif
  ulong m_table_id;             /* Table ID */
  MY_BITMAP m_cols;             /* Bitmap denoting columns available */
  ulong m_width;                /* The width of the columns bitmap */
  ulong m_master_reclength;     /* Length of record on master side */

  /* Bit buffers in the same memory as the class */
  uint32 m_bitbuf[128/(sizeof(uint32)*8)];
  uint32 m_bitbuf_ai[128/(sizeof(uint32)*8)];

  uchar *m_rows_buf;            /* The rows in packed format */
  uchar *m_rows_cur;            /* One-after the end of the data */
  uchar *m_rows_end;            /* One-after the end of the allocated space */

  flag_set m_flags;             /* Flags for row-level events */
  /* helper functions */
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  const uchar *m_curr_row;      /* Start of the row being processed */
  const uchar *m_curr_row_end;  /* One-after the end of the current row */
  uchar *m_key;                 /* Buffer to keep key value during searches */

  int find_row(const Relay_log_info *const);
  int write_row(const Relay_log_info *const, const bool);

  // Unpack the current row into m_table->record[0]
  int unpack_current_row(const Relay_log_info *const rli)
  {
    DBUG_ASSERT(m_table);
    ASSERT_OR_RETURN_ERROR(m_curr_row < m_rows_end, HA_ERR_CORRUPT_EVENT);
    int const result= ::unpack_row(rli, m_table, m_width, m_curr_row, &m_cols,
                                   &m_curr_row_end, &m_master_reclength);
    ASSERT_OR_RETURN_ERROR(m_curr_row_end <= m_rows_end, HA_ERR_CORRUPT_EVENT);
    return result;
  }
#endif
private:
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  virtual int do_apply_event(Relay_log_info const *rli);
  virtual int do_update_pos(Relay_log_info *rli);
  virtual enum_skip_reason do_shall_skip(Relay_log_info *rli);

  /*
    Primitive to prepare for a sequence of row executions.

    DESCRIPTION
      Before doing a sequence of do_prepare_row() and do_exec_row()
      calls, this member function should be called to prepare for the
      entire sequence.  Typically, this member function will allocate
      space for any buffers that are needed for the two member
      functions mentioned above.

    RETURN VALUE
      The member function will return 0 if all went OK, or a non-zero
      error code otherwise.
  */
  virtual
  int do_before_row_operations(const Slave_reporting_capability *const log) = 0;

  /*
    Primitive to clean up after a sequence of row executions.

    DESCRIPTION
      After doing a sequence of do_prepare_row() and do_exec_row(),
      this member function should be called to clean up and release
      any allocated buffers.

      The error argument, if non-zero, indicates an error which happened
      during row processing before this function was called.  In this
      case, even if this function is successful, it should return the
      error code given in the argument.
  */
  virtual
  int do_after_row_operations(const Slave_reporting_capability *const log,
                              int error) = 0;
  /*
    Primitive to do the actual execution necessary for a row.

    DESCRIPTION
      The member function will do the actual execution needed to handle a row.
      The row is located at m_curr_row.  When the function returns,
      m_curr_row_end should point at the next row (one byte after the end
      of the current row).

    RETURN VALUE
      0 if execution succeeded, 1 if execution failed.
  */
  virtual int do_exec_row(const Relay_log_info *const rli) = 0;
#endif /* !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION) */
  /********** END OF CUT & PASTE FROM Rows_log_event **********/
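  /*
    Illustrative sketch only (hypothetical driver code, error handling
    elided), showing how the primitives above cooperate when the rows of
    an event are applied:

      error= do_before_row_operations(rli);
      while (!error && m_curr_row < m_rows_end)
      {
        error= do_exec_row(rli);       // handles the row at m_curr_row and
                                       // leaves m_curr_row_end past its end
        m_curr_row= m_curr_row_end;    // advance to the next packed row
      }
      error= do_after_row_operations(rli, error);
  */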
protected:
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  int do_apply_event(Old_rows_log_event*, const Relay_log_info*);

  /*
    Primitive to prepare for a sequence of row executions.

    DESCRIPTION
      Before doing a sequence of do_prepare_row() and do_exec_row()
      calls, this member function should be called to prepare for the
      entire sequence.  Typically, this member function will allocate
      space for any buffers that are needed for the two member
      functions mentioned above.

    RETURN VALUE
      The member function will return 0 if all went OK, or a non-zero
      error code otherwise.
  */
  virtual int do_before_row_operations(TABLE *table) = 0;

  /*
    Primitive to clean up after a sequence of row executions.

    DESCRIPTION
      After doing a sequence of do_prepare_row() and do_exec_row(),
      this member function should be called to clean up and release
      any allocated buffers.
  */
  virtual int do_after_row_operations(TABLE *table, int error) = 0;
  /*
    Primitive to prepare for handling one row in a row-level event.

    DESCRIPTION
      The member function prepares for execution of operations needed for one
      row in a row-level event by reading up data from the buffer containing
      the row.  No specific interpretation of the data is normally done here,
      since SQL thread specific data is not available: that data is made
      available for the do_exec function.

      On return, *row_end points at the start of the next row, or is NULL
      if the preparation failed.  Currently, preparation cannot fail, but
      don't rely on this behavior.

    RETURN VALUE
      Error code, if something went wrong, 0 otherwise.
  */
  virtual int do_prepare_row(THD*, Relay_log_info const*, TABLE*,
                             uchar const *row_start,
                             uchar const **row_end) = 0;
  /*
    Primitive to do the actual execution necessary for a row.

    DESCRIPTION
      The member function will do the actual execution needed to handle a row.

    RETURN VALUE
      0 if execution succeeded, 1 if execution failed.
  */
  virtual int do_exec_row(TABLE *table) = 0;
#endif /* !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION) */
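  /*
    Illustrative sketch only (hypothetical driver code, not the actual
    implementation): the old-style primitives above are invoked in this
    order by the old do_apply_event():

      error= do_before_row_operations(table);
      for (row= row_start; !error && row < rows_end; row= next)
      {
        error= do_prepare_row(thd, rli, table, row, &next);
        if (!error)
          error= do_exec_row(table);
      }
      error= do_after_row_operations(table, error);
  */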
};
/**
  @class Write_rows_log_event_old

  Old class for binlog events that write new rows to a table (event
  type code PRE_GA_WRITE_ROWS_EVENT).  Such events are never produced
  by this version of the server, but they may be read from a relay log
  created by an old server.  New servers create events of class
  Write_rows_log_event (event type code WRITE_ROWS_EVENT) instead.
*/
class Write_rows_log_event_old : public Old_rows_log_event
{
  /********** BEGIN CUT & PASTE FROM Write_rows_log_event **********/
public:
#if !defined(MYSQL_CLIENT)
  Write_rows_log_event_old(THD*, TABLE*, ulong table_id,
                           MY_BITMAP const *cols, bool is_transactional);
#endif
#ifdef HAVE_REPLICATION
  Write_rows_log_event_old(const char *buf, uint event_len,
                           const Format_description_log_event *description_event);
#endif
#if !defined(MYSQL_CLIENT)
  static bool binlog_row_logging_function(THD *thd, TABLE *table,
                                          bool is_transactional,
                                          MY_BITMAP *cols,
                                          uint fields,
                                          const uchar *before_record
                                          __attribute__((unused)),
                                          const uchar *after_record)
  {
    return thd->binlog_write_row(table, is_transactional,
                                 cols, fields, after_record);
  }
#endif

private:
#ifdef MYSQL_CLIENT
  void print(FILE *file, PRINT_EVENT_INFO *print_event_info);
#endif
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  virtual int do_before_row_operations(const Slave_reporting_capability *const);
  virtual int do_after_row_operations(const Slave_reporting_capability *const, int);
  virtual int do_exec_row(const Relay_log_info *const);
#endif
  /********** END OF CUT & PASTE FROM Write_rows_log_event **********/

public:
  enum
  {
    /* Support interface to THD::binlog_prepare_pending_rows_event */
    TYPE_CODE = PRE_GA_WRITE_ROWS_EVENT
  };

private:
  virtual Log_event_type get_type_code() { return (Log_event_type)TYPE_CODE; }

#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  // use old definition of do_apply_event()
  virtual int do_apply_event(const Relay_log_info *rli)
  { return Old_rows_log_event::do_apply_event(this, rli); }

  // primitives for old version of do_apply_event()
  virtual int do_before_row_operations(TABLE *table);
  virtual int do_after_row_operations(TABLE *table, int error);
  virtual int do_prepare_row(THD*, Relay_log_info const*, TABLE*,
                             uchar const *row_start, uchar const **row_end);
  virtual int do_exec_row(TABLE *table);
#endif
};
/**
  @class Update_rows_log_event_old

  Old class for binlog events that modify existing rows to a table
  (event type code PRE_GA_UPDATE_ROWS_EVENT).  Such events are never
  produced by this version of the server, but they may be read from a
  relay log created by an old server.  New servers create events of
  class Update_rows_log_event (event type code UPDATE_ROWS_EVENT)
  instead.
*/
class Update_rows_log_event_old : public Old_rows_log_event
{
  /********** BEGIN CUT & PASTE FROM Update_rows_log_event **********/
public:
#ifndef MYSQL_CLIENT
  Update_rows_log_event_old(THD*, TABLE*, ulong table_id,
                            MY_BITMAP const *cols,
                            bool is_transactional);
#endif
#ifdef HAVE_REPLICATION
  Update_rows_log_event_old(const char *buf, uint event_len,
                            const Format_description_log_event *description_event);
#endif
#if !defined(MYSQL_CLIENT)
  static bool binlog_row_logging_function(THD *thd, TABLE *table,
                                          bool is_transactional,
                                          MY_BITMAP *cols,
                                          uint fields,
                                          const uchar *before_record,
                                          const uchar *after_record)
  {
    return thd->binlog_update_row(table, is_transactional,
                                  cols, fields, before_record, after_record);
  }
#endif

protected:
#ifdef MYSQL_CLIENT
  void print(FILE *file, PRINT_EVENT_INFO *print_event_info);
#endif
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  virtual int do_before_row_operations(const Slave_reporting_capability *const);
  virtual int do_after_row_operations(const Slave_reporting_capability *const, int);
  virtual int do_exec_row(const Relay_log_info *const);
#endif /* !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION) */
  /********** END OF CUT & PASTE FROM Update_rows_log_event **********/

  uchar *m_after_image, *m_memory;

public:
  enum
  {
    /* Support interface to THD::binlog_prepare_pending_rows_event */
    TYPE_CODE = PRE_GA_UPDATE_ROWS_EVENT
  };

private:
  virtual Log_event_type get_type_code() { return (Log_event_type)TYPE_CODE; }

#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  // use old definition of do_apply_event()
  virtual int do_apply_event(const Relay_log_info *rli)
  { return Old_rows_log_event::do_apply_event(this, rli); }

  // primitives for old version of do_apply_event()
  virtual int do_before_row_operations(TABLE *table);
  virtual int do_after_row_operations(TABLE *table, int error);
  virtual int do_prepare_row(THD*, Relay_log_info const*, TABLE*,
                             uchar const *row_start, uchar const **row_end);
  virtual int do_exec_row(TABLE *table);
#endif /* !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION) */
};
/**
  @class Delete_rows_log_event_old

  Old class for binlog events that delete existing rows from a table
  (event type code PRE_GA_DELETE_ROWS_EVENT).  Such events are never
  produced by this version of the server, but they may be read from a
  relay log created by an old server.  New servers create events of
  class Delete_rows_log_event (event type code DELETE_ROWS_EVENT)
  instead.
*/
class Delete_rows_log_event_old : public Old_rows_log_event
{
  /********** BEGIN CUT & PASTE FROM Delete_rows_log_event **********/
public:
#ifndef MYSQL_CLIENT
  Delete_rows_log_event_old(THD*, TABLE*, ulong,
                            MY_BITMAP const *cols, bool is_transactional);
#endif
#ifdef HAVE_REPLICATION
  Delete_rows_log_event_old(const char *buf, uint event_len,
                            const Format_description_log_event *description_event);
#endif
#if !defined(MYSQL_CLIENT)
  static bool binlog_row_logging_function(THD *thd, TABLE *table,
                                          bool is_transactional,
                                          MY_BITMAP *cols,
                                          uint fields,
                                          const uchar *before_record,
                                          const uchar *after_record
                                          __attribute__((unused)))
  {
    return thd->binlog_delete_row(table, is_transactional,
                                  cols, fields, before_record);
  }
#endif

protected:
#ifdef MYSQL_CLIENT
  void print(FILE *file, PRINT_EVENT_INFO *print_event_info);
#endif
#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  virtual int do_before_row_operations(const Slave_reporting_capability *const);
  virtual int do_after_row_operations(const Slave_reporting_capability *const, int);
  virtual int do_exec_row(const Relay_log_info *const);
#endif
  /********** END CUT & PASTE FROM Delete_rows_log_event **********/

  uchar *m_after_image, *m_memory;

public:
  enum
  {
    /* Support interface to THD::binlog_prepare_pending_rows_event */
    TYPE_CODE = PRE_GA_DELETE_ROWS_EVENT
  };

private:
  virtual Log_event_type get_type_code() { return (Log_event_type)TYPE_CODE; }

#if !defined(MYSQL_CLIENT) && defined(HAVE_REPLICATION)
  // use old definition of do_apply_event()
  virtual int do_apply_event(const Relay_log_info *rli)
  { return Old_rows_log_event::do_apply_event(this, rli); }

  // primitives for old version of do_apply_event()
  virtual int do_before_row_operations(TABLE *table);
  virtual int do_after_row_operations(TABLE *table, int error);
  virtual int do_prepare_row(THD*, Relay_log_info const*, TABLE*,
                             uchar const *row_start, uchar const **row_end);
  virtual int do_exec_row(TABLE *table);
#endif
};

#endif /* LOG_EVENT_OLD_H */