MDEV-13485 MTR tests fail massively with --innodb-sync-debug

The parameter --innodb-sync-debug, which is disabled by default, aims to find potential deadlocks in InnoDB. When the parameter is enabled, lots of tests failed. Most of these failures were due to bogus diagnostics. But, as part of this fix, we are also fixing a bug in error handling code and removing dead code, and fixing cases where an uninitialized mutex was being locked and unlocked.

dict_create_foreign_constraints_low(): Remove an extraneous mutex_exit() call that could cause corruption in an error handling path. Also, do not unnecessarily acquire dict_foreign_err_mutex. Its only purpose is to control concurrent access to dict_foreign_err_file.

row_ins_foreign_trx_print(): Replace a redundant condition with a debug assertion.

srv_dict_tmpfile, srv_dict_tmpfile_mutex: Remove. The temporary file is never being written to or read from.

log_free_check(): Allow SYNC_FTS_CACHE (fts_cache_t::lock) to be held.

ha_innobase::inplace_alter_table(), row_merge_insert_index_tuples(): Assert that no unexpected latches are being held.

sync_latch_meta_init(): Properly initialize dict_operation_lock_key at SYNC_DICT_OPERATION. dict_sys->mutex is SYNC_DICT, and the now-removed SRV_DICT_TMPFILE was wrongly registered at SYNC_DICT_OPERATION.

buf_block_init(): Correctly register buf_block_t::debug_latch. It was previously misleadingly reported as LATCH_ID_DICT_FOREIGN_ERR.

latch_level_t: Correct the relative latching order of SYNC_IBUF_PESS_INSERT_MUTEX, SYNC_INDEX_TREE and SYNC_FILE_FORMAT_TAG, SYNC_DICT_OPERATION to avoid bogus failures.

row_drop_table_for_mysql(): Avoid accessing btr_defragment_mutex if the defragmentation thread has not been started. This is the case during fts_drop_orphaned_tables() in recv_recovery_rollback_active().

fil_space_destroy_crypt_data(): Avoid acquiring fil_crypt_threads_mutex when it is uninitialized. We may have created crypt_data before the mutex was created, and the mutex creation would be skipped if InnoDB startup failed or --innodb-read-only was specified.
  1. /*****************************************************************************
  2. Copyright (c) 2014, 2016, Oracle and/or its affiliates. All Rights Reserved.
  3. Copyright (c) 2017, 2018, MariaDB Corporation.
  4. Portions of this file contain modifications contributed and copyrighted by
  5. Google, Inc. Those modifications are gratefully acknowledged and are described
  6. briefly in the InnoDB documentation. The contributions by Google are
  7. incorporated with their permission, and subject to the conditions contained in
  8. the file COPYING.Google.
  9. This program is free software; you can redistribute it and/or modify it under
  10. the terms of the GNU General Public License as published by the Free Software
  11. Foundation; version 2 of the License.
  12. This program is distributed in the hope that it will be useful, but WITHOUT
  13. ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
  14. FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
  15. You should have received a copy of the GNU General Public License along with
  16. this program; if not, write to the Free Software Foundation, Inc.,
  17. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA
  18. *****************************************************************************/
  19. /**************************************************//**
  20. @file sync/sync0debug.cc
  21. Debug checks for latches.
  22. Created 2012-08-21 Sunny Bains
  23. *******************************************************/
  24. #include "sync0sync.h"
  25. #include "sync0debug.h"
  26. #include "srv0start.h"
  27. #include <vector>
  28. #include <string>
  29. #include <algorithm>
  30. #include <iostream>
  31. #ifdef UNIV_DEBUG
  32. my_bool srv_sync_debug;
  33. /** The global mutex which protects debug info lists of all rw-locks.
  34. To modify the debug info list of an rw-lock, this mutex has to be
  35. acquired in addition to the mutex protecting the lock. */
  36. static SysMutex rw_lock_debug_mutex;
  37. /** The latch held by a thread */
  38. struct Latched {
  39. /** Constructor */
  40. Latched() : m_latch(), m_level(SYNC_UNKNOWN) { }
  41. /** Constructor
  42. @param[in] latch Latch instance
  43. @param[in] level Level of latch held */
  44. Latched(const latch_t* latch,
  45. latch_level_t level)
  46. :
  47. m_latch(latch),
  48. m_level(level)
  49. {
  50. /* No op */
  51. }
  52. /** @return the latch level */
  53. latch_level_t get_level() const
  54. {
  55. return(m_level);
  56. }
  57. /** Check if the rhs latch and level match
  58. @param[in] rhs instance to compare with
  59. @return true on match */
  60. bool operator==(const Latched& rhs) const
  61. {
  62. return(m_latch == rhs.m_latch && m_level == rhs.m_level);
  63. }
  64. /** The latch instance */
  65. const latch_t* m_latch;
  66. /** The latch level. For buffer blocks we can pass a separate latch
  67. level to check against, see buf_block_dbg_add_level() */
  68. latch_level_t m_level;
  69. };
  70. /** Thread specific latches. This is ordered on level in descending order. */
  71. typedef std::vector<Latched, ut_allocator<Latched> > Latches;
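/* For example, a thread that acquires dict_operation_lock (registered at
level SYNC_DICT_OPERATION below) and then dict_sys->mutex (SYNC_DICT, a
lower level) ends up with a Latches vector of
{ {dict_operation_lock, SYNC_DICT_OPERATION}, {dict_sys->mutex, SYNC_DICT} }:
the most recently acquired latch is at the back and has the lowest
(or an equal) level. */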
  72. /** The deadlock detector. */
  73. struct LatchDebug {
  74. /** Debug mutex for control structures, should not be tracked
  75. by this module. */
  76. typedef OSMutex Mutex;
  77. /** Comparator for the ThreadMap. */
  78. struct os_thread_id_less
  79. : public std::binary_function<
  80. os_thread_id_t,
  81. os_thread_id_t,
  82. bool>
  83. {
  84. /** @return true if lhs < rhs */
  85. bool operator()(
  86. const os_thread_id_t& lhs,
  87. const os_thread_id_t& rhs) const
  88. UNIV_NOTHROW
  89. {
  90. return(os_thread_pf(lhs) < os_thread_pf(rhs));
  91. }
  92. };
  93. /** For tracking a thread's latches. */
  94. typedef std::map<
  95. os_thread_id_t,
  96. Latches*,
  97. os_thread_id_less,
  98. ut_allocator<std::pair<const os_thread_id_t, Latches*> > >
  99. ThreadMap;
  100. /** Constructor */
  101. LatchDebug()
  102. UNIV_NOTHROW;
  103. /** Destructor */
  104. ~LatchDebug()
  105. UNIV_NOTHROW
  106. {
  107. m_mutex.destroy();
  108. }
  109. /** Create a new instance if one doesn't exist else return
  110. the existing one.
  111. @param[in] add add an empty entry if one is not
  112. found (default no)
  113. @return pointer to a thread's acquired latches. */
  114. Latches* thread_latches(bool add = false)
  115. UNIV_NOTHROW;
  116. /** Check that all the latches already owned by a thread have a higher
  117. level than limit.
  118. @param[in] latches the thread's existing (acquired) latches
  119. @param[in] limit to check against
  120. @return latched if there is one with a level <= limit. */
  121. const Latched* less(
  122. const Latches* latches,
  123. latch_level_t limit) const
  124. UNIV_NOTHROW;
  125. /** Checks if the level value exists in the thread's acquired latches.
  126. @param[in] latches the thread's existing (acquired) latches
  127. @param[in] level to lookup
  128. @return latch if found or 0 */
  129. const latch_t* find(
  130. const Latches* latches,
  131. latch_level_t level) const
  132. UNIV_NOTHROW;
  133. /**
  134. Checks if the level value exists in the thread's acquired latches.
  135. @param[in] level to lookup
  136. @return latch if found or 0 */
  137. const latch_t* find(latch_level_t level)
  138. UNIV_NOTHROW;
  139. /** Report error and abort.
  140. @param[in] latches thread's existing latches
  141. @param[in] latched The existing latch causing the
  142. invariant to fail
  143. @param[in] level The new level request that breaks
  144. the order */
  145. void crash(
  146. const Latches* latches,
  147. const Latched* latched,
  148. latch_level_t level) const
  149. UNIV_NOTHROW;
  150. /** Do a basic ordering check.
  151. @param[in] latches thread's existing latches
  152. @param[in] requested_level Level requested by latch
  153. @param[in] level declared ulint so that we can
  154. do level - 1. The level of the
  155. latch that the thread is trying
  156. to acquire
  157. @return true if passes, else crash with error message. */
  158. bool basic_check(
  159. const Latches* latches,
  160. latch_level_t requested_level,
  161. ulint level) const
  162. UNIV_NOTHROW;
  163. /** Adds a latch and its level in the thread level array. Allocates
  164. the memory for the array if called for the first time for this
  165. OS thread. Makes the checks against other latch levels stored
  166. in the array for this thread.
  167. @param[in] latch latch that the thread wants to acquire.
  168. @param[in] level latch level to check against */
  169. void lock_validate(
  170. const latch_t* latch,
  171. latch_level_t level)
  172. UNIV_NOTHROW
  173. {
  174. /* Ignore diagnostic latches, starting with '.' */
  175. if (*latch->get_name() != '.'
  176. && latch->get_level() != SYNC_LEVEL_VARYING) {
  177. ut_ad(level != SYNC_LEVEL_VARYING);
  178. Latches* latches = check_order(latch, level);
  179. ut_a(latches->empty()
  180. || level == SYNC_LEVEL_VARYING
  181. || level == SYNC_NO_ORDER_CHECK
  182. || latches->back().get_level()
  183. == SYNC_NO_ORDER_CHECK
  184. || latches->back().m_latch->get_level()
  185. == SYNC_LEVEL_VARYING
  186. || latches->back().get_level() >= level);
  187. }
  188. }
  189. /** Adds a latch and its level in the thread level array. Allocates
  190. the memory for the array if called for the first time for this
  191. OS thread. Makes the checks against other latch levels stored
  192. in the array for this thread.
  193. @param[in] latch latch that the thread wants to acquire.
  194. @param[in] level latch level to check against */
  195. void lock_granted(
  196. const latch_t* latch,
  197. latch_level_t level)
  198. UNIV_NOTHROW
  199. {
  200. /* Ignore diagnostic latches, starting with '.' */
  201. if (*latch->get_name() != '.'
  202. && latch->get_level() != SYNC_LEVEL_VARYING) {
  203. Latches* latches = thread_latches(true);
  204. latches->push_back(Latched(latch, level));
  205. }
  206. }
  207. /** For recursive X rw-locks.
  208. @param[in] latch The RW-Lock to relock */
  209. void relock(const latch_t* latch)
  210. UNIV_NOTHROW
  211. {
  212. ut_a(latch->m_rw_lock);
  213. latch_level_t level = latch->get_level();
  214. /* Ignore diagnostic latches, starting with '.' */
  215. if (*latch->get_name() != '.'
  216. && latch->get_level() != SYNC_LEVEL_VARYING) {
  217. Latches* latches = thread_latches(true);
  218. Latches::iterator it = std::find(
  219. latches->begin(), latches->end(),
  220. Latched(latch, level));
  221. ut_a(latches->empty()
  222. || level == SYNC_LEVEL_VARYING
  223. || level == SYNC_NO_ORDER_CHECK
  224. || latches->back().m_latch->get_level()
  225. == SYNC_LEVEL_VARYING
  226. || latches->back().m_latch->get_level()
  227. == SYNC_NO_ORDER_CHECK
  228. || latches->back().get_level() >= level
  229. || it != latches->end());
  230. if (it == latches->end()) {
  231. latches->push_back(Latched(latch, level));
  232. } else {
  233. latches->insert(it, Latched(latch, level));
  234. }
  235. }
  236. }
  237. /** Iterate over a thread's latches.
  238. @param[in] functor The callback
  239. @return true if the functor returns true. */
  240. bool for_each(const sync_check_functor_t& functor)
  241. UNIV_NOTHROW
  242. {
  243. if (const Latches* latches = thread_latches()) {
  244. Latches::const_iterator end = latches->end();
  245. for (Latches::const_iterator it = latches->begin();
  246. it != end; ++it) {
  247. if (functor(it->m_level)) {
  248. return(true);
  249. }
  250. }
  251. }
  252. return(false);
  253. }
  254. /** Removes a latch from the thread level array if it is found there.
  255. It is not an error if the latch is not found, as we presently are not
  256. able to determine the level for every latch reservation the program
  257. does.
  258. @param[in] latch The latch that was released */
  259. void unlock(const latch_t* latch) UNIV_NOTHROW;
  260. /** Get the level name
  261. @param[in] level The level ID to lookup
  262. @return level name */
  263. const std::string& get_level_name(latch_level_t level) const
  264. UNIV_NOTHROW
  265. {
  266. Levels::const_iterator it = m_levels.find(level);
  267. ut_ad(it != m_levels.end());
  268. return(it->second);
  269. }
  270. /** Initialise the debug data structures */
  271. static void init()
  272. UNIV_NOTHROW;
  273. /** Shutdown the latch debug checking */
  274. static void shutdown()
  275. UNIV_NOTHROW;
  276. /** @return the singleton instance */
  277. static LatchDebug* instance()
  278. UNIV_NOTHROW
  279. {
  280. return(s_instance);
  281. }
  282. /** Create the singleton instance */
  283. static void create_instance()
  284. UNIV_NOTHROW
  285. {
  286. ut_ad(s_instance == NULL);
  287. s_instance = UT_NEW_NOKEY(LatchDebug());
  288. }
  289. private:
  290. /** Disable copying */
  291. LatchDebug(const LatchDebug&);
  292. LatchDebug& operator=(const LatchDebug&);
  293. /** Adds a latch and its level in the thread level array. Allocates
  294. the memory for the array if called first time for this OS thread.
  295. Makes the checks against other latch levels stored in the array
  296. for this thread.
  297. @param[in] latch pointer to a mutex or an rw-lock
  298. @param[in] level level in the latching order
  299. @return the thread's latches */
  300. Latches* check_order(
  301. const latch_t* latch,
  302. latch_level_t level)
  303. UNIV_NOTHROW;
  304. /** Print the latches acquired by a thread
  305. @param[in] latches Latches acquired by a thread */
  306. void print_latches(const Latches* latches) const
  307. UNIV_NOTHROW;
  308. /** Special handling for the RTR mutexes. We need to add proper
  309. levels for them if possible.
  310. @param[in] latch Latch to check
  311. @return true if it is an _RTR_ mutex */
  312. bool is_rtr_mutex(const latch_t* latch) const
  313. UNIV_NOTHROW
  314. {
  315. return(latch->get_id() == LATCH_ID_RTR_ACTIVE_MUTEX
  316. || latch->get_id() == LATCH_ID_RTR_PATH_MUTEX
  317. || latch->get_id() == LATCH_ID_RTR_MATCH_MUTEX
  318. || latch->get_id() == LATCH_ID_RTR_SSN_MUTEX);
  319. }
  320. private:
  321. /** Comparator for the Levels . */
  322. struct latch_level_less
  323. : public std::binary_function<
  324. latch_level_t,
  325. latch_level_t,
  326. bool>
  327. {
  328. /** @return true if lhs < rhs */
  329. bool operator()(
  330. const latch_level_t& lhs,
  331. const latch_level_t& rhs) const
  332. UNIV_NOTHROW
  333. {
  334. return(lhs < rhs);
  335. }
  336. };
  337. typedef std::map<
  338. latch_level_t,
  339. std::string,
  340. latch_level_less,
  341. ut_allocator<std::pair<const latch_level_t, std::string> > >
  342. Levels;
  343. /** Mutex protecting the deadlock detector data structures. */
  344. Mutex m_mutex;
  345. /** Thread specific data. Protected by m_mutex. */
  346. ThreadMap m_threads;
  347. /** Mapping from latch level to its string representation. */
  348. Levels m_levels;
  349. /** The singleton instance. Must be created in single threaded mode. */
  350. static LatchDebug* s_instance;
  351. public:
  352. /** For checking whether this module has been initialised or not. */
  353. static bool s_initialized;
  354. };
  355. /** The latch order checking infra-structure */
  356. LatchDebug* LatchDebug::s_instance = NULL;
  357. bool LatchDebug::s_initialized = false;
  358. #define LEVEL_MAP_INSERT(T) \
  359. do { \
  360. std::pair<Levels::iterator, bool> result = \
  361. m_levels.insert(Levels::value_type(T, #T)); \
  362. ut_ad(result.second); \
  363. } while(0)
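/* For example, LEVEL_MAP_INSERT(SYNC_DICT) inserts the pair
(SYNC_DICT, "SYNC_DICT") into m_levels and asserts that this level had not
already been registered; the do { ... } while(0) wrapper merely makes the
macro expand to a single statement. */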
  364. /** Setup the mapping from level ID to level name mapping */
  365. LatchDebug::LatchDebug()
  366. {
  367. m_mutex.init();
  368. LEVEL_MAP_INSERT(SYNC_UNKNOWN);
  369. LEVEL_MAP_INSERT(SYNC_MUTEX);
  370. LEVEL_MAP_INSERT(RW_LOCK_SX);
  371. LEVEL_MAP_INSERT(RW_LOCK_X_WAIT);
  372. LEVEL_MAP_INSERT(RW_LOCK_S);
  373. LEVEL_MAP_INSERT(RW_LOCK_X);
  374. LEVEL_MAP_INSERT(RW_LOCK_NOT_LOCKED);
  375. LEVEL_MAP_INSERT(SYNC_MONITOR_MUTEX);
  376. LEVEL_MAP_INSERT(SYNC_ANY_LATCH);
  377. LEVEL_MAP_INSERT(SYNC_DOUBLEWRITE);
  378. LEVEL_MAP_INSERT(SYNC_BUF_FLUSH_LIST);
  379. LEVEL_MAP_INSERT(SYNC_BUF_BLOCK);
  380. LEVEL_MAP_INSERT(SYNC_BUF_PAGE_HASH);
  381. LEVEL_MAP_INSERT(SYNC_BUF_POOL);
  382. LEVEL_MAP_INSERT(SYNC_POOL);
  383. LEVEL_MAP_INSERT(SYNC_POOL_MANAGER);
  384. LEVEL_MAP_INSERT(SYNC_SEARCH_SYS);
  385. LEVEL_MAP_INSERT(SYNC_WORK_QUEUE);
  386. LEVEL_MAP_INSERT(SYNC_FTS_TOKENIZE);
  387. LEVEL_MAP_INSERT(SYNC_FTS_OPTIMIZE);
  388. LEVEL_MAP_INSERT(SYNC_FTS_BG_THREADS);
  389. LEVEL_MAP_INSERT(SYNC_FTS_CACHE_INIT);
  390. LEVEL_MAP_INSERT(SYNC_RECV);
  391. LEVEL_MAP_INSERT(SYNC_LOG_FLUSH_ORDER);
  392. LEVEL_MAP_INSERT(SYNC_LOG);
  393. LEVEL_MAP_INSERT(SYNC_LOG_WRITE);
  394. LEVEL_MAP_INSERT(SYNC_PAGE_CLEANER);
  395. LEVEL_MAP_INSERT(SYNC_PURGE_QUEUE);
  396. LEVEL_MAP_INSERT(SYNC_TRX_SYS_HEADER);
  397. LEVEL_MAP_INSERT(SYNC_REC_LOCK);
  398. LEVEL_MAP_INSERT(SYNC_THREADS);
  399. LEVEL_MAP_INSERT(SYNC_TRX);
  400. LEVEL_MAP_INSERT(SYNC_TRX_SYS);
  401. LEVEL_MAP_INSERT(SYNC_LOCK_SYS);
  402. LEVEL_MAP_INSERT(SYNC_LOCK_WAIT_SYS);
  403. LEVEL_MAP_INSERT(SYNC_INDEX_ONLINE_LOG);
  404. LEVEL_MAP_INSERT(SYNC_IBUF_BITMAP);
  405. LEVEL_MAP_INSERT(SYNC_IBUF_BITMAP_MUTEX);
  406. LEVEL_MAP_INSERT(SYNC_IBUF_TREE_NODE);
  407. LEVEL_MAP_INSERT(SYNC_IBUF_TREE_NODE_NEW);
  408. LEVEL_MAP_INSERT(SYNC_IBUF_INDEX_TREE);
  409. LEVEL_MAP_INSERT(SYNC_IBUF_MUTEX);
  410. LEVEL_MAP_INSERT(SYNC_FSP_PAGE);
  411. LEVEL_MAP_INSERT(SYNC_FSP);
  412. LEVEL_MAP_INSERT(SYNC_EXTERN_STORAGE);
  413. LEVEL_MAP_INSERT(SYNC_TRX_UNDO_PAGE);
  414. LEVEL_MAP_INSERT(SYNC_RSEG_HEADER);
  415. LEVEL_MAP_INSERT(SYNC_RSEG_HEADER_NEW);
  416. LEVEL_MAP_INSERT(SYNC_NOREDO_RSEG);
  417. LEVEL_MAP_INSERT(SYNC_REDO_RSEG);
  418. LEVEL_MAP_INSERT(SYNC_TRX_UNDO);
  419. LEVEL_MAP_INSERT(SYNC_PURGE_LATCH);
  420. LEVEL_MAP_INSERT(SYNC_TREE_NODE);
  421. LEVEL_MAP_INSERT(SYNC_TREE_NODE_FROM_HASH);
  422. LEVEL_MAP_INSERT(SYNC_TREE_NODE_NEW);
  423. LEVEL_MAP_INSERT(SYNC_INDEX_TREE);
  424. LEVEL_MAP_INSERT(SYNC_IBUF_PESS_INSERT_MUTEX);
  425. LEVEL_MAP_INSERT(SYNC_IBUF_HEADER);
  426. LEVEL_MAP_INSERT(SYNC_DICT_HEADER);
  427. LEVEL_MAP_INSERT(SYNC_STATS_AUTO_RECALC);
  428. LEVEL_MAP_INSERT(SYNC_DICT_AUTOINC_MUTEX);
  429. LEVEL_MAP_INSERT(SYNC_DICT);
  430. LEVEL_MAP_INSERT(SYNC_FTS_CACHE);
  431. LEVEL_MAP_INSERT(SYNC_DICT_OPERATION);
  432. LEVEL_MAP_INSERT(SYNC_FILE_FORMAT_TAG);
  433. LEVEL_MAP_INSERT(SYNC_TRX_I_S_LAST_READ);
  434. LEVEL_MAP_INSERT(SYNC_TRX_I_S_RWLOCK);
  435. LEVEL_MAP_INSERT(SYNC_RECV_WRITER);
  436. LEVEL_MAP_INSERT(SYNC_LEVEL_VARYING);
  437. LEVEL_MAP_INSERT(SYNC_NO_ORDER_CHECK);
  438. /* Enum count starts from 0 */
  439. ut_ad(m_levels.size() == SYNC_LEVEL_MAX + 1);
  440. }
  441. /** Print the latches acquired by a thread
  442. @param[in] latches Latches acquired by a thread */
  443. void
  444. LatchDebug::print_latches(const Latches* latches) const
  445. UNIV_NOTHROW
  446. {
  447. ib::error() << "Latches already owned by this thread: ";
  448. Latches::const_iterator end = latches->end();
  449. for (Latches::const_iterator it = latches->begin();
  450. it != end;
  451. ++it) {
  452. ib::error()
  453. << sync_latch_get_name(it->m_latch->get_id())
  454. << " -> "
  455. << it->m_level << " "
  456. << "(" << get_level_name(it->m_level) << ")";
  457. }
  458. }
  459. /** Report error and abort
  460. @param[in] latches thread's existing latches
  461. @param[in] latched The existing latch causing the invariant to fail
  462. @param[in] level The new level request that breaks the order */
  463. void
  464. LatchDebug::crash(
  465. const Latches* latches,
  466. const Latched* latched,
  467. latch_level_t level) const
  468. UNIV_NOTHROW
  469. {
  470. const latch_t* latch = latched->m_latch;
  471. const std::string& in_level_name = get_level_name(level);
  472. const std::string& latch_level_name =
  473. get_level_name(latched->m_level);
  474. ib::error()
  475. << "Thread " << os_thread_pf(os_thread_get_curr_id())
  476. << " already owns a latch "
  477. << sync_latch_get_name(latch->m_id) << " at level"
  478. << " " << latched->m_level << " (" << latch_level_name
  479. << " ), which is at a lower/same level than the"
  480. << " requested latch: "
  481. << level << " (" << in_level_name << "). "
  482. << latch->to_string();
  483. print_latches(latches);
  484. ut_error;
  485. }
  486. /** Check that all the latches already owned by a thread have a higher
  487. level than limit.
  488. @param[in] latches the thread's existing (acquired) latches
  489. @param[in] limit to check against
  490. @return latched info if there is one with a level <= limit. */
  491. const Latched*
  492. LatchDebug::less(
  493. const Latches* latches,
  494. latch_level_t limit) const
  495. UNIV_NOTHROW
  496. {
  497. Latches::const_iterator end = latches->end();
  498. for (Latches::const_iterator it = latches->begin(); it != end; ++it) {
  499. if (it->m_level <= limit) {
  500. return(&(*it));
  501. }
  502. }
  503. return(NULL);
  504. }
  505. /** Do a basic ordering check.
  506. @param[in] latches thread's existing latches
  507. @param[in] requested_level Level requested by latch
  508. @param[in] in_level declared ulint so that we can do level - 1.
  509. The level of the latch that the thread is
  510. trying to acquire
  511. @return true if passes, else crash with error message. */
  512. bool
  513. LatchDebug::basic_check(
  514. const Latches* latches,
  515. latch_level_t requested_level,
  516. ulint in_level) const
  517. UNIV_NOTHROW
  518. {
  519. latch_level_t level = latch_level_t(in_level);
  520. ut_ad(level < SYNC_LEVEL_MAX);
  521. const Latched* latched = less(latches, level);
  522. if (latched != NULL) {
  523. crash(latches, latched, requested_level);
  524. return(false);
  525. }
  526. return(true);
  527. }
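/* For example, check_order() below calls basic_check(latches, level,
level - 1) for SYNC_BUF_POOL: several buf_pool mutexes exist at the same
level, so holding another SYNC_BUF_POOL latch is allowed, but holding any
latch at a strictly lower level makes less() return it and crash() report
the order violation. */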
  528. /** Create a new instance if one doesn't exist else return the existing one.
  529. @param[in] add add an empty entry if one is not found
  530. (default no)
  531. @return pointer to a thread's acquired latches. */
  532. Latches*
  533. LatchDebug::thread_latches(bool add)
  534. UNIV_NOTHROW
  535. {
  536. m_mutex.enter();
  537. os_thread_id_t thread_id = os_thread_get_curr_id();
  538. ThreadMap::iterator lb = m_threads.lower_bound(thread_id);
  539. if (lb != m_threads.end()
  540. && !(m_threads.key_comp()(thread_id, lb->first))) {
  541. Latches* latches = lb->second;
  542. m_mutex.exit();
  543. return(latches);
  544. } else if (!add) {
  545. m_mutex.exit();
  546. return(NULL);
  547. } else {
  548. typedef ThreadMap::value_type value_type;
  549. Latches* latches = UT_NEW_NOKEY(Latches());
  550. ut_a(latches != NULL);
  551. latches->reserve(32);
  552. m_threads.insert(lb, value_type(thread_id, latches));
  553. m_mutex.exit();
  554. return(latches);
  555. }
  556. }
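/* Note: the lower_bound() lookup above serves two purposes: if an entry
for thread_id already exists it is returned directly, and otherwise the
iterator is reused as an insertion hint for m_threads.insert(), avoiding
a second tree traversal. */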
  557. /** Checks if the level value exists in the thread's acquired latches.
  558. @param[in] latches the thread's existing (acquired) latches
  559. @param[in] level to lookup
  560. @return latch if found or 0 */
  561. const latch_t*
  562. LatchDebug::find(
  563. const Latches* latches,
  564. latch_level_t level) const UNIV_NOTHROW
  565. {
  566. Latches::const_iterator end = latches->end();
  567. for (Latches::const_iterator it = latches->begin(); it != end; ++it) {
  568. if (it->m_level == level) {
  569. return(it->m_latch);
  570. }
  571. }
  572. return(0);
  573. }
  574. /** Checks if the level value exists in the thread's acquired latches.
  575. @param[in] level The level to lookup
  576. @return latch if found or NULL */
  577. const latch_t*
  578. LatchDebug::find(latch_level_t level)
  579. UNIV_NOTHROW
  580. {
  581. return(find(thread_latches(), level));
  582. }
  583. /**
  584. Adds a latch and its level in the thread level array. Allocates the memory
  585. for the array if called first time for this OS thread. Makes the checks
  586. against other latch levels stored in the array for this thread.
  587. @param[in] latch pointer to a mutex or an rw-lock
  588. @param[in] level level in the latching order
  589. @return the thread's latches */
  590. Latches*
  591. LatchDebug::check_order(
  592. const latch_t* latch,
  593. latch_level_t level)
  594. UNIV_NOTHROW
  595. {
  596. ut_ad(latch->get_level() != SYNC_LEVEL_VARYING);
  597. Latches* latches = thread_latches(true);
  598. /* NOTE that there is a problem with _NODE and _LEAF levels: if the
  599. B-tree height changes, then a leaf can change to an internal node
  600. or the other way around. We do not know at present if this can cause
  601. unnecessary assertion failures below. */
  602. switch (level) {
  603. case SYNC_NO_ORDER_CHECK:
  604. case SYNC_EXTERN_STORAGE:
  605. case SYNC_TREE_NODE_FROM_HASH:
  606. /* Do no order checking */
  607. break;
  608. case SYNC_TRX_SYS_HEADER:
  609. if (srv_is_being_started) {
  610. /* This is violated during trx_sys_create_rsegs()
  611. when creating additional rollback segments when
  612. upgrading in innobase_start_or_create_for_mysql(). */
  613. break;
  614. }
  615. /* Fall through */
  616. case SYNC_MONITOR_MUTEX:
  617. case SYNC_RECV:
  618. case SYNC_FTS_BG_THREADS:
  619. case SYNC_WORK_QUEUE:
  620. case SYNC_FTS_TOKENIZE:
  621. case SYNC_FTS_OPTIMIZE:
  622. case SYNC_FTS_CACHE:
  623. case SYNC_FTS_CACHE_INIT:
  624. case SYNC_PAGE_CLEANER:
  625. case SYNC_LOG:
  626. case SYNC_LOG_WRITE:
  627. case SYNC_LOG_FLUSH_ORDER:
  628. case SYNC_FILE_FORMAT_TAG:
  629. case SYNC_DOUBLEWRITE:
  630. case SYNC_SEARCH_SYS:
  631. case SYNC_THREADS:
  632. case SYNC_LOCK_SYS:
  633. case SYNC_LOCK_WAIT_SYS:
  634. case SYNC_TRX_SYS:
  635. case SYNC_IBUF_BITMAP_MUTEX:
  636. case SYNC_REDO_RSEG:
  637. case SYNC_NOREDO_RSEG:
  638. case SYNC_TRX_UNDO:
  639. case SYNC_PURGE_LATCH:
  640. case SYNC_PURGE_QUEUE:
  641. case SYNC_DICT_AUTOINC_MUTEX:
  642. case SYNC_DICT_OPERATION:
  643. case SYNC_DICT_HEADER:
  644. case SYNC_TRX_I_S_RWLOCK:
  645. case SYNC_TRX_I_S_LAST_READ:
  646. case SYNC_IBUF_MUTEX:
  647. case SYNC_INDEX_ONLINE_LOG:
  648. case SYNC_STATS_AUTO_RECALC:
  649. case SYNC_POOL:
  650. case SYNC_POOL_MANAGER:
  651. case SYNC_RECV_WRITER:
  652. basic_check(latches, level, level);
  653. break;
  654. case SYNC_ANY_LATCH:
  655. /* Temporary workaround for LATCH_ID_RTR_*_MUTEX */
  656. if (is_rtr_mutex(latch)) {
  657. const Latched* latched = less(latches, level);
  658. if (latched == NULL
  659. || (latched != NULL
  660. && is_rtr_mutex(latched->m_latch))) {
  661. /* No violation */
  662. break;
  663. }
  664. crash(latches, latched, level);
  665. } else {
  666. basic_check(latches, level, level);
  667. }
  668. break;
  669. case SYNC_TRX:
  670. /* Either the thread must own the lock_sys->mutex, or
  671. it is allowed to own only ONE trx_t::mutex. */
  672. if (less(latches, level) != NULL) {
  673. basic_check(latches, level, level - 1);
  674. ut_a(find(latches, SYNC_LOCK_SYS) != 0);
  675. }
  676. break;
  677. case SYNC_BUF_FLUSH_LIST:
  678. case SYNC_BUF_POOL:
  679. /* We can have multiple mutexes of this type therefore we
  680. can only check whether the greater than condition holds. */
  681. basic_check(latches, level, level - 1);
  682. break;
  683. case SYNC_BUF_PAGE_HASH:
  684. /* Multiple page_hash locks are only allowed during
  685. buf_validate and that is where buf_pool mutex is already
  686. held. */
  687. /* Fall through */
  688. case SYNC_BUF_BLOCK:
  689. /* Either the thread must own the (buffer pool) buf_pool->mutex
  690. or it is allowed to latch only ONE of (buffer block)
  691. block->mutex or buf_pool->zip_mutex. */
  692. if (less(latches, level) != NULL) {
  693. basic_check(latches, level, level - 1);
  694. ut_a(find(latches, SYNC_BUF_POOL) != 0);
  695. }
  696. break;
  697. case SYNC_REC_LOCK:
  698. if (find(latches, SYNC_LOCK_SYS) != 0) {
  699. basic_check(latches, level, SYNC_REC_LOCK - 1);
  700. } else {
  701. basic_check(latches, level, SYNC_REC_LOCK);
  702. }
  703. break;
  704. case SYNC_IBUF_BITMAP:
  705. /* Either the thread must own the master mutex to all
  706. the bitmap pages, or it is allowed to latch only ONE
  707. bitmap page. */
  708. if (find(latches, SYNC_IBUF_BITMAP_MUTEX) != 0) {
  709. basic_check(latches, level, SYNC_IBUF_BITMAP - 1);
  710. } else if (!srv_is_being_started) {
  711. /* This is violated during trx_sys_create_rsegs()
  712. when creating additional rollback segments during
  713. upgrade. */
  714. basic_check(latches, level, SYNC_IBUF_BITMAP);
  715. }
  716. break;
  717. case SYNC_FSP_PAGE:
  718. ut_a(find(latches, SYNC_FSP) != 0);
  719. break;
  720. case SYNC_FSP:
  721. ut_a(find(latches, SYNC_FSP) != 0
  722. || basic_check(latches, level, SYNC_FSP));
  723. break;
  724. case SYNC_TRX_UNDO_PAGE:
  725. /* Purge is allowed to read in as many UNDO pages as it likes.
  726. The purge thread can read the UNDO pages without any covering
  727. mutex. */
  728. ut_a(find(latches, SYNC_TRX_UNDO) != 0
  729. || find(latches, SYNC_REDO_RSEG) != 0
  730. || find(latches, SYNC_NOREDO_RSEG) != 0
  731. || basic_check(latches, level, level - 1));
  732. break;
  733. case SYNC_RSEG_HEADER:
  734. ut_a(find(latches, SYNC_REDO_RSEG) != 0
  735. || find(latches, SYNC_NOREDO_RSEG) != 0);
  736. break;
  737. case SYNC_RSEG_HEADER_NEW:
  738. ut_a(find(latches, SYNC_FSP_PAGE) != 0);
  739. break;
  740. case SYNC_TREE_NODE:
  741. {
  742. const latch_t* fsp_latch;
  743. fsp_latch = find(latches, SYNC_FSP);
  744. ut_a((fsp_latch != NULL
  745. && fsp_latch->is_temp_fsp())
  746. || find(latches, SYNC_INDEX_TREE) != 0
  747. || find(latches, SYNC_DICT_OPERATION)
  748. || basic_check(latches,
  749. level, SYNC_TREE_NODE - 1));
  750. }
  751. break;
  752. case SYNC_TREE_NODE_NEW:
  753. ut_a(find(latches, SYNC_FSP_PAGE) != 0);
  754. break;
  755. case SYNC_INDEX_TREE:
  756. basic_check(latches, level, SYNC_TREE_NODE - 1);
  757. break;
  758. case SYNC_IBUF_TREE_NODE:
  759. ut_a(find(latches, SYNC_IBUF_INDEX_TREE) != 0
  760. || basic_check(latches, level, SYNC_IBUF_TREE_NODE - 1));
  761. break;
  762. case SYNC_IBUF_TREE_NODE_NEW:
  763. /* ibuf_add_free_page() allocates new pages for the change
  764. buffer while only holding the tablespace x-latch. These
  765. pre-allocated new pages may only be used while holding
  766. ibuf_mutex, in btr_page_alloc_for_ibuf(). */
  767. ut_a(find(latches, SYNC_IBUF_MUTEX) != 0
  768. || find(latches, SYNC_FSP) != 0);
  769. break;
  770. case SYNC_IBUF_INDEX_TREE:
  771. if (find(latches, SYNC_FSP) != 0) {
  772. basic_check(latches, level, level - 1);
  773. } else {
  774. basic_check(latches, level, SYNC_IBUF_TREE_NODE - 1);
  775. }
  776. break;
  777. case SYNC_IBUF_PESS_INSERT_MUTEX:
  778. basic_check(latches, level, SYNC_FSP - 1);
  779. ut_a(find(latches, SYNC_IBUF_MUTEX) == 0);
  780. break;
  781. case SYNC_IBUF_HEADER:
  782. basic_check(latches, level, SYNC_FSP - 1);
  783. ut_a(find(latches, SYNC_IBUF_MUTEX) == NULL);
  784. ut_a(find(latches, SYNC_IBUF_PESS_INSERT_MUTEX) == NULL);
  785. break;
  786. case SYNC_DICT:
  787. basic_check(latches, level, SYNC_DICT);
  788. break;
  789. case SYNC_MUTEX:
  790. case SYNC_UNKNOWN:
  791. case SYNC_LEVEL_VARYING:
  792. case RW_LOCK_X:
  793. case RW_LOCK_X_WAIT:
  794. case RW_LOCK_S:
  795. case RW_LOCK_SX:
  796. case RW_LOCK_NOT_LOCKED:
  797. /* These levels should never be set for a latch. */
  798. ut_error;
  799. break;
  800. }
  801. return(latches);
  802. }
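/* Example of the order that check_order() enforces: a thread already
holding a tablespace latch at level SYNC_FSP (LATCH_ID_FIL_SPACE below)
may then acquire a page latch at the lower level SYNC_FSP_PAGE, which is
exactly what the SYNC_FSP_PAGE case above asserts. Requesting the latches
in the opposite order would make basic_check()/less() find the held
SYNC_FSP_PAGE latch at a lower level than the requested SYNC_FSP latch
and call crash(). */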
  803. /** Removes a latch from the thread level array if it is found there.
  804. It is not an error if the latch is not found, as we presently are not
  805. able to determine the level for every latch reservation the program
  806. does.
  807. @param[in] latch The latch that was released/unlocked */
  809. void
  810. LatchDebug::unlock(const latch_t* latch)
  811. UNIV_NOTHROW
  812. {
  813. if (latch->get_level() == SYNC_LEVEL_VARYING) {
  814. // We don't have varying level mutexes
  815. ut_ad(latch->m_rw_lock);
  816. }
  817. Latches* latches;
  818. if (*latch->get_name() == '.') {
  819. /* Ignore diagnostic latches, starting with '.' */
  820. } else if ((latches = thread_latches()) != NULL) {
  821. Latches::reverse_iterator rend = latches->rend();
  822. for (Latches::reverse_iterator it = latches->rbegin();
  823. it != rend;
  824. ++it) {
  825. if (it->m_latch != latch) {
  826. continue;
  827. }
  828. Latches::iterator i = it.base();
  829. latches->erase(--i);
  830. /* If this thread doesn't own any more
  831. latches remove from the map.
  832. FIXME: Perhaps use the master thread
  833. to do purge. Or, do it from close connection.
  834. This could be expensive. */
  835. if (latches->empty()) {
  836. m_mutex.enter();
  837. os_thread_id_t thread_id;
  838. thread_id = os_thread_get_curr_id();
  839. m_threads.erase(thread_id);
  840. m_mutex.exit();
  841. UT_DELETE(latches);
  842. }
  843. return;
  844. }
  845. if (latch->get_level() != SYNC_LEVEL_VARYING) {
  846. ib::error()
  847. << "Couldn't find latch "
  848. << sync_latch_get_name(latch->get_id());
  849. print_latches(latches);
  850. /** Must find the latch. */
  851. ut_error;
  852. }
  853. }
  854. }
  855. /** Get the latch id from a latch name.
  856. @param[in] name Latch name
  857. @return latch id if found else LATCH_ID_NONE. */
  858. latch_id_t
  859. sync_latch_get_id(const char* name)
  860. {
  861. LatchMetaData::const_iterator end = latch_meta.end();
  862. /* Linear scan should be OK, this should be extremely rare. */
  863. for (LatchMetaData::const_iterator it = latch_meta.begin();
  864. it != end;
  865. ++it) {
  866. if (*it == NULL || (*it)->get_id() == LATCH_ID_NONE) {
  867. continue;
  868. } else if (strcmp((*it)->get_name(), name) == 0) {
  869. return((*it)->get_id());
  870. }
  871. }
  872. return(LATCH_ID_NONE);
  873. }
  874. /** Get the latch name from a sync level
  875. @param[in] level Latch level to lookup
  876. @return the latch name, or NULL if not found. */
  877. const char*
  878. sync_latch_get_name(latch_level_t level)
  879. {
  880. LatchMetaData::const_iterator end = latch_meta.end();
  881. /* Linear scan should be OK, this should be extremely rare. */
  882. for (LatchMetaData::const_iterator it = latch_meta.begin();
  883. it != end;
  884. ++it) {
  885. if (*it == NULL || (*it)->get_id() == LATCH_ID_NONE) {
  886. continue;
  887. } else if ((*it)->get_level() == level) {
  888. return((*it)->get_name());
  889. }
  890. }
  891. return(0);
  892. }
  893. /** Check if it is OK to acquire the latch.
  894. @param[in] latch latch type */
  895. void
  896. sync_check_lock_validate(const latch_t* latch)
  897. {
  898. if (LatchDebug::instance() != NULL) {
  899. LatchDebug::instance()->lock_validate(
  900. latch, latch->get_level());
  901. }
  902. }
  903. /** Note that the lock has been granted
  904. @param[in] latch latch type */
  905. void
  906. sync_check_lock_granted(const latch_t* latch)
  907. {
  908. if (LatchDebug::instance() != NULL) {
  909. LatchDebug::instance()->lock_granted(latch, latch->get_level());
  910. }
  911. }
  912. /** Check if it is OK to acquire the latch.
  913. @param[in] latch latch type
  914. @param[in] level Latch level */
  915. void
  916. sync_check_lock(
  917. const latch_t* latch,
  918. latch_level_t level)
  919. {
  920. if (LatchDebug::instance() != NULL) {
  921. ut_ad(latch->get_level() == SYNC_LEVEL_VARYING);
  922. ut_ad(latch->get_id() == LATCH_ID_BUF_BLOCK_LOCK);
  923. LatchDebug::instance()->lock_validate(latch, level);
  924. LatchDebug::instance()->lock_granted(latch, level);
  925. }
  926. }
  927. /** Check if it is OK to re-acquire the lock.
  928. @param[in] latch RW-LOCK to relock (recursive X locks) */
  929. void
  930. sync_check_relock(const latch_t* latch)
  931. {
  932. if (LatchDebug::instance() != NULL) {
  933. LatchDebug::instance()->relock(latch);
  934. }
  935. }
  936. /** Removes a latch from the thread level array if it is found there.
  937. @param[in] latch The latch to unlock */
  938. void
  939. sync_check_unlock(const latch_t* latch)
  940. {
  941. if (LatchDebug::instance() != NULL) {
  942. LatchDebug::instance()->unlock(latch);
  943. }
  944. }
  945. /** Checks if the level array for the current thread contains a
  946. mutex or rw-latch at the specified level.
  947. @param[in] level to find
  948. @return a matching latch, or NULL if not found */
  949. const latch_t*
  950. sync_check_find(latch_level_t level)
  951. {
  952. if (LatchDebug::instance() != NULL) {
  953. return(LatchDebug::instance()->find(level));
  954. }
  955. return(NULL);
  956. }
  957. /** Iterate over the thread's latches.
  958. @param[in,out] functor called for each element.
  959. @return true if the functor returns true for any element */
  960. bool
  961. sync_check_iterate(const sync_check_functor_t& functor)
  962. {
  963. if (LatchDebug* debug = LatchDebug::instance()) {
  964. return(debug->for_each(functor));
  965. }
  966. return(false);
  967. }
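/* Usage sketch for the assertions mentioned in the commit message above
(for example in ha_innobase::inplace_alter_table()). The functor type below
is hypothetical; it only assumes that sync_check_functor_t (declared in
sync0types.h) exposes the virtual bool operator()(latch_level_t) const
that LatchDebug::for_each() above invokes:

	struct dict_latches_only : public sync_check_functor_t {
		// Report a violation for anything other than the
		// data dictionary latches.
		virtual bool operator()(const latch_level_t level) const
		{
			return(level != SYNC_DICT
			       && level != SYNC_DICT_OPERATION);
		}
	};

	ut_ad(!sync_check_iterate(dict_latches_only()));
*/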
  968. /** Enable sync order checking.
  969. Note: We don't enforce any synchronisation checks. The caller must ensure
  970. that no races can occur */
  971. void
  972. sync_check_enable()
  973. {
  974. if (!srv_sync_debug) {
  975. return;
  976. }
  977. /* We should always call this before we create threads. */
  978. LatchDebug::create_instance();
  979. }
  980. /** Initialise the debug data structures */
  981. void
  982. LatchDebug::init()
  983. UNIV_NOTHROW
  984. {
  985. mutex_create(LATCH_ID_RW_LOCK_DEBUG, &rw_lock_debug_mutex);
  986. }
  987. /** Shutdown the latch debug checking
  988. Note: We don't enforce any synchronisation checks. The caller must ensure
  989. that no races can occur */
  990. void
  991. LatchDebug::shutdown()
  992. UNIV_NOTHROW
  993. {
  994. mutex_free(&rw_lock_debug_mutex);
  995. ut_a(s_initialized);
  996. s_initialized = false;
  997. UT_DELETE(s_instance);
  998. LatchDebug::s_instance = NULL;
  999. }
  1000. /** Acquires the debug mutex. We cannot use the mutex defined in sync0sync,
  1001. because the debug mutex is also acquired in sync0arr while holding the OS
  1002. mutex protecting the sync array, and the ordinary mutex_enter might
  1003. recursively call routines in sync0arr, leading to a deadlock on the OS
  1004. mutex. */
  1005. void
  1006. rw_lock_debug_mutex_enter()
  1007. {
  1008. mutex_enter(&rw_lock_debug_mutex);
  1009. }
  1010. /** Releases the debug mutex. */
  1011. void
  1012. rw_lock_debug_mutex_exit()
  1013. {
  1014. mutex_exit(&rw_lock_debug_mutex);
  1015. }
  1016. #endif /* UNIV_DEBUG */
  1017. /* Meta data for all the InnoDB latches. If a latch is not recorded
  1018. here, then it will not be considered for deadlock checks. */
  1019. LatchMetaData latch_meta;
  1020. /** Load the latch meta data. */
  1021. static
  1022. void
  1023. sync_latch_meta_init()
  1024. UNIV_NOTHROW
  1025. {
  1026. latch_meta.resize(LATCH_ID_MAX);
  1027. /* The latches should be ordered on latch_id_t, so that we can
  1028. index directly into the vector to update and fetch meta-data. */
  1029. LATCH_ADD_MUTEX(AUTOINC, SYNC_DICT_AUTOINC_MUTEX, autoinc_mutex_key);
  1030. #if defined PFS_SKIP_BUFFER_MUTEX_RWLOCK || defined PFS_GROUP_BUFFER_SYNC
  1031. LATCH_ADD_MUTEX(BUF_BLOCK_MUTEX, SYNC_BUF_BLOCK, PFS_NOT_INSTRUMENTED);
  1032. #else
  1033. LATCH_ADD_MUTEX(BUF_BLOCK_MUTEX, SYNC_BUF_BLOCK,
  1034. buffer_block_mutex_key);
  1035. #endif /* PFS_SKIP_BUFFER_MUTEX_RWLOCK || PFS_GROUP_BUFFER_SYNC */
  1036. LATCH_ADD_MUTEX(BUF_POOL, SYNC_BUF_POOL, buf_pool_mutex_key);
  1037. LATCH_ADD_MUTEX(BUF_POOL_ZIP, SYNC_BUF_BLOCK, buf_pool_zip_mutex_key);
  1038. LATCH_ADD_MUTEX(CACHE_LAST_READ, SYNC_TRX_I_S_LAST_READ,
  1039. cache_last_read_mutex_key);
  1040. LATCH_ADD_MUTEX(DICT_FOREIGN_ERR, SYNC_NO_ORDER_CHECK,
  1041. dict_foreign_err_mutex_key);
  1042. LATCH_ADD_MUTEX(DICT_SYS, SYNC_DICT, dict_sys_mutex_key);
  1043. LATCH_ADD_MUTEX(FILE_FORMAT_MAX, SYNC_FILE_FORMAT_TAG,
  1044. file_format_max_mutex_key);
  1045. LATCH_ADD_MUTEX(FIL_SYSTEM, SYNC_ANY_LATCH, fil_system_mutex_key);
  1046. LATCH_ADD_MUTEX(FLUSH_LIST, SYNC_BUF_FLUSH_LIST, flush_list_mutex_key);
  1047. LATCH_ADD_MUTEX(FTS_BG_THREADS, SYNC_FTS_BG_THREADS,
  1048. fts_bg_threads_mutex_key);
  1049. LATCH_ADD_MUTEX(FTS_DELETE, SYNC_FTS_OPTIMIZE, fts_delete_mutex_key);
  1050. LATCH_ADD_MUTEX(FTS_OPTIMIZE, SYNC_FTS_OPTIMIZE,
  1051. fts_optimize_mutex_key);
  1052. LATCH_ADD_MUTEX(FTS_DOC_ID, SYNC_FTS_OPTIMIZE, fts_doc_id_mutex_key);
  1053. LATCH_ADD_MUTEX(FTS_PLL_TOKENIZE, SYNC_FTS_TOKENIZE,
  1054. fts_pll_tokenize_mutex_key);
  1055. LATCH_ADD_MUTEX(HASH_TABLE_MUTEX, SYNC_BUF_PAGE_HASH,
  1056. hash_table_mutex_key);
  1057. LATCH_ADD_MUTEX(IBUF_BITMAP, SYNC_IBUF_BITMAP_MUTEX,
  1058. ibuf_bitmap_mutex_key);
  1059. LATCH_ADD_MUTEX(IBUF, SYNC_IBUF_MUTEX, ibuf_mutex_key);
  1060. LATCH_ADD_MUTEX(IBUF_PESSIMISTIC_INSERT, SYNC_IBUF_PESS_INSERT_MUTEX,
  1061. ibuf_pessimistic_insert_mutex_key);
  1062. LATCH_ADD_MUTEX(LOG_SYS, SYNC_LOG, log_sys_mutex_key);
  1063. LATCH_ADD_MUTEX(LOG_WRITE, SYNC_LOG_WRITE, log_sys_write_mutex_key);
  1064. LATCH_ADD_MUTEX(LOG_FLUSH_ORDER, SYNC_LOG_FLUSH_ORDER,
  1065. log_flush_order_mutex_key);
  1066. LATCH_ADD_MUTEX(MUTEX_LIST, SYNC_NO_ORDER_CHECK, mutex_list_mutex_key);
  1067. LATCH_ADD_MUTEX(PAGE_CLEANER, SYNC_PAGE_CLEANER,
  1068. page_cleaner_mutex_key);
  1069. LATCH_ADD_MUTEX(PURGE_SYS_PQ, SYNC_PURGE_QUEUE,
  1070. purge_sys_pq_mutex_key);
  1071. LATCH_ADD_MUTEX(RECALC_POOL, SYNC_STATS_AUTO_RECALC,
  1072. recalc_pool_mutex_key);
  1073. LATCH_ADD_MUTEX(RECV_SYS, SYNC_RECV, recv_sys_mutex_key);
  1074. LATCH_ADD_MUTEX(RECV_WRITER, SYNC_RECV_WRITER, recv_writer_mutex_key);
  1075. LATCH_ADD_MUTEX(REDO_RSEG, SYNC_REDO_RSEG, redo_rseg_mutex_key);
  1076. LATCH_ADD_MUTEX(NOREDO_RSEG, SYNC_NOREDO_RSEG, noredo_rseg_mutex_key);
  1077. #ifdef UNIV_DEBUG
  1078. /* Mutex names starting with '.' are not tracked. They are assumed
  1079. to be diagnostic mutexes used in debugging. */
  1080. latch_meta[LATCH_ID_RW_LOCK_DEBUG] =
  1081. LATCH_ADD_MUTEX(RW_LOCK_DEBUG,
  1082. SYNC_NO_ORDER_CHECK,
  1083. rw_lock_debug_mutex_key);
  1084. #endif /* UNIV_DEBUG */
  1085. LATCH_ADD_MUTEX(RTR_SSN_MUTEX, SYNC_ANY_LATCH, rtr_ssn_mutex_key);
  1086. LATCH_ADD_MUTEX(RTR_ACTIVE_MUTEX, SYNC_ANY_LATCH,
  1087. rtr_active_mutex_key);
  1088. LATCH_ADD_MUTEX(RTR_MATCH_MUTEX, SYNC_ANY_LATCH, rtr_match_mutex_key);
  1089. LATCH_ADD_MUTEX(RTR_PATH_MUTEX, SYNC_ANY_LATCH, rtr_path_mutex_key);
  1090. LATCH_ADD_MUTEX(RW_LOCK_LIST, SYNC_NO_ORDER_CHECK,
  1091. rw_lock_list_mutex_key);
  1092. LATCH_ADD_MUTEX(RW_LOCK_MUTEX, SYNC_NO_ORDER_CHECK, rw_lock_mutex_key);
  1093. LATCH_ADD_MUTEX(SRV_INNODB_MONITOR, SYNC_NO_ORDER_CHECK,
  1094. srv_innodb_monitor_mutex_key);
  1095. LATCH_ADD_MUTEX(SRV_MISC_TMPFILE, SYNC_ANY_LATCH,
  1096. srv_misc_tmpfile_mutex_key);
  1097. LATCH_ADD_MUTEX(SRV_MONITOR_FILE, SYNC_NO_ORDER_CHECK,
  1098. srv_monitor_file_mutex_key);
  1099. LATCH_ADD_MUTEX(BUF_DBLWR, SYNC_DOUBLEWRITE, buf_dblwr_mutex_key);
  1100. LATCH_ADD_MUTEX(TRX_UNDO, SYNC_TRX_UNDO, trx_undo_mutex_key);
  1101. LATCH_ADD_MUTEX(TRX_POOL, SYNC_POOL, trx_pool_mutex_key);
  1102. LATCH_ADD_MUTEX(TRX_POOL_MANAGER, SYNC_POOL_MANAGER,
  1103. trx_pool_manager_mutex_key);
  1104. LATCH_ADD_MUTEX(TRX, SYNC_TRX, trx_mutex_key);
  1105. LATCH_ADD_MUTEX(LOCK_SYS, SYNC_LOCK_SYS, lock_mutex_key);
  1106. LATCH_ADD_MUTEX(LOCK_SYS_WAIT, SYNC_LOCK_WAIT_SYS,
  1107. lock_wait_mutex_key);
  1108. LATCH_ADD_MUTEX(TRX_SYS, SYNC_TRX_SYS, trx_sys_mutex_key);
  1109. LATCH_ADD_MUTEX(SRV_SYS, SYNC_THREADS, srv_sys_mutex_key);
  1110. LATCH_ADD_MUTEX(SRV_SYS_TASKS, SYNC_ANY_LATCH, srv_threads_mutex_key);
  1111. LATCH_ADD_MUTEX(PAGE_ZIP_STAT_PER_INDEX, SYNC_ANY_LATCH,
  1112. page_zip_stat_per_index_mutex_key);
  1113. #ifndef PFS_SKIP_EVENT_MUTEX
  1114. LATCH_ADD_MUTEX(EVENT_MANAGER, SYNC_NO_ORDER_CHECK,
  1115. event_manager_mutex_key);
  1116. #else
  1117. LATCH_ADD_MUTEX(EVENT_MANAGER, SYNC_NO_ORDER_CHECK,
  1118. PFS_NOT_INSTRUMENTED);
  1119. #endif /* !PFS_SKIP_EVENT_MUTEX */
  1120. LATCH_ADD_MUTEX(EVENT_MUTEX, SYNC_NO_ORDER_CHECK, event_mutex_key);
  1121. LATCH_ADD_MUTEX(SYNC_ARRAY_MUTEX, SYNC_NO_ORDER_CHECK,
  1122. sync_array_mutex_key);
  1123. LATCH_ADD_MUTEX(ZIP_PAD_MUTEX, SYNC_NO_ORDER_CHECK, zip_pad_mutex_key);
  1124. LATCH_ADD_MUTEX(OS_AIO_READ_MUTEX, SYNC_NO_ORDER_CHECK,
  1125. PFS_NOT_INSTRUMENTED);
  1126. LATCH_ADD_MUTEX(OS_AIO_WRITE_MUTEX, SYNC_NO_ORDER_CHECK,
  1127. PFS_NOT_INSTRUMENTED);
  1128. LATCH_ADD_MUTEX(OS_AIO_LOG_MUTEX, SYNC_NO_ORDER_CHECK,
  1129. PFS_NOT_INSTRUMENTED);
  1130. LATCH_ADD_MUTEX(OS_AIO_IBUF_MUTEX, SYNC_NO_ORDER_CHECK,
  1131. PFS_NOT_INSTRUMENTED);
  1132. LATCH_ADD_MUTEX(OS_AIO_SYNC_MUTEX, SYNC_NO_ORDER_CHECK,
  1133. PFS_NOT_INSTRUMENTED);
  1134. LATCH_ADD_MUTEX(ROW_DROP_LIST, SYNC_NO_ORDER_CHECK,
  1135. row_drop_list_mutex_key);
  1136. LATCH_ADD_MUTEX(INDEX_ONLINE_LOG, SYNC_INDEX_ONLINE_LOG,
  1137. index_online_log_key);
  1138. LATCH_ADD_MUTEX(WORK_QUEUE, SYNC_WORK_QUEUE, PFS_NOT_INSTRUMENTED);
  1139. // Add the RW locks
  1140. LATCH_ADD_RWLOCK(BTR_SEARCH, SYNC_SEARCH_SYS, btr_search_latch_key);
  1141. LATCH_ADD_RWLOCK(BUF_BLOCK_LOCK, SYNC_LEVEL_VARYING,
  1142. buf_block_lock_key);
  1143. #ifdef UNIV_DEBUG
  1144. LATCH_ADD_RWLOCK(BUF_BLOCK_DEBUG, SYNC_LEVEL_VARYING,
  1145. buf_block_debug_latch_key);
  1146. #endif /* UNIV_DEBUG */
  1147. LATCH_ADD_RWLOCK(DICT_OPERATION, SYNC_DICT_OPERATION,
  1148. dict_operation_lock_key);
  1149. LATCH_ADD_RWLOCK(CHECKPOINT, SYNC_NO_ORDER_CHECK, checkpoint_lock_key);
  1150. LATCH_ADD_RWLOCK(FIL_SPACE, SYNC_FSP, fil_space_latch_key);
  1151. LATCH_ADD_RWLOCK(FTS_CACHE, SYNC_FTS_CACHE, fts_cache_rw_lock_key);
  1152. LATCH_ADD_RWLOCK(FTS_CACHE_INIT, SYNC_FTS_CACHE_INIT,
  1153. fts_cache_init_rw_lock_key);
  1154. LATCH_ADD_RWLOCK(TRX_I_S_CACHE, SYNC_TRX_I_S_RWLOCK,
  1155. trx_i_s_cache_lock_key);
  1156. LATCH_ADD_RWLOCK(TRX_PURGE, SYNC_PURGE_LATCH, trx_purge_latch_key);
  1157. LATCH_ADD_RWLOCK(IBUF_INDEX_TREE, SYNC_IBUF_INDEX_TREE,
  1158. index_tree_rw_lock_key);
  1159. LATCH_ADD_RWLOCK(INDEX_TREE, SYNC_INDEX_TREE, index_tree_rw_lock_key);
  1160. LATCH_ADD_RWLOCK(DICT_TABLE_STATS, SYNC_INDEX_TREE,
  1161. dict_table_stats_key);
  1162. LATCH_ADD_RWLOCK(HASH_TABLE_RW_LOCK, SYNC_BUF_PAGE_HASH,
  1163. hash_table_locks_key);
  1164. LATCH_ADD_MUTEX(SYNC_DEBUG_MUTEX, SYNC_NO_ORDER_CHECK,
  1165. PFS_NOT_INSTRUMENTED);
  1166. /* JAN: TODO: Add PFS instrumentation */
  1167. LATCH_ADD_MUTEX(SCRUB_STAT_MUTEX, SYNC_NO_ORDER_CHECK,
  1168. PFS_NOT_INSTRUMENTED);
  1169. LATCH_ADD_MUTEX(DEFRAGMENT_MUTEX, SYNC_NO_ORDER_CHECK,
  1170. PFS_NOT_INSTRUMENTED);
  1171. LATCH_ADD_MUTEX(BTR_DEFRAGMENT_MUTEX, SYNC_NO_ORDER_CHECK,
  1172. PFS_NOT_INSTRUMENTED);
  1173. LATCH_ADD_MUTEX(MTFLUSH_THREAD_MUTEX, SYNC_NO_ORDER_CHECK,
  1174. PFS_NOT_INSTRUMENTED);
  1175. LATCH_ADD_MUTEX(MTFLUSH_MUTEX, SYNC_NO_ORDER_CHECK,
  1176. PFS_NOT_INSTRUMENTED);
  1177. LATCH_ADD_MUTEX(FIL_CRYPT_MUTEX, SYNC_NO_ORDER_CHECK,
  1178. PFS_NOT_INSTRUMENTED);
  1179. LATCH_ADD_MUTEX(FIL_CRYPT_STAT_MUTEX, SYNC_NO_ORDER_CHECK,
  1180. PFS_NOT_INSTRUMENTED);
  1181. LATCH_ADD_MUTEX(FIL_CRYPT_DATA_MUTEX, SYNC_NO_ORDER_CHECK,
  1182. PFS_NOT_INSTRUMENTED);
  1183. LATCH_ADD_MUTEX(FIL_CRYPT_THREADS_MUTEX, SYNC_NO_ORDER_CHECK,
  1184. PFS_NOT_INSTRUMENTED);
  1185. latch_id_t id = LATCH_ID_NONE;
  1186. /* The array should be ordered on latch ID. We need to
  1187. index directly into it from the mutex policy to update
  1188. the counters and access the meta-data. */
  1189. for (LatchMetaData::iterator it = latch_meta.begin();
  1190. it != latch_meta.end();
  1191. ++it) {
  1192. const latch_meta_t* meta = *it;
  1193. /* Skip blank entries */
  1194. if (meta == NULL || meta->get_id() == LATCH_ID_NONE) {
  1195. continue;
  1196. }
  1197. ut_a(id < meta->get_id());
  1198. id = meta->get_id();
  1199. }
  1200. }
  1201. /** Destroy the latch meta data */
  1202. static
  1203. void
  1204. sync_latch_meta_destroy()
  1205. {
  1206. for (LatchMetaData::iterator it = latch_meta.begin();
  1207. it != latch_meta.end();
  1208. ++it) {
  1209. UT_DELETE(*it);
  1210. }
  1211. latch_meta.clear();
  1212. }
  1213. /** Track mutex file creation name and line number. This is to avoid storing
  1214. { const char* name; uint16_t line; } in every instance, because that would
  1215. make sizeof(Mutex) > 64. We use a lookup table to store it separately. Fetching
  1216. the values is very rare, only required for diagnostic purposes. And, we
  1217. don't create/destroy mutexes that frequently. */
  1218. struct CreateTracker {
  1219. /** Constructor */
  1220. CreateTracker()
  1221. UNIV_NOTHROW
  1222. {
  1223. m_mutex.init();
  1224. }
  1225. /** Destructor */
  1226. ~CreateTracker()
  1227. UNIV_NOTHROW
  1228. {
  1229. ut_ad(m_files.empty());
  1230. m_mutex.destroy();
  1231. }
  1232. /** Register where the latch was created
  1233. @param[in] ptr Latch instance
  1234. @param[in] filename Where created
  1235. @param[in] line Line number in filename */
  1236. void register_latch(
  1237. const void* ptr,
  1238. const char* filename,
  1239. uint16_t line)
  1240. UNIV_NOTHROW
  1241. {
  1242. m_mutex.enter();
  1243. Files::iterator lb = m_files.lower_bound(ptr);
  1244. ut_ad(lb == m_files.end()
  1245. || m_files.key_comp()(ptr, lb->first));
  1246. typedef Files::value_type value_type;
  1247. m_files.insert(lb, value_type(ptr, File(filename, line)));
  1248. m_mutex.exit();
  1249. }
  1250. /** Deregister a latch - when it is destroyed
  1251. @param[in] ptr Latch instance being destroyed */
  1252. void deregister_latch(const void* ptr)
  1253. UNIV_NOTHROW
  1254. {
  1255. m_mutex.enter();
  1256. Files::iterator lb = m_files.lower_bound(ptr);
  1257. ut_ad(lb != m_files.end()
  1258. && !(m_files.key_comp()(ptr, lb->first)));
  1259. m_files.erase(lb);
  1260. m_mutex.exit();
  1261. }
  1262. /** Get the create string, format is "name:line"
  1263. @param[in] ptr Latch instance
  1264. @return the create string or "" if not found */
  1265. std::string get(const void* ptr)
  1266. UNIV_NOTHROW
  1267. {
  1268. m_mutex.enter();
  1269. std::string created;
  1270. Files::iterator lb = m_files.lower_bound(ptr);
  1271. if (lb != m_files.end()
  1272. && !(m_files.key_comp()(ptr, lb->first))) {
  1273. std::ostringstream msg;
  1274. msg << lb->second.m_name << ":" << lb->second.m_line;
  1275. created = msg.str();
  1276. }
  1277. m_mutex.exit();
  1278. return(created);
  1279. }
  1280. private:
  1281. /** For tracking the filename and line number */
  1282. struct File {
  1283. /** Constructor */
  1284. File() UNIV_NOTHROW : m_name(), m_line() { }
  1285. /** Constructor
  1286. @param[in] name Filename where created
  1287. @param[in] line Line number where created */
  1288. File(const char* name, uint16_t line)
  1289. UNIV_NOTHROW
  1290. :
  1291. m_name(sync_basename(name)),
  1292. m_line(line)
  1293. {
  1294. /* No op */
  1295. }
  1296. /** Filename where created */
  1297. std::string m_name;
  1298. /** Line number where created */
  1299. uint16_t m_line;
  1300. };
  1301. /** Map the mutex instance to where it was created */
  1302. typedef std::map<
  1303. const void*,
  1304. File,
  1305. std::less<const void*>,
  1306. ut_allocator<std::pair<const void* const, File> > >
  1307. Files;
  1308. typedef OSMutex Mutex;
  1309. /** Mutex protecting m_files */
  1310. Mutex m_mutex;
  1311. /** Track the latch creation */
  1312. Files m_files;
  1313. };
  1314. /** Tracks latch creation locations, to reduce the size of the latches. */
  1315. static CreateTracker create_tracker;
  1316. /** Register a latch, called when it is created
  1317. @param[in] ptr Latch instance that was created
  1318. @param[in] filename Filename where it was created
  1319. @param[in] line Line number in filename */
  1320. void
  1321. sync_file_created_register(
  1322. const void* ptr,
  1323. const char* filename,
  1324. uint16_t line)
  1325. {
  1326. create_tracker.register_latch(ptr, filename, line);
  1327. }
  1328. /** Deregister a latch, called when it is destroyed
  1329. @param[in] ptr Latch to be destroyed */
  1330. void
  1331. sync_file_created_deregister(const void* ptr)
  1332. {
  1333. create_tracker.deregister_latch(ptr);
  1334. }
  1335. /** Get the string describing where the latch was created. Its format is "name:line"
  1336. @param[in] ptr Latch instance
  1337. @return created information or "" if can't be found */
  1338. std::string
  1339. sync_file_created_get(const void* ptr)
  1340. {
  1341. return(create_tracker.get(ptr));
  1342. }
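/* Usage sketch (with a hypothetical latch object "m" and made-up file and
line values; the real calls are made from the latch creation and
destruction paths):

	sync_file_created_register(&m, "buf0buf.cc", 123);
	// ... later, for diagnostics:
	std::string created = sync_file_created_get(&m);	// "buf0buf.cc:123"
	sync_file_created_deregister(&m);
*/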
  1343. /** Initializes the synchronization data structures. */
  1344. void
  1345. sync_check_init()
  1346. {
  1347. ut_ad(!LatchDebug::s_initialized);
  1348. ut_d(LatchDebug::s_initialized = true);
  1349. sync_latch_meta_init();
  1350. /* Init the rw-lock & mutex list and create the mutex to protect it. */
  1351. UT_LIST_INIT(rw_lock_list, &rw_lock_t::list);
  1352. mutex_create(LATCH_ID_RW_LOCK_LIST, &rw_lock_list_mutex);
  1353. ut_d(LatchDebug::init());
  1354. sync_array_init(OS_THREAD_MAX_N);
  1355. }
  1356. /** Free the InnoDB synchronization data structures. */
  1357. void
  1358. sync_check_close()
  1359. {
  1360. ut_d(LatchDebug::shutdown());
  1361. mutex_free(&rw_lock_list_mutex);
  1362. sync_array_close();
  1363. sync_latch_meta_destroy();
  1364. }