/******************************************************
Mutex, the basic synchronization primitive

(c) 1995 Innobase Oy

Created 9/5/1995 Heikki Tuuri
*******************************************************/

#include "sync0sync.h"
#ifdef UNIV_NONINL
#include "sync0sync.ic"
#endif

#include "sync0rw.h"
#include "buf0buf.h"
#include "srv0srv.h"
#include "buf0types.h"

/*
	REASONS FOR IMPLEMENTING THE SPIN LOCK MUTEX
	============================================

Semaphore operations in operating systems are slow: Solaris on a 1993 Sparc
takes 3 microseconds (us) for a lock-unlock pair and Windows NT on a 1995
Pentium takes 20 microseconds for a lock-unlock pair. Therefore, we have to
implement our own efficient spin lock mutex. Future operating systems may
provide efficient spin locks, but we cannot count on that.

Another reason for implementing a spin lock is that on multiprocessor systems
it can be more efficient for a processor to run a loop waiting for the
semaphore to be released than to switch to a different thread. A thread switch
takes 25 us on both platforms mentioned above. See Gray and Reuter's book
Transaction processing for background.

How long should the spin loop last before suspending the thread? On a
uniprocessor, spinning does not help at all, because if the thread owning the
mutex is not executing, it cannot be released. Spinning actually wastes
resources.

On a multiprocessor, we do not know if the thread owning the mutex is
executing or not. Thus it would make sense to spin as long as the operation
guarded by the mutex would typically last assuming that the thread is
executing. If the mutex is not released by that time, we may assume that the
thread owning the mutex is not executing and suspend the waiting thread.

A typical operation (where no i/o is involved) guarded by a mutex or a
read-write lock may last 1 - 20 us on the current Pentium platform. The
longest operations are the binary searches on an index node.

We conclude that the best choice is to set the spin time at 20 us. Then the
system should work well on a multiprocessor. On a uniprocessor we have to
make sure that thread switches due to mutex collisions are not frequent,
i.e., that they do not happen every 100 us or so, because that wastes too
many resources. If the thread switches are not frequent, the 20 us wasted in
the spin loop is not too much.

Empirical studies on the effect of spin time should be done for different
platforms.

	IMPLEMENTATION OF THE MUTEX
	===========================

For background, see Curt Schimmel's book on Unix implementation on modern
architectures. The key points in the implementation are atomicity and
serialization of memory accesses. The test-and-set instruction (XCHG in
Pentium) must be atomic. As new processors may have weak memory models, also
serialization of memory references may be necessary. The successor of Pentium,
P6, has at least one mode where the memory model is weak. As far as we know,
in Pentium all memory accesses are serialized in the program order and we do
not have to worry about the memory model. On other processors there are
special machine instructions called a fence, memory barrier, or storage
barrier (STBAR in Sparc), which can be used to serialize the memory accesses
to happen in program order relative to the fence instruction.

Leslie Lamport has devised a "bakery algorithm" to implement a mutex without
the atomic test-and-set, but his algorithm should be modified for weak memory
models. We do not use Lamport's algorithm, because we guess it is slower than
the atomic test-and-set.

Our mutex implementation works as follows: we first perform the atomic
test-and-set instruction on the memory word. If the test returns zero, we
know we got the lock first. If the test returns nonzero, some other thread
was quicker and got the lock: then we spin in a loop reading the memory word,
waiting for it to become zero. It is wise to just read the word in the loop,
not to perform numerous test-and-set instructions, because they generate
memory traffic between the cache and the main memory. The read loop can just
access the cache, saving bus bandwidth.

If we cannot acquire the mutex lock within the specified time, we reserve a
cell in the wait array and set the waiters byte in the mutex to 1. To avoid a
race condition, after setting the waiters byte and before suspending the
waiting thread, we still have to check that the mutex is reserved, because it
may have happened that the thread which was holding the mutex has just
released it and did not see the waiters byte set to 1, a case which would
lead the other thread to an infinite wait.

LEMMA 1: After a thread resets the event of the cell it reserves for waiting
========
for a mutex, some thread will eventually call sync_array_signal_object with
the mutex as an argument. Thus no infinite wait is possible.

Proof: After making the reservation, the thread sets the waiters field in the
mutex to 1. Then it checks that the mutex is still reserved by some thread,
or it reserves the mutex for itself. In any case, some thread (which may also
be some earlier thread, not necessarily the one currently holding the mutex)
will set the waiters field to 0 in mutex_exit, and then call
sync_array_signal_object with the mutex as an argument.
Q.E.D. */
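
/* Illustrative sketch only, compiled out with #if 0: a condensed paraphrase
of the acquisition protocol described in the comment above, written to make
the test-and-set / spin / wait-array sequence easy to follow. It is not part
of the original code; the authoritative implementation is mutex_spin_wait()
below, and the helper name mutex_acquire_sketch is hypothetical. */
#if 0
static void
mutex_acquire_sketch(
	mutex_t* mutex,		/* in: mutex to acquire */
	const char* file_name,	/* in: file name of the caller */
	ulint line)		/* in: line number of the caller */
{
	ulint index;	/* index of the reserved wait cell */
	ulint i;	/* spin round count */
retry:
	/* 1. Fast path: a single atomic test-and-set; zero means we got
	the lock first. */
	if (mutex_test_and_set(mutex) == 0) {

		return;
	}

	/* 2. Spin, only reading the lock word, so that the loop runs out
	of the processor cache instead of generating bus traffic. */
	for (i = 0; i < SYNC_SPIN_ROUNDS
	     && mutex_get_lock_word(mutex) != 0; i++) {
		/* optionally ut_delay() a pseudo-random number of
		iterations here, as the real code does */
	}

	if (mutex_test_and_set(mutex) == 0) {

		return;
	}

	/* 3. Reserve a cell in the wait array and only then set the
	waiters byte; the order matters, see LEMMA 1 above. */
	sync_array_reserve_cell(sync_primary_wait_array, mutex,
				SYNC_MUTEX, file_name, line, &index);
	mutex_set_waiters(mutex, 1);

	/* 4. Re-check the lock: the holder may have released the mutex
	just before it could see the waiters byte set to 1. */
	if (mutex_test_and_set(mutex) == 0) {
		sync_array_free_cell_protected(sync_primary_wait_array,
					       index);
		return;
	}

	/* 5. Suspend on the wait cell until mutex_signal_object() wakes
	us up, then start over. */
	sync_array_wait_event(sync_primary_wait_array, index);

	goto retry;
}
#endif /* 0: illustrative sketch only */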

/* The number of system calls made in this module. Intended for performance
monitoring. */
ulint mutex_system_call_count = 0;

/* Number of spin waits on mutexes: for performance monitoring */
/* round=one iteration of a spin loop */
ulint mutex_spin_round_count = 0;
ulint mutex_spin_wait_count = 0;
ulint mutex_os_wait_count = 0;
ulint mutex_exit_count = 0;

/* The global array of wait cells for implementation of the database's own
mutexes and read-write locks */
sync_array_t* sync_primary_wait_array;

/* This variable is set to TRUE when sync_init is called */
ibool sync_initialized = FALSE;

typedef struct sync_level_struct sync_level_t;
typedef struct sync_thread_struct sync_thread_t;

#ifdef UNIV_SYNC_DEBUG
/* The latch levels currently owned by threads are stored in this data
structure; the size of this array is OS_THREAD_MAX_N */
sync_thread_t* sync_thread_level_arrays;

/* Mutex protecting sync_thread_level_arrays */
mutex_t sync_thread_mutex;
#endif /* UNIV_SYNC_DEBUG */

/* Global list of database mutexes (not OS mutexes) created. */
ut_list_base_node_t mutex_list;

/* Mutex protecting the mutex_list variable */
mutex_t mutex_list_mutex;

#ifdef UNIV_SYNC_DEBUG
/* Latching order checks start when this is set TRUE */
ibool sync_order_checks_on = FALSE;
#endif /* UNIV_SYNC_DEBUG */

struct sync_thread_struct{
	os_thread_id_t id;	/* OS thread id */
	sync_level_t* levels;	/* level array for this thread; if this is NULL
				this slot is unused */
};

/* Number of slots reserved for each OS thread in the sync level array */
#define SYNC_THREAD_N_LEVELS 10000

struct sync_level_struct{
	void* latch;	/* pointer to a mutex or an rw-lock; NULL means that
			the slot is empty */
	ulint level;	/* level of the latch in the latching order */
};

/**********************************************************************
Creates, or rather, initializes a mutex object in a specified memory
location (which must be appropriately aligned). The mutex is initialized
in the reset state. Explicit freeing of the mutex with mutex_free is
necessary only if the memory block containing it is freed. */

void
mutex_create_func(
/*==============*/
	mutex_t* mutex,		/* in: pointer to memory */
#ifdef UNIV_DEBUG
	const char* cmutex_name, /* in: mutex name */
# ifdef UNIV_SYNC_DEBUG
	ulint level,		/* in: level */
# endif /* UNIV_SYNC_DEBUG */
#endif /* UNIV_DEBUG */
	const char* cfile_name,	/* in: file name where created */
	ulint cline)		/* in: file line where created */
{
#if defined(_WIN32) && defined(UNIV_CAN_USE_X86_ASSEMBLER)
	mutex_reset_lock_word(mutex);
#else
	os_fast_mutex_init(&(mutex->os_fast_mutex));
	mutex->lock_word = 0;
#endif
	mutex_set_waiters(mutex, 0);
#ifdef UNIV_DEBUG
	mutex->magic_n = MUTEX_MAGIC_N;
#endif /* UNIV_DEBUG */
#ifdef UNIV_SYNC_DEBUG
	mutex->line = 0;
	mutex->file_name = "not yet reserved";
	mutex->level = level;
#endif /* UNIV_SYNC_DEBUG */
	mutex->cfile_name = cfile_name;
	mutex->cline = cline;
#ifndef UNIV_HOTBACKUP
	mutex->count_os_wait = 0;
# ifdef UNIV_DEBUG
	mutex->cmutex_name = cmutex_name;
	mutex->count_using = 0;
	mutex->mutex_type = 0;
	mutex->lspent_time = 0;
	mutex->lmax_spent_time = 0;
	mutex->count_spin_loop = 0;
	mutex->count_spin_rounds = 0;
	mutex->count_os_yield = 0;
# endif /* UNIV_DEBUG */
#endif /* !UNIV_HOTBACKUP */

	/* Check that lock_word is aligned; this is important on Intel */
	ut_ad(((ulint)(&(mutex->lock_word))) % 4 == 0);

	/* NOTE! The very first mutexes are not put to the mutex list */

	if ((mutex == &mutex_list_mutex)
#ifdef UNIV_SYNC_DEBUG
	    || (mutex == &sync_thread_mutex)
#endif /* UNIV_SYNC_DEBUG */
	    ) {

		return;
	}

	mutex_enter(&mutex_list_mutex);

	ut_ad(UT_LIST_GET_LEN(mutex_list) == 0
	      || UT_LIST_GET_FIRST(mutex_list)->magic_n == MUTEX_MAGIC_N);

	UT_LIST_ADD_FIRST(list, mutex_list, mutex);

	mutex_exit(&mutex_list_mutex);
}

/**********************************************************************
Calling this function is obligatory only if the memory buffer containing
the mutex is freed. Removes a mutex object from the mutex list. The mutex
is checked to be in the reset state. */

void
mutex_free(
/*=======*/
	mutex_t* mutex)	/* in: mutex */
{
	ut_ad(mutex_validate(mutex));
	ut_a(mutex_get_lock_word(mutex) == 0);
	ut_a(mutex_get_waiters(mutex) == 0);

	if (mutex != &mutex_list_mutex
#ifdef UNIV_SYNC_DEBUG
	    && mutex != &sync_thread_mutex
#endif /* UNIV_SYNC_DEBUG */
	    ) {

		mutex_enter(&mutex_list_mutex);

		ut_ad(!UT_LIST_GET_PREV(list, mutex)
		      || UT_LIST_GET_PREV(list, mutex)->magic_n
		      == MUTEX_MAGIC_N);
		ut_ad(!UT_LIST_GET_NEXT(list, mutex)
		      || UT_LIST_GET_NEXT(list, mutex)->magic_n
		      == MUTEX_MAGIC_N);

		UT_LIST_REMOVE(list, mutex_list, mutex);

		mutex_exit(&mutex_list_mutex);
	}

#if !defined(_WIN32) || !defined(UNIV_CAN_USE_X86_ASSEMBLER)
	os_fast_mutex_free(&(mutex->os_fast_mutex));
#endif
	/* If we free the mutex protecting the mutex list (freeing is
	not necessary), we have to reset the magic number AFTER removing
	it from the list. */
#ifdef UNIV_DEBUG
	mutex->magic_n = 0;
#endif /* UNIV_DEBUG */
}

/************************************************************************
NOTE! Use the corresponding macro in the header file, not this function
directly. Tries to lock the mutex for the current thread. If the lock is not
acquired immediately, returns with return value 1. */

ulint
mutex_enter_nowait_func(
/*====================*/
				/* out: 0 if succeed, 1 if not */
	mutex_t* mutex,		/* in: pointer to mutex */
	const char* file_name __attribute__((unused)),
				/* in: file name where mutex
				requested */
	ulint line __attribute__((unused)))
				/* in: line where requested */
{
	ut_ad(mutex_validate(mutex));

	if (!mutex_test_and_set(mutex)) {

		ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
		mutex_set_debug_info(mutex, file_name, line);
#endif
		return(0);	/* Succeeded! */
	}

	return(1);
}

#ifdef UNIV_DEBUG
/**********************************************************************
Checks that the mutex has been initialized. */

ibool
mutex_validate(
/*===========*/
	const mutex_t* mutex)
{
	ut_a(mutex);
	ut_a(mutex->magic_n == MUTEX_MAGIC_N);

	return(TRUE);
}

/**********************************************************************
Checks that the current thread owns the mutex. Works only in the debug
version. */

ibool
mutex_own(
/*======*/
				/* out: TRUE if owns */
	const mutex_t* mutex)	/* in: mutex */
{
	ut_ad(mutex_validate(mutex));

	return(mutex_get_lock_word(mutex) == 1
	       && os_thread_eq(mutex->thread_id, os_thread_get_curr_id()));
}
#endif /* UNIV_DEBUG */

/**********************************************************************
Sets the waiters field in a mutex. */

void
mutex_set_waiters(
/*==============*/
	mutex_t* mutex,	/* in: mutex */
	ulint n)	/* in: value to set */
{
	volatile ulint* ptr;	/* declared volatile to ensure that
				the value is stored to memory */
	ut_ad(mutex);

	ptr = &(mutex->waiters);

	*ptr = n;	/* Here we assume that the write of a single
			word in memory is atomic */
}

/**********************************************************************
Reserves a mutex for the current thread. If the mutex is reserved, the
function spins a preset time (controlled by SYNC_SPIN_ROUNDS), waiting
for the mutex before suspending the thread. */

void
mutex_spin_wait(
/*============*/
	mutex_t* mutex,		/* in: pointer to mutex */
	const char* file_name,	/* in: file name where mutex
				requested */
	ulint line)		/* in: line where requested */
{
	ulint index;	/* index of the reserved wait cell */
	ulint i;	/* spin round count */
#if defined UNIV_DEBUG && !defined UNIV_HOTBACKUP
	ib_longlong lstart_time = 0, lfinish_time; /* for timing os_wait */
	ulint ltime_diff;
	ulint sec;
	ulint ms;
	uint timer_started = 0;
#endif /* UNIV_DEBUG && !UNIV_HOTBACKUP */
	ut_ad(mutex);

mutex_loop:

	i = 0;

	/* Spin waiting for the lock word to become zero. Note that we do
	not have to assume that the read access to the lock word is atomic,
	as the actual locking is always committed with atomic test-and-set.
	In reality, however, all processors probably have an atomic read of
	a memory word. */

spin_loop:
#if defined UNIV_DEBUG && !defined UNIV_HOTBACKUP
	mutex_spin_wait_count++;
	mutex->count_spin_loop++;
#endif /* UNIV_DEBUG && !UNIV_HOTBACKUP */

	while (mutex_get_lock_word(mutex) != 0 && i < SYNC_SPIN_ROUNDS) {
		if (srv_spin_wait_delay) {
			ut_delay(ut_rnd_interval(0, srv_spin_wait_delay));
		}

		i++;
	}

	if (i == SYNC_SPIN_ROUNDS) {
#if defined UNIV_DEBUG && !defined UNIV_HOTBACKUP
		mutex->count_os_yield++;
		if (timed_mutexes == 1 && timer_started == 0) {
			ut_usectime(&sec, &ms);
			lstart_time = (ib_longlong)sec * 1000000 + ms;
			timer_started = 1;
		}
#endif /* UNIV_DEBUG && !UNIV_HOTBACKUP */
		os_thread_yield();
	}

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
	fprintf(stderr,
		"Thread %lu spin wait mutex at %p"
		" cfile %s cline %lu rnds %lu\n",
		(ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
		mutex->cfile_name, (ulong) mutex->cline, (ulong) i);
#endif

	mutex_spin_round_count += i;

#if defined UNIV_DEBUG && !defined UNIV_HOTBACKUP
	mutex->count_spin_rounds += i;
#endif /* UNIV_DEBUG && !UNIV_HOTBACKUP */

	if (mutex_test_and_set(mutex) == 0) {
		/* Succeeded! */

		ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
		mutex_set_debug_info(mutex, file_name, line);
#endif
		goto finish_timing;
	}

	/* We may end up with a situation where lock_word is 0 but the OS
	fast mutex is still reserved. On FreeBSD the OS does not seem to
	schedule a thread which is constantly calling pthread_mutex_trylock
	(in mutex_test_and_set implementation). Then we could end up
	spinning here indefinitely. The following 'i++' stops this infinite
	spin. */

	i++;

	if (i < SYNC_SPIN_ROUNDS) {
		goto spin_loop;
	}

	sync_array_reserve_cell(sync_primary_wait_array, mutex,
				SYNC_MUTEX, file_name, line, &index);

	mutex_system_call_count++;

	/* The memory order of the array reservation and the change in the
	waiters field is important: when we suspend a thread, we first
	reserve the cell and then set waiters field to 1. When threads are
	released in mutex_exit, the waiters field is first set to zero and
	then the event is set to the signaled state. */

	mutex_set_waiters(mutex, 1);

	/* Try a few more times to reserve the mutex */
	for (i = 0; i < 4; i++) {
		if (mutex_test_and_set(mutex) == 0) {
			/* Succeeded! Free the reserved wait cell */

			sync_array_free_cell_protected(sync_primary_wait_array,
						       index);

			ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
			mutex_set_debug_info(mutex, file_name, line);
#endif

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
			fprintf(stderr, "Thread %lu spin wait succeeds at 2:"
				" mutex at %p\n",
				(ulong) os_thread_pf(os_thread_get_curr_id()),
				(void*) mutex);
#endif

			goto finish_timing;

			/* Note that in this case we leave the waiters field
			set to 1. We cannot reset it to zero, as we do not
			know if there are other waiters. */
		}
	}

	/* Now we know that some thread has been holding the mutex after
	the changes to the wait array and the waiters field were made.
	There is no longer any risk of an infinite wait on the event. */

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
	fprintf(stderr,
		"Thread %lu OS wait mutex at %p cfile %s cline %lu rnds %lu\n",
		(ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
		mutex->cfile_name, (ulong) mutex->cline, (ulong) i);
#endif

	mutex_system_call_count++;
	mutex_os_wait_count++;

#ifndef UNIV_HOTBACKUP
	mutex->count_os_wait++;
# ifdef UNIV_DEBUG
	/* !!!!! Sometimes os_wait can be called without os_thread_yield */

	if (timed_mutexes == 1 && timer_started == 0) {
		ut_usectime(&sec, &ms);
		lstart_time = (ib_longlong)sec * 1000000 + ms;
		timer_started = 1;
	}
# endif /* UNIV_DEBUG */
#endif /* !UNIV_HOTBACKUP */

	sync_array_wait_event(sync_primary_wait_array, index);

	goto mutex_loop;

finish_timing:
#if defined UNIV_DEBUG && !defined UNIV_HOTBACKUP
	if (timed_mutexes == 1 && timer_started == 1) {
		ut_usectime(&sec, &ms);
		lfinish_time = (ib_longlong)sec * 1000000 + ms;

		ltime_diff = (ulint) (lfinish_time - lstart_time);
		mutex->lspent_time += ltime_diff;

		if (mutex->lmax_spent_time < ltime_diff) {
			mutex->lmax_spent_time = ltime_diff;
		}
	}
#endif /* UNIV_DEBUG && !UNIV_HOTBACKUP */
	return;
}

/**********************************************************************
Releases the threads waiting in the primary wait array for this mutex. */

void
mutex_signal_object(
/*================*/
	mutex_t* mutex)	/* in: mutex */
{
	mutex_set_waiters(mutex, 0);

	/* The memory order of resetting the waiters field and
	signaling the object is important. See LEMMA 1 above. */

	sync_array_signal_object(sync_primary_wait_array, mutex);
}
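
/* Illustrative sketch only, compiled out with #if 0: a minimal picture of
the release side assumed by LEMMA 1 above. The real mutex_exit() is an
inlined function in sync0sync.ic and is not shown in this file, so its exact
behaviour may differ; this sketch only restates what the comments here
describe: release the lock word first, then wake up any registered waiters
via mutex_signal_object(). The helper name mutex_release_sketch is
hypothetical. */
#if 0
static void
mutex_release_sketch(
	mutex_t* mutex)	/* in: mutex to release */
{
	/* Release the lock word so that a spinning thread can grab it. */
	mutex_reset_lock_word(mutex);

	/* If somebody registered as a waiter, reset the waiters field and
	signal the wait array; see the ordering argument in LEMMA 1. */
	if (mutex_get_waiters(mutex) != 0) {

		mutex_signal_object(mutex);
	}
}
#endif /* 0: illustrative sketch only */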

#ifdef UNIV_SYNC_DEBUG
/**********************************************************************
Sets the debug information for a reserved mutex. */

void
mutex_set_debug_info(
/*=================*/
	mutex_t* mutex,		/* in: mutex */
	const char* file_name,	/* in: file where requested */
	ulint line)		/* in: line where requested */
{
	ut_ad(mutex);
	ut_ad(file_name);

	sync_thread_add_level(mutex, mutex->level);

	mutex->file_name = file_name;
	mutex->line = line;
}

/**********************************************************************
Gets the debug information for a reserved mutex. */

void
mutex_get_debug_info(
/*=================*/
	mutex_t* mutex,			/* in: mutex */
	const char** file_name,		/* out: file where requested */
	ulint* line,			/* out: line where requested */
	os_thread_id_t* thread_id)	/* out: id of the thread which owns
					the mutex */
{
	ut_ad(mutex);

	*file_name = mutex->file_name;
	*line = mutex->line;
	*thread_id = mutex->thread_id;
}

/**********************************************************************
Prints debug info of currently reserved mutexes. */
static
void
mutex_list_print_info(
/*==================*/
	FILE* file)	/* in: file where to print */
{
	mutex_t* mutex;
	const char* file_name;
	ulint line;
	os_thread_id_t thread_id;
	ulint count = 0;

	fputs("----------\n"
	      "MUTEX INFO\n"
	      "----------\n", file);

	mutex_enter(&mutex_list_mutex);

	mutex = UT_LIST_GET_FIRST(mutex_list);

	while (mutex != NULL) {
		count++;

		if (mutex_get_lock_word(mutex) != 0) {
			mutex_get_debug_info(mutex, &file_name, &line,
					     &thread_id);
			fprintf(file,
				"Locked mutex: addr %p thread %ld"
				" file %s line %ld\n",
				(void*) mutex, os_thread_pf(thread_id),
				file_name, line);
		}

		mutex = UT_LIST_GET_NEXT(list, mutex);
	}

	fprintf(file, "Total number of mutexes %ld\n", count);

	mutex_exit(&mutex_list_mutex);
}

/**********************************************************************
Counts currently reserved mutexes. Works only in the debug version. */

ulint
mutex_n_reserved(void)
/*==================*/
{
	mutex_t* mutex;
	ulint count = 0;

	mutex_enter(&mutex_list_mutex);

	mutex = UT_LIST_GET_FIRST(mutex_list);

	while (mutex != NULL) {
		if (mutex_get_lock_word(mutex) != 0) {

			count++;
		}

		mutex = UT_LIST_GET_NEXT(list, mutex);
	}

	mutex_exit(&mutex_list_mutex);

	ut_a(count >= 1);

	return(count - 1);	/* Subtract one, because this function itself
				was holding one mutex (mutex_list_mutex) */
}

/**********************************************************************
Returns TRUE if no mutex or rw-lock is currently locked. Works only in
the debug version. */

ibool
sync_all_freed(void)
/*================*/
{
	return(mutex_n_reserved() + rw_lock_n_locked() == 0);
}

/**********************************************************************
Gets the value in the nth slot in the thread level arrays. */
static
sync_thread_t*
sync_thread_level_arrays_get_nth(
/*=============================*/
			/* out: pointer to thread slot */
	ulint n)	/* in: slot number */
{
	ut_ad(n < OS_THREAD_MAX_N);

	return(sync_thread_level_arrays + n);
}

/**********************************************************************
Looks for the thread slot for the calling thread. */
static
sync_thread_t*
sync_thread_level_arrays_find_slot(void)
/*====================================*/
			/* out: pointer to thread slot, NULL if not found */
{
	sync_thread_t* slot;
	os_thread_id_t id;
	ulint i;

	id = os_thread_get_curr_id();

	for (i = 0; i < OS_THREAD_MAX_N; i++) {

		slot = sync_thread_level_arrays_get_nth(i);

		if (slot->levels && os_thread_eq(slot->id, id)) {

			return(slot);
		}
	}

	return(NULL);
}

/**********************************************************************
Looks for an unused thread slot. */
static
sync_thread_t*
sync_thread_level_arrays_find_free(void)
/*====================================*/
			/* out: pointer to thread slot */
{
	sync_thread_t* slot;
	ulint i;

	for (i = 0; i < OS_THREAD_MAX_N; i++) {

		slot = sync_thread_level_arrays_get_nth(i);

		if (slot->levels == NULL) {

			return(slot);
		}
	}

	return(NULL);
}

/**********************************************************************
Gets the value in the nth slot in the thread level array. */
static
sync_level_t*
sync_thread_levels_get_nth(
/*=======================*/
				/* out: pointer to level slot */
	sync_level_t* arr,	/* in: pointer to level array for an OS
				thread */
	ulint n)		/* in: slot number */
{
	ut_ad(n < SYNC_THREAD_N_LEVELS);

	return(arr + n);
}

/**********************************************************************
Checks if all the level values stored in the level array are greater than
the given limit. */
static
ibool
sync_thread_levels_g(
/*=================*/
				/* out: TRUE if all greater */
	sync_level_t* arr,	/* in: pointer to level array for an OS
				thread */
	ulint limit)		/* in: level limit */
{
	sync_level_t* slot;
	rw_lock_t* lock;
	mutex_t* mutex;
	ulint i;

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(arr, i);

		if (slot->latch != NULL) {
			if (slot->level <= limit) {

				lock = slot->latch;
				mutex = slot->latch;

				fprintf(stderr,
					"InnoDB error: sync levels should be"
					" > %lu but a level is %lu\n",
					(ulong) limit, (ulong) slot->level);

				if (mutex->magic_n == MUTEX_MAGIC_N) {
					fprintf(stderr,
						"Mutex created at %s %lu\n",
						mutex->cfile_name,
						(ulong) mutex->cline);

					if (mutex_get_lock_word(mutex) != 0) {
						const char* file_name;
						ulint line;
						os_thread_id_t thread_id;

						mutex_get_debug_info(
							mutex, &file_name,
							&line, &thread_id);

						fprintf(stderr,
							"InnoDB: Locked mutex:"
							" addr %p thread %ld"
							" file %s line %ld\n",
							(void*) mutex,
							os_thread_pf(
								thread_id),
							file_name,
							(ulong) line);
					} else {
						fputs("Not locked\n", stderr);
					}
				} else {
					rw_lock_print(lock);
				}

				return(FALSE);
			}
		}
	}

	return(TRUE);
}

/**********************************************************************
Checks if the level value is stored in the level array. */
static
ibool
sync_thread_levels_contain(
/*=======================*/
				/* out: TRUE if stored */
	sync_level_t* arr,	/* in: pointer to level array for an OS
				thread */
	ulint level)		/* in: level */
{
	sync_level_t* slot;
	ulint i;

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(arr, i);

		if (slot->latch != NULL) {
			if (slot->level == level) {

				return(TRUE);
			}
		}
	}

	return(FALSE);
}

/**********************************************************************
Checks that the level array for the current thread is empty. */

ibool
sync_thread_levels_empty_gen(
/*=========================*/
					/* out: TRUE if empty except the
					exceptions specified below */
	ibool dict_mutex_allowed)	/* in: TRUE if dictionary mutex is
					allowed to be owned by the thread,
					also purge_is_running mutex is
					allowed */
{
	sync_level_t* arr;
	sync_thread_t* thread_slot;
	sync_level_t* slot;
	ulint i;

	if (!sync_order_checks_on) {

		return(TRUE);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		mutex_exit(&sync_thread_mutex);

		return(TRUE);
	}

	arr = thread_slot->levels;

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(arr, i);

		if (slot->latch != NULL
		    && (!dict_mutex_allowed
			|| (slot->level != SYNC_DICT
			    && slot->level != SYNC_DICT_OPERATION))) {

			mutex_exit(&sync_thread_mutex);
			ut_error;

			return(FALSE);
		}
	}

	mutex_exit(&sync_thread_mutex);

	return(TRUE);
}

/**********************************************************************
Checks that the level array for the current thread is empty. */

ibool
sync_thread_levels_empty(void)
/*==========================*/
			/* out: TRUE if empty */
{
	return(sync_thread_levels_empty_gen(FALSE));
}

/**********************************************************************
Adds a latch and its level to the thread level array. Allocates the memory
for the array if called for the first time for this OS thread. Makes the
checks against other latch levels stored in the array for this thread. */

void
sync_thread_add_level(
/*==================*/
	void* latch,	/* in: pointer to a mutex or an rw-lock */
	ulint level)	/* in: level in the latching order; if
			SYNC_LEVEL_VARYING, nothing is done */
{
	sync_level_t* array;
	sync_level_t* slot;
	sync_thread_t* thread_slot;
	ulint i;

	if (!sync_order_checks_on) {

		return;
	}

	if ((latch == (void*)&sync_thread_mutex)
	    || (latch == (void*)&mutex_list_mutex)
	    || (latch == (void*)&rw_lock_debug_mutex)
	    || (latch == (void*)&rw_lock_list_mutex)) {

		return;
	}

	if (level == SYNC_LEVEL_VARYING) {

		return;
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {
		/* We have to allocate the level array for a new thread */
		array = ut_malloc(sizeof(sync_level_t) * SYNC_THREAD_N_LEVELS);

		thread_slot = sync_thread_level_arrays_find_free();

		thread_slot->id = os_thread_get_curr_id();
		thread_slot->levels = array;

		for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

			slot = sync_thread_levels_get_nth(array, i);

			slot->latch = NULL;
		}
	}

	array = thread_slot->levels;

	/* NOTE that there is a problem with _NODE and _LEAF levels: if the
	B-tree height changes, then a leaf can change to an internal node
	or the other way around. We do not know at present if this can cause
	unnecessary assertion failures below. */

	switch (level) {
	case SYNC_NO_ORDER_CHECK:
	case SYNC_EXTERN_STORAGE:
	case SYNC_TREE_NODE_FROM_HASH:
		/* Do no order checking */
		break;
	case SYNC_MEM_POOL:
		ut_a(sync_thread_levels_g(array, SYNC_MEM_POOL));
		break;
	case SYNC_MEM_HASH:
		ut_a(sync_thread_levels_g(array, SYNC_MEM_HASH));
		break;
	case SYNC_RECV:
		ut_a(sync_thread_levels_g(array, SYNC_RECV));
		break;
	case SYNC_WORK_QUEUE:
		ut_a(sync_thread_levels_g(array, SYNC_WORK_QUEUE));
		break;
	case SYNC_LOG:
		ut_a(sync_thread_levels_g(array, SYNC_LOG));
		break;
	case SYNC_THR_LOCAL:
		ut_a(sync_thread_levels_g(array, SYNC_THR_LOCAL));
		break;
	case SYNC_ANY_LATCH:
		ut_a(sync_thread_levels_g(array, SYNC_ANY_LATCH));
		break;
	case SYNC_TRX_SYS_HEADER:
		ut_a(sync_thread_levels_g(array, SYNC_TRX_SYS_HEADER));
		break;
	case SYNC_DOUBLEWRITE:
		ut_a(sync_thread_levels_g(array, SYNC_DOUBLEWRITE));
		break;
	case SYNC_BUF_BLOCK:
		ut_a((sync_thread_levels_contain(array, SYNC_BUF_POOL)
		      && sync_thread_levels_g(array, SYNC_BUF_BLOCK - 1))
		     || sync_thread_levels_g(array, SYNC_BUF_BLOCK));
		break;
	case SYNC_BUF_POOL:
		ut_a(sync_thread_levels_g(array, SYNC_BUF_POOL));
		break;
	case SYNC_SEARCH_SYS:
		ut_a(sync_thread_levels_g(array, SYNC_SEARCH_SYS));
		break;
	case SYNC_TRX_LOCK_HEAP:
		ut_a(sync_thread_levels_g(array, SYNC_TRX_LOCK_HEAP));
		break;
	case SYNC_REC_LOCK:
		ut_a((sync_thread_levels_contain(array, SYNC_KERNEL)
		      && sync_thread_levels_g(array, SYNC_REC_LOCK - 1))
		     || sync_thread_levels_g(array, SYNC_REC_LOCK));
		break;
	case SYNC_KERNEL:
		ut_a(sync_thread_levels_g(array, SYNC_KERNEL));
		break;
	case SYNC_IBUF_BITMAP:
		ut_a((sync_thread_levels_contain(array, SYNC_IBUF_BITMAP_MUTEX)
		      && sync_thread_levels_g(array, SYNC_IBUF_BITMAP - 1))
		     || sync_thread_levels_g(array, SYNC_IBUF_BITMAP));
		break;
	case SYNC_IBUF_BITMAP_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_IBUF_BITMAP_MUTEX));
		break;
	case SYNC_FSP_PAGE:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP));
		break;
	case SYNC_FSP:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP)
		     || sync_thread_levels_g(array, SYNC_FSP));
		break;
	case SYNC_TRX_UNDO_PAGE:
		ut_a(sync_thread_levels_contain(array, SYNC_TRX_UNDO)
		     || sync_thread_levels_contain(array, SYNC_RSEG)
		     || sync_thread_levels_contain(array, SYNC_PURGE_SYS)
		     || sync_thread_levels_g(array, SYNC_TRX_UNDO_PAGE));
		break;
	case SYNC_RSEG_HEADER:
		ut_a(sync_thread_levels_contain(array, SYNC_RSEG));
		break;
	case SYNC_RSEG_HEADER_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_KERNEL)
		     && sync_thread_levels_contain(array, SYNC_FSP_PAGE));
		break;
	case SYNC_RSEG:
		ut_a(sync_thread_levels_g(array, SYNC_RSEG));
		break;
	case SYNC_TRX_UNDO:
		ut_a(sync_thread_levels_g(array, SYNC_TRX_UNDO));
		break;
	case SYNC_PURGE_LATCH:
		ut_a(sync_thread_levels_g(array, SYNC_PURGE_LATCH));
		break;
	case SYNC_PURGE_SYS:
		ut_a(sync_thread_levels_g(array, SYNC_PURGE_SYS));
		break;
	case SYNC_TREE_NODE:
		ut_a(sync_thread_levels_contain(array, SYNC_INDEX_TREE)
		     || sync_thread_levels_g(array, SYNC_TREE_NODE - 1));
		break;
	case SYNC_TREE_NODE_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP_PAGE)
		     || sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		break;
	case SYNC_INDEX_TREE:
		ut_a((sync_thread_levels_contain(array, SYNC_IBUF_MUTEX)
		      && sync_thread_levels_contain(array, SYNC_FSP)
		      && sync_thread_levels_g(array, SYNC_FSP_PAGE - 1))
		     || sync_thread_levels_g(array, SYNC_TREE_NODE - 1));
		break;
	case SYNC_IBUF_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_FSP_PAGE - 1));
		break;
	case SYNC_IBUF_PESS_INSERT_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1)
		     && !sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		break;
	case SYNC_IBUF_HEADER:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1)
		     && !sync_thread_levels_contain(array, SYNC_IBUF_MUTEX)
		     && !sync_thread_levels_contain(
			     array, SYNC_IBUF_PESS_INSERT_MUTEX));
		break;
	case SYNC_DICT_AUTOINC_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_DICT_AUTOINC_MUTEX));
		break;
	case SYNC_DICT_OPERATION:
		ut_a(sync_thread_levels_g(array, SYNC_DICT_OPERATION));
		break;
	case SYNC_DICT_HEADER:
		ut_a(sync_thread_levels_g(array, SYNC_DICT_HEADER));
		break;
	case SYNC_DICT:
#ifdef UNIV_DEBUG
		ut_a(buf_debug_prints
		     || sync_thread_levels_g(array, SYNC_DICT));
#else /* UNIV_DEBUG */
		ut_a(sync_thread_levels_g(array, SYNC_DICT));
#endif /* UNIV_DEBUG */
		break;
	default:
		ut_error;
	}

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(array, i);

		if (slot->latch == NULL) {
			slot->latch = latch;
			slot->level = level;

			break;
		}
	}

	ut_a(i < SYNC_THREAD_N_LEVELS);

	mutex_exit(&sync_thread_mutex);
}

/**********************************************************************
Removes a latch from the thread level array if it is found there. */

ibool
sync_thread_reset_level(
/*====================*/
			/* out: TRUE if found from the array; it is an error
			if the latch is not found */
	void* latch)	/* in: pointer to a mutex or an rw-lock */
{
	sync_level_t* array;
	sync_level_t* slot;
	sync_thread_t* thread_slot;
	ulint i;

	if (!sync_order_checks_on) {

		return(FALSE);
	}

	if ((latch == (void*)&sync_thread_mutex)
	    || (latch == (void*)&mutex_list_mutex)
	    || (latch == (void*)&rw_lock_debug_mutex)
	    || (latch == (void*)&rw_lock_list_mutex)) {

		return(FALSE);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		ut_error;

		mutex_exit(&sync_thread_mutex);
		return(FALSE);
	}

	array = thread_slot->levels;

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(array, i);

		if (slot->latch == latch) {
			slot->latch = NULL;

			mutex_exit(&sync_thread_mutex);

			return(TRUE);
		}
	}

	ut_error;

	mutex_exit(&sync_thread_mutex);

	return(FALSE);
}
#endif /* UNIV_SYNC_DEBUG */

/**********************************************************************
Initializes the synchronization data structures. */

void
sync_init(void)
/*===========*/
{
#ifdef UNIV_SYNC_DEBUG
	sync_thread_t* thread_slot;
	ulint i;
#endif /* UNIV_SYNC_DEBUG */

	ut_a(sync_initialized == FALSE);

	sync_initialized = TRUE;

	/* Create the primary system wait array which is protected by an OS
	mutex */

	sync_primary_wait_array = sync_array_create(OS_THREAD_MAX_N,
						    SYNC_ARRAY_OS_MUTEX);
#ifdef UNIV_SYNC_DEBUG
	/* Create the thread latch level array where the latch levels
	are stored for each OS thread */

	sync_thread_level_arrays = ut_malloc(OS_THREAD_MAX_N
					     * sizeof(sync_thread_t));
	for (i = 0; i < OS_THREAD_MAX_N; i++) {

		thread_slot = sync_thread_level_arrays_get_nth(i);
		thread_slot->levels = NULL;
	}
#endif /* UNIV_SYNC_DEBUG */

	/* Init the mutex list and create the mutex to protect it. */

	UT_LIST_INIT(mutex_list);
	mutex_create(&mutex_list_mutex, SYNC_NO_ORDER_CHECK);
#ifdef UNIV_SYNC_DEBUG
	mutex_create(&sync_thread_mutex, SYNC_NO_ORDER_CHECK);
#endif /* UNIV_SYNC_DEBUG */

	/* Init the rw-lock list and create the mutex to protect it. */

	UT_LIST_INIT(rw_lock_list);
	mutex_create(&rw_lock_list_mutex, SYNC_NO_ORDER_CHECK);

#ifdef UNIV_SYNC_DEBUG
	mutex_create(&rw_lock_debug_mutex, SYNC_NO_ORDER_CHECK);

	rw_lock_debug_event = os_event_create(NULL);
	rw_lock_debug_waiters = FALSE;
#endif /* UNIV_SYNC_DEBUG */
}

/**********************************************************************
Frees the resources in InnoDB's own synchronization data structures. Use
os_sync_free() after calling this. */

void
sync_close(void)
/*===========*/
{
	mutex_t* mutex;

	sync_array_free(sync_primary_wait_array);

	mutex = UT_LIST_GET_FIRST(mutex_list);

	while (mutex) {
		mutex_free(mutex);
		mutex = UT_LIST_GET_FIRST(mutex_list);
	}

	mutex_free(&mutex_list_mutex);
#ifdef UNIV_SYNC_DEBUG
	mutex_free(&sync_thread_mutex);
#endif /* UNIV_SYNC_DEBUG */
}

/***********************************************************************
Prints wait info of the sync system. */

void
sync_print_wait_info(
/*=================*/
	FILE* file)	/* in: file where to print */
{
#ifdef UNIV_SYNC_DEBUG
	fprintf(file, "Mutex exits %lu, rws exits %lu, rwx exits %lu\n",
		mutex_exit_count, rw_s_exit_count, rw_x_exit_count);
#endif

	fprintf(file,
		"Mutex spin waits %lu, rounds %lu, OS waits %lu\n"
		"RW-shared spins %lu, OS waits %lu;"
		" RW-excl spins %lu, OS waits %lu\n",
		(ulong) mutex_spin_wait_count,
		(ulong) mutex_spin_round_count,
		(ulong) mutex_os_wait_count,
		(ulong) rw_s_spin_wait_count,
		(ulong) rw_s_os_wait_count,
		(ulong) rw_x_spin_wait_count,
		(ulong) rw_x_os_wait_count);
}

/***********************************************************************
Prints info of the sync system. */

void
sync_print(
/*=======*/
	FILE* file)	/* in: file where to print */
{
#ifdef UNIV_SYNC_DEBUG
	mutex_list_print_info(file);

	rw_lock_list_print_info(file);
#endif /* UNIV_SYNC_DEBUG */

	sync_array_print_info(file, sync_primary_wait_array);

	sync_print_wait_info(file);
}