/*-------------------------------------------------------------------------
 *
 * atomics.h
 *    Atomic operations.
 *
 * Hardware and compiler dependent functions for manipulating memory
 * atomically and dealing with cache coherency. Used to implement locking
 * facilities and lockless algorithms/data structures.
 *
 * To bring up postgres on a platform/compiler at the very least
 * implementations for the following operations should be provided:
 * * pg_compiler_barrier(), pg_write_barrier(), pg_read_barrier()
 * * pg_atomic_compare_exchange_u32(), pg_atomic_fetch_add_u32()
 * * pg_atomic_test_set_flag(), pg_atomic_init_flag(), pg_atomic_clear_flag()
 * * PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY should be defined if appropriate.
 *
 * There exist generic, hardware independent, implementations for several
 * compilers which might be sufficient, although possibly not optimal, for a
 * new platform. If no such generic implementation is available spinlocks (or
 * even OS provided semaphores) will be used to implement the API.
 *
 * Implement _u64 atomics if and only if your platform can use them
 * efficiently (and obviously correctly).
 *
 * Use higher level functionality (lwlocks, spinlocks, heavyweight locks)
 * whenever possible. Writing correct code using these facilities is hard.
 *
 * For an introduction to using memory barriers within the PostgreSQL backend,
 * see src/backend/storage/lmgr/README.barrier
 *
 * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/port/atomics.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef ATOMICS_H
#define ATOMICS_H

#ifdef FRONTEND
#error "atomics.h may not be included from frontend code"
#endif

#define INSIDE_ATOMICS_H

#include <limits.h>

/*
 * First a set of architecture specific files is included.
 *
 * These files can provide the full set of atomics or can do pretty much
 * nothing if all the compilers commonly used on these platforms provide
 * usable generics.
 *
 * Don't add inline assembly for the actual atomic operations if all the
 * common implementations of your platform provide intrinsics. Intrinsics are
 * much easier to understand and potentially support more architectures.
 *
 * It will often make sense to define memory barrier semantics here, since
 * e.g. generic compiler intrinsics for x86 memory barriers can't know that
 * postgres doesn't need x86 read/write barriers to do anything more than a
 * compiler barrier.
 */
#if defined(__arm__) || defined(__arm) || \
    defined(__aarch64__) || defined(__aarch64)
#include "port/atomics/arch-arm.h"
#elif defined(__i386__) || defined(__i386) || defined(__x86_64__)
#include "port/atomics/arch-x86.h"
#elif defined(__ia64__) || defined(__ia64)
#include "port/atomics/arch-ia64.h"
#elif defined(__ppc__) || defined(__powerpc__) || defined(__ppc64__) || defined(__powerpc64__)
#include "port/atomics/arch-ppc.h"
#elif defined(__hppa) || defined(__hppa__)
#include "port/atomics/arch-hppa.h"
#endif

/*
 * Compiler specific, but architecture independent implementations.
 *
 * Provide architecture independent implementations of the atomic
 * facilities. At the very least compiler barriers should be provided, but a
 * full implementation of
 * * pg_compiler_barrier(), pg_write_barrier(), pg_read_barrier()
 * * pg_atomic_compare_exchange_u32(), pg_atomic_fetch_add_u32()
 * using compiler intrinsics is a good idea.
 */
/*
 * Given a gcc-compatible xlc compiler, prefer the xlc implementation.  The
 * ppc64le "IBM XL C/C++ for Linux, V13.1.2" implements both interfaces, but
 * __sync_lock_test_and_set() of one-byte types elicits SIGSEGV.
 */
#if defined(__IBMC__) || defined(__IBMCPP__)
#include "port/atomics/generic-xlc.h"
/* gcc or compatible, including clang and icc */
#elif defined(__GNUC__) || defined(__INTEL_COMPILER)
#include "port/atomics/generic-gcc.h"
#elif defined(_MSC_VER)
#include "port/atomics/generic-msvc.h"
#elif defined(__hpux) && defined(__ia64) && !defined(__GNUC__)
#include "port/atomics/generic-acc.h"
#elif defined(__SUNPRO_C) && !defined(__GNUC__)
#include "port/atomics/generic-sunpro.h"
#else
/*
 * Unsupported compiler, we'll likely use slower fallbacks...  At least
 * compiler barriers should really be provided.
 */
#endif

/*
 * Provide a full fallback of the pg_*_barrier(), pg_atomic**_flag and
 * pg_atomic_* APIs for platforms without sufficient spinlock and/or atomics
 * support.  In the case of spinlock backed atomics the emulation is expected
 * to be efficient, although less so than native atomics support.
 */
#include "port/atomics/fallback.h"

/*
 * Provide additional operations using supported infrastructure.  These are
 * expected to be efficient if the underlying atomic operations are efficient.
 */
#include "port/atomics/generic.h"

/*
 * pg_compiler_barrier - prevent the compiler from moving code across
 *
 * A compiler barrier need not (and preferably should not) emit any actual
 * machine code, but must act as an optimization fence: the compiler must not
 * reorder loads or stores to main memory around the barrier.  However, the
 * CPU may still reorder loads or stores at runtime, if the architecture's
 * memory model permits this.
 */
#define pg_compiler_barrier()   pg_compiler_barrier_impl()

/*
 * pg_memory_barrier - prevent the CPU from reordering memory access
 *
 * A memory barrier must act as a compiler barrier, and in addition must
 * guarantee that all loads and stores issued prior to the barrier are
 * completed before any loads or stores issued after the barrier.  Unless
 * loads and stores are totally ordered (which is not the case on most
 * architectures) this requires issuing some sort of memory fencing
 * instruction.
 */
#define pg_memory_barrier() pg_memory_barrier_impl()

/*
 * pg_(read|write)_barrier - prevent the CPU from reordering memory access
 *
 * A read barrier must act as a compiler barrier, and in addition must
 * guarantee that any loads issued prior to the barrier are completed before
 * any loads issued after the barrier.  Similarly, a write barrier acts as a
 * compiler barrier, and also orders stores.  Read and write barriers are
 * thus weaker than a full memory barrier, but stronger than a compiler
 * barrier.  In practice, on machines with strong memory ordering, read and
 * write barriers may require nothing more than a compiler barrier.
 */
#define pg_read_barrier()   pg_read_barrier_impl()
#define pg_write_barrier()  pg_write_barrier_impl()

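/*
 * Illustrative sketch (not part of the original header, names hypothetical):
 * the usual write-barrier/read-barrier pairing described in
 * src/backend/storage/lmgr/README.barrier.  The writer fills in the payload
 * before publishing the flag; the reader checks the flag before looking at
 * the payload.
 *
 *      Writer:
 *          shared->data = compute_data();
 *          pg_write_barrier();         // order data store before flag store
 *          shared->ready = true;
 *
 *      Reader:
 *          if (shared->ready)
 *          {
 *              pg_read_barrier();      // order flag load before data load
 *              use(shared->data);
 *          }
 */
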
/*
 * Spinloop delay - Allow CPU to relax in busy loops
 */
#define pg_spin_delay() pg_spin_delay_impl()

/*
 * pg_atomic_init_flag - initialize atomic flag.
 *
 * No barrier semantics.
 */
static inline void
pg_atomic_init_flag(volatile pg_atomic_flag *ptr)
{
    pg_atomic_init_flag_impl(ptr);
}

/*
 * pg_atomic_test_set_flag - TAS()
 *
 * Returns true if the flag has successfully been set, false otherwise.
 *
 * Acquire (including read barrier) semantics.
 */
static inline bool
pg_atomic_test_set_flag(volatile pg_atomic_flag *ptr)
{
    return pg_atomic_test_set_flag_impl(ptr);
}

/*
 * pg_atomic_unlocked_test_flag - Check if the lock is free
 *
 * Returns true if the flag currently is not set, false otherwise.
 *
 * No barrier semantics.
 */
static inline bool
pg_atomic_unlocked_test_flag(volatile pg_atomic_flag *ptr)
{
    return pg_atomic_unlocked_test_flag_impl(ptr);
}

/*
 * pg_atomic_clear_flag - release lock set by TAS()
 *
 * Release (including write barrier) semantics.
 */
static inline void
pg_atomic_clear_flag(volatile pg_atomic_flag *ptr)
{
    pg_atomic_clear_flag_impl(ptr);
}

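/*
 * Illustrative sketch (not part of the original header, names hypothetical):
 * using the flag API as a tiny test-and-set lock.  Real backend code should
 * normally prefer spinlocks or LWLocks; this only shows how the primitives
 * fit together.
 *
 *      static pg_atomic_flag my_lock;      // in shared memory
 *
 *      pg_atomic_init_flag(&my_lock);      // once, before concurrent use
 *
 *      while (!pg_atomic_test_set_flag(&my_lock))
 *          pg_spin_delay();                // busy-wait until acquired
 *      ... critical section ...
 *      pg_atomic_clear_flag(&my_lock);     // release
 */
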
/*
 * pg_atomic_init_u32 - initialize atomic variable
 *
 * Has to be done before any concurrent usage.
 *
 * No barrier semantics.
 */
static inline void
pg_atomic_init_u32(volatile pg_atomic_uint32 *ptr, uint32 val)
{
    AssertPointerAlignment(ptr, 4);
    pg_atomic_init_u32_impl(ptr, val);
}

/*
 * pg_atomic_read_u32 - unlocked read from atomic variable.
 *
 * The read is guaranteed to return a value as it has been written by this or
 * another process at some point in the past.  There is, however, no cache
 * coherency interaction guaranteeing that the value has not been written to
 * again since.
 *
 * No barrier semantics.
 */
static inline uint32
pg_atomic_read_u32(volatile pg_atomic_uint32 *ptr)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_read_u32_impl(ptr);
}

/*
 * pg_atomic_write_u32 - write to atomic variable.
 *
 * The write is guaranteed to succeed as a whole, i.e. it's not possible to
 * observe a partial write for any reader.  Note that this correctly interacts
 * with pg_atomic_compare_exchange_u32, in contrast to
 * pg_atomic_unlocked_write_u32().
 *
 * No barrier semantics.
 */
static inline void
pg_atomic_write_u32(volatile pg_atomic_uint32 *ptr, uint32 val)
{
    AssertPointerAlignment(ptr, 4);
    pg_atomic_write_u32_impl(ptr, val);
}

/*
 * pg_atomic_unlocked_write_u32 - unlocked write to atomic variable.
 *
 * The write is guaranteed to succeed as a whole, i.e. it's not possible to
 * observe a partial write for any reader.  But note that writing this way is
 * not guaranteed to correctly interact with read-modify-write operations like
 * pg_atomic_compare_exchange_u32.  This should only be used in cases where
 * minor performance regressions due to atomics emulation are unacceptable.
 *
 * No barrier semantics.
 */
static inline void
pg_atomic_unlocked_write_u32(volatile pg_atomic_uint32 *ptr, uint32 val)
{
    AssertPointerAlignment(ptr, 4);
    pg_atomic_unlocked_write_u32_impl(ptr, val);
}

/*
 * pg_atomic_exchange_u32 - exchange newval with current value
 *
 * Returns the old value of 'ptr' before the swap.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_exchange_u32(volatile pg_atomic_uint32 *ptr, uint32 newval)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_exchange_u32_impl(ptr, newval);
}

/*
 * pg_atomic_compare_exchange_u32 - CAS operation
 *
 * Atomically compare the current value of ptr with *expected and store newval
 * iff ptr and *expected have the same value.  The current value of *ptr will
 * always be stored in *expected.
 *
 * Return true if values have been exchanged, false otherwise.
 *
 * Full barrier semantics.
 */
static inline bool
pg_atomic_compare_exchange_u32(volatile pg_atomic_uint32 *ptr,
                               uint32 *expected, uint32 newval)
{
    AssertPointerAlignment(ptr, 4);
    AssertPointerAlignment(expected, 4);
    return pg_atomic_compare_exchange_u32_impl(ptr, expected, newval);
}

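/*
 * Illustrative sketch (not part of the original header, helper name
 * hypothetical): the typical compare-and-exchange retry loop, here advancing
 * a shared maximum.  On failure the current value is written back into
 * 'cur', so the loop simply re-evaluates the condition and retries.
 *
 *      static inline void
 *      atomic_max_u32(volatile pg_atomic_uint32 *ptr, uint32 newval)
 *      {
 *          uint32      cur = pg_atomic_read_u32(ptr);
 *
 *          while (cur < newval &&
 *                 !pg_atomic_compare_exchange_u32(ptr, &cur, newval))
 *              ;
 *      }
 */
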
/*
 * pg_atomic_fetch_add_u32 - atomically add to variable
 *
 * Returns the value of ptr before the arithmetic operation.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_fetch_add_u32(volatile pg_atomic_uint32 *ptr, int32 add_)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_fetch_add_u32_impl(ptr, add_);
}

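/*
 * Illustrative sketch (not part of the original header, names hypothetical):
 * a lock-free shared counter built from init/fetch_add/read.
 *
 *      pg_atomic_uint32 nrequests;             // in shared memory
 *
 *      pg_atomic_init_u32(&nrequests, 0);      // during initialization
 *
 *      (void) pg_atomic_fetch_add_u32(&nrequests, 1);  // per request
 *      uint32 seen = pg_atomic_read_u32(&nrequests);   // unlocked snapshot
 */
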
/*
 * pg_atomic_fetch_sub_u32 - atomically subtract from variable
 *
 * Returns the value of ptr before the arithmetic operation.  Note that sub_
 * may not be INT_MIN due to platform limitations.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_fetch_sub_u32(volatile pg_atomic_uint32 *ptr, int32 sub_)
{
    AssertPointerAlignment(ptr, 4);
    Assert(sub_ != INT_MIN);
    return pg_atomic_fetch_sub_u32_impl(ptr, sub_);
}

/*
 * pg_atomic_fetch_and_u32 - atomically bit-and and_ with variable
 *
 * Returns the value of ptr before the arithmetic operation.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_fetch_and_u32(volatile pg_atomic_uint32 *ptr, uint32 and_)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_fetch_and_u32_impl(ptr, and_);
}

/*
 * pg_atomic_fetch_or_u32 - atomically bit-or or_ with variable
 *
 * Returns the value of ptr before the arithmetic operation.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_fetch_or_u32(volatile pg_atomic_uint32 *ptr, uint32 or_)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_fetch_or_u32_impl(ptr, or_);
}

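/*
 * Illustrative sketch (not part of the original header, names hypothetical):
 * treating an atomic word as a set of flag bits, set with fetch_or and
 * cleared with fetch_and.  The returned old value tells whether the bit was
 * already set.
 *
 *      #define MY_FLAG     (1U << 3)
 *
 *      uint32 old = pg_atomic_fetch_or_u32(&state, MY_FLAG);   // set bit
 *      if ((old & MY_FLAG) == 0)
 *          ... we were the one to set it ...
 *
 *      pg_atomic_fetch_and_u32(&state, ~MY_FLAG);              // clear bit
 */
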
/*
 * pg_atomic_add_fetch_u32 - atomically add to variable
 *
 * Returns the value of ptr after the arithmetic operation.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_add_fetch_u32(volatile pg_atomic_uint32 *ptr, int32 add_)
{
    AssertPointerAlignment(ptr, 4);
    return pg_atomic_add_fetch_u32_impl(ptr, add_);
}

/*
 * pg_atomic_sub_fetch_u32 - atomically subtract from variable
 *
 * Returns the value of ptr after the arithmetic operation.  Note that sub_
 * may not be INT_MIN due to platform limitations.
 *
 * Full barrier semantics.
 */
static inline uint32
pg_atomic_sub_fetch_u32(volatile pg_atomic_uint32 *ptr, int32 sub_)
{
    AssertPointerAlignment(ptr, 4);
    Assert(sub_ != INT_MIN);
    return pg_atomic_sub_fetch_u32_impl(ptr, sub_);
}

/* ----
 * The 64 bit operations have the same semantics as their 32bit counterparts
 * if they are available.  Check the corresponding 32bit function for
 * documentation.
 * ----
 */
static inline void
pg_atomic_init_u64(volatile pg_atomic_uint64 *ptr, uint64 val)
{
    /*
     * Can't necessarily enforce alignment - and don't need it - when using
     * the spinlock based fallback implementation.  Therefore only assert
     * when not using it.
     */
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    pg_atomic_init_u64_impl(ptr, val);
}

static inline uint64
pg_atomic_read_u64(volatile pg_atomic_uint64 *ptr)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_read_u64_impl(ptr);
}

static inline void
pg_atomic_write_u64(volatile pg_atomic_uint64 *ptr, uint64 val)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    pg_atomic_write_u64_impl(ptr, val);
}

static inline uint64
pg_atomic_exchange_u64(volatile pg_atomic_uint64 *ptr, uint64 newval)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_exchange_u64_impl(ptr, newval);
}

static inline bool
pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,
                               uint64 *expected, uint64 newval)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
    AssertPointerAlignment(expected, 8);
#endif
    return pg_atomic_compare_exchange_u64_impl(ptr, expected, newval);
}

static inline uint64
pg_atomic_fetch_add_u64(volatile pg_atomic_uint64 *ptr, int64 add_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_fetch_add_u64_impl(ptr, add_);
}

static inline uint64
pg_atomic_fetch_sub_u64(volatile pg_atomic_uint64 *ptr, int64 sub_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    Assert(sub_ != PG_INT64_MIN);
    return pg_atomic_fetch_sub_u64_impl(ptr, sub_);
}

static inline uint64
pg_atomic_fetch_and_u64(volatile pg_atomic_uint64 *ptr, uint64 and_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_fetch_and_u64_impl(ptr, and_);
}

static inline uint64
pg_atomic_fetch_or_u64(volatile pg_atomic_uint64 *ptr, uint64 or_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_fetch_or_u64_impl(ptr, or_);
}

static inline uint64
pg_atomic_add_fetch_u64(volatile pg_atomic_uint64 *ptr, int64 add_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    return pg_atomic_add_fetch_u64_impl(ptr, add_);
}

static inline uint64
pg_atomic_sub_fetch_u64(volatile pg_atomic_uint64 *ptr, int64 sub_)
{
#ifndef PG_HAVE_ATOMIC_U64_SIMULATION
    AssertPointerAlignment(ptr, 8);
#endif
    Assert(sub_ != PG_INT64_MIN);
    return pg_atomic_sub_fetch_u64_impl(ptr, sub_);
}

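/*
 * Illustrative sketch (not part of the original header, names hypothetical):
 * a 64 bit shared counter.  Unless the spinlock based fallback is in use,
 * the variable must be 8-byte aligned, as the asserts above check.
 *
 *      pg_atomic_uint64 total_bytes;           // in shared memory
 *
 *      pg_atomic_init_u64(&total_bytes, 0);
 *      (void) pg_atomic_fetch_add_u64(&total_bytes, (int64) nbytes);
 *      uint64 sofar = pg_atomic_read_u64(&total_bytes);
 */
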
#undef INSIDE_ATOMICS_H

#endif                          /* ATOMICS_H */