  1. <?xml version="1.0" encoding="UTF-8" standalone="no"?>
  2. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>19.4. Resource Consumption</title><link rel="stylesheet" type="text/css" href="stylesheet.css" /><link rev="made" href="pgsql-docs@lists.postgresql.org" /><meta name="generator" content="DocBook XSL Stylesheets V1.79.1" /><link rel="prev" href="runtime-config-connection.html" title="19.3. Connections and Authentication" /><link rel="next" href="runtime-config-wal.html" title="19.5. Write Ahead Log" /></head><body><div xmlns="http://www.w3.org/TR/xhtml1/transitional" class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="5" align="center">19.4. Resource Consumption</th></tr><tr><td width="10%" align="left"><a accesskey="p" href="runtime-config-connection.html" title="19.3. Connections and Authentication">Prev</a> </td><td width="10%" align="left"><a accesskey="u" href="runtime-config.html" title="Chapter 19. Server Configuration">Up</a></td><th width="60%" align="center">Chapter 19. Server Configuration</th><td width="10%" align="right"><a accesskey="h" href="index.html" title="PostgreSQL 12.4 Documentation">Home</a></td><td width="10%" align="right"> <a accesskey="n" href="runtime-config-wal.html" title="19.5. Write Ahead Log">Next</a></td></tr></table><hr></hr></div><div class="sect1" id="RUNTIME-CONFIG-RESOURCE"><div class="titlepage"><div><div><h2 class="title" style="clear: both">19.4. Resource Consumption</h2></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY">19.4.1. Memory</a></span></dt><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-DISK">19.4.2. Disk</a></span></dt><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-KERNEL">19.4.3. Kernel Resource Usage</a></span></dt><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST">19.4.4. Cost-based Vacuum Delay</a></span></dt><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-BACKGROUND-WRITER">19.4.5. Background Writer</a></span></dt><dt><span class="sect2"><a href="runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR">19.4.6. Asynchronous Behavior</a></span></dt></dl></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-MEMORY"><div class="titlepage"><div><div><h3 class="title">19.4.1. Memory</h3></div></div></div><div class="variablelist"><dl class="variablelist"><dt id="GUC-SHARED-BUFFERS"><span class="term"><code class="varname">shared_buffers</code> (<code class="type">integer</code>)
  3. <a id="id-1.6.6.7.2.2.1.1.3" class="indexterm"></a>
  4. </span></dt><dd><p>
  5. Sets the amount of memory the database server uses for shared
  6. memory buffers. The default is typically 128 megabytes
  7. (<code class="literal">128MB</code>), but might be less if your kernel settings will
  8. not support it (as determined during <span class="application">initdb</span>).
  9. This setting must be at least 128 kilobytes. However,
  10. settings significantly higher than the minimum are usually needed
  11. for good performance.
  12. If this value is specified without units, it is taken as blocks,
  13. that is <code class="symbol">BLCKSZ</code> bytes, typically 8kB.
  14. (Non-default values of <code class="symbol">BLCKSZ</code> change the minimum
  15. value.)
  16. This parameter can only be set at server start.
  17. </p><p>
  18. If you have a dedicated database server with 1GB or more of RAM, a
  19. reasonable starting value for <code class="varname">shared_buffers</code> is 25%
  20. of the memory in your system. There are some workloads where even
  21. larger settings for <code class="varname">shared_buffers</code> are effective, but
  22. because <span class="productname">PostgreSQL</span> also relies on the
  23. operating system cache, it is unlikely that an allocation of more than
  24. 40% of RAM to <code class="varname">shared_buffers</code> will work better than a
  25. smaller amount. Larger settings for <code class="varname">shared_buffers</code>
  26. usually require a corresponding increase in
  27. <code class="varname">max_wal_size</code>, in order to spread out the
  28. process of writing large quantities of new or changed data over a
  29. longer period of time.
  30. </p><p>
  31. On systems with less than 1GB of RAM, a smaller percentage of RAM is
  32. appropriate, so as to leave adequate space for the operating system.
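</p><p>
For example, on a hypothetical dedicated server with 16GB of RAM, a starting
configuration along the lines suggested above might look like this (the values
are illustrative assumptions, not recommendations):
</p><pre class="programlisting">
-- Illustrative only: hypothetical dedicated server with 16GB of RAM.
ALTER SYSTEM SET shared_buffers = '4GB';   -- roughly 25% of RAM
ALTER SYSTEM SET max_wal_size = '4GB';     -- raised along with shared_buffers
-- shared_buffers can only be set at server start, so a restart is
-- required before the new value takes effect.
</pre><p>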
  33. </p></dd><dt id="GUC-HUGE-PAGES"><span class="term"><code class="varname">huge_pages</code> (<code class="type">enum</code>)
  34. <a id="id-1.6.6.7.2.2.2.1.3" class="indexterm"></a>
  35. </span></dt><dd><p>
  36. Controls whether huge pages are requested for the main shared memory
  37. area. Valid values are <code class="literal">try</code> (the default),
  38. <code class="literal">on</code>, and <code class="literal">off</code>. With
  39. <code class="varname">huge_pages</code> set to <code class="literal">try</code>, the
  40. server will try to request huge pages, but fall back to the default if
  41. that fails. With <code class="literal">on</code>, failure to request huge pages
  42. will prevent the server from starting up. With <code class="literal">off</code>,
  43. huge pages will not be requested.
  44. </p><p>
  45. At present, this setting is supported only on Linux and Windows. The
  46. setting is ignored on other systems when set to
  47. <code class="literal">try</code>.
  48. </p><p>
  49. The use of huge pages results in smaller page tables and less CPU time
  50. spent on memory management, increasing performance. For more details about
  51. using huge pages on Linux, see <a class="xref" href="kernel-resources.html#LINUX-HUGE-PAGES" title="18.4.5. Linux Huge Pages">Section 18.4.5</a>.
  52. </p><p>
  53. Huge pages are known as large pages on Windows. To use them, you need to
  54. assign the user right Lock Pages in Memory to the Windows user account
  55. that runs <span class="productname">PostgreSQL</span>.
You can use the Windows Group Policy tool (gpedit.msc) to assign the user right
Lock Pages in Memory.
To start the database server from a command prompt as a standalone process,
rather than as a Windows service, the command prompt must be run as an administrator,
or User Account Control (UAC) must be disabled. When UAC is enabled, a normal
command prompt drops the user right Lock Pages in Memory when it is started.
  62. </p><p>
  63. Note that this setting only affects the main shared memory area.
  64. Operating systems such as Linux, FreeBSD, and Illumos can also use
  65. huge pages (also known as <span class="quote">“<span class="quote">super</span>”</span> pages or
  66. <span class="quote">“<span class="quote">large</span>”</span> pages) automatically for normal memory
  67. allocation, without an explicit request from
  68. <span class="productname">PostgreSQL</span>. On Linux, this is called
  69. <span class="quote">“<span class="quote">transparent huge pages</span>”</span><a id="id-1.6.6.7.2.2.2.2.5.5" class="indexterm"></a> (THP). That feature has been known to
  70. cause performance degradation with
  71. <span class="productname">PostgreSQL</span> for some users on some Linux
  72. versions, so its use is currently discouraged (unlike explicit use of
  73. <code class="varname">huge_pages</code>).
  74. </p></dd><dt id="GUC-TEMP-BUFFERS"><span class="term"><code class="varname">temp_buffers</code> (<code class="type">integer</code>)
  75. <a id="id-1.6.6.7.2.2.3.1.3" class="indexterm"></a>
  76. </span></dt><dd><p>
  77. Sets the maximum amount of memory used for temporary buffers within
  78. each database session. These are session-local buffers used only
  79. for access to temporary tables.
  80. If this value is specified without units, it is taken as blocks,
  81. that is <code class="symbol">BLCKSZ</code> bytes, typically 8kB.
  82. The default is eight megabytes (<code class="literal">8MB</code>).
  83. (If <code class="symbol">BLCKSZ</code> is not 8kB, the default value scales
  84. proportionally to it.)
  85. This setting can be changed within individual
  86. sessions, but only before the first use of temporary tables
  87. within the session; subsequent attempts to change the value will
  88. have no effect on that session.
  89. </p><p>
  90. A session will allocate temporary buffers as needed up to the limit
  91. given by <code class="varname">temp_buffers</code>. The cost of setting a large
  92. value in sessions that do not actually need many temporary
  93. buffers is only a buffer descriptor, or about 64 bytes, per
  94. increment in <code class="varname">temp_buffers</code>. However if a buffer is
  95. actually used an additional 8192 bytes will be consumed for it
  96. (or in general, <code class="symbol">BLCKSZ</code> bytes).
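</p><p>
For example, a session that expects to make heavy use of temporary tables can
raise its limit, but only before the first temporary table is touched (an
illustrative sketch; the table is hypothetical):
</p><pre class="programlisting">
-- Must be issued before the first use of temporary tables in the session.
SET temp_buffers = '64MB';
CREATE TEMPORARY TABLE scratch (id int, payload text);  -- hypothetical table
</pre><p>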
  97. </p></dd><dt id="GUC-MAX-PREPARED-TRANSACTIONS"><span class="term"><code class="varname">max_prepared_transactions</code> (<code class="type">integer</code>)
  98. <a id="id-1.6.6.7.2.2.4.1.3" class="indexterm"></a>
  99. </span></dt><dd><p>
  100. Sets the maximum number of transactions that can be in the
  101. <span class="quote">“<span class="quote">prepared</span>”</span> state simultaneously (see <a class="xref" href="sql-prepare-transaction.html" title="PREPARE TRANSACTION"><span class="refentrytitle">PREPARE TRANSACTION</span></a>).
  102. Setting this parameter to zero (which is the default)
  103. disables the prepared-transaction feature.
  104. This parameter can only be set at server start.
  105. </p><p>
  106. If you are not planning to use prepared transactions, this parameter
  107. should be set to zero to prevent accidental creation of prepared
  108. transactions. If you are using prepared transactions, you will
  109. probably want <code class="varname">max_prepared_transactions</code> to be at
  110. least as large as <a class="xref" href="runtime-config-connection.html#GUC-MAX-CONNECTIONS">max_connections</a>, so that every
  111. session can have a prepared transaction pending.
  112. </p><p>
When running a standby server, you must set this parameter to a value
equal to or higher than its value on the master server. Otherwise, queries
will not be allowed on the standby server.
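</p><p>
A minimal sketch of enabling prepared transactions, assuming
<code class="varname">max_connections</code> is 100 (illustrative values):
</p><pre class="programlisting">
-- Illustrative: allow one pending prepared transaction per connection.
ALTER SYSTEM SET max_prepared_transactions = 100;  -- match max_connections
-- This parameter can only be set at server start, so restart afterwards.
</pre><p>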
  116. </p></dd><dt id="GUC-WORK-MEM"><span class="term"><code class="varname">work_mem</code> (<code class="type">integer</code>)
  117. <a id="id-1.6.6.7.2.2.5.1.3" class="indexterm"></a>
  118. </span></dt><dd><p>
  119. Sets the maximum amount of memory to be used by a query operation
  120. (such as a sort or hash table) before writing to temporary disk files.
  121. If this value is specified without units, it is taken as kilobytes.
  122. The default value is four megabytes (<code class="literal">4MB</code>).
  123. Note that for a complex query, several sort or hash operations might be
  124. running in parallel; each operation will be allowed to use as much memory
  125. as this value specifies before it starts to write data into temporary
  126. files. Also, several running sessions could be doing such operations
  127. concurrently. Therefore, the total memory used could be many
  128. times the value of <code class="varname">work_mem</code>; it is necessary to
  129. keep this fact in mind when choosing the value. Sort operations are
  130. used for <code class="literal">ORDER BY</code>, <code class="literal">DISTINCT</code>, and
  131. merge joins.
  132. Hash tables are used in hash joins, hash-based aggregation, and
  133. hash-based processing of <code class="literal">IN</code> subqueries.
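</p><p>
The multiplicative effect is worth spelling out: with, say, 100 concurrent
sessions each running a query that performs two sort or hash operations at the
default 4MB apiece, total usage could approach 800MB. A hedged illustration of
raising the limit for a single memory-hungry session only:
</p><pre class="programlisting">
-- Illustrative: raise the per-operation limit for this session only.
SET work_mem = '64MB';
-- A single query can still use several multiples of this value if it
-- runs several sort or hash operations concurrently.
</pre><p>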
  134. </p></dd><dt id="GUC-MAINTENANCE-WORK-MEM"><span class="term"><code class="varname">maintenance_work_mem</code> (<code class="type">integer</code>)
  135. <a id="id-1.6.6.7.2.2.6.1.3" class="indexterm"></a>
  136. </span></dt><dd><p>
  137. Specifies the maximum amount of memory to be used by maintenance
  138. operations, such as <code class="command">VACUUM</code>, <code class="command">CREATE
  139. INDEX</code>, and <code class="command">ALTER TABLE ADD FOREIGN KEY</code>.
  140. If this value is specified without units, it is taken as kilobytes.
  141. It defaults
  142. to 64 megabytes (<code class="literal">64MB</code>). Since only one of these
  143. operations can be executed at a time by a database session, and
  144. an installation normally doesn't have many of them running
  145. concurrently, it's safe to set this value significantly larger
  146. than <code class="varname">work_mem</code>. Larger settings might improve
  147. performance for vacuuming and for restoring database dumps.
  148. </p><p>
  149. Note that when autovacuum runs, up to
  150. <a class="xref" href="runtime-config-autovacuum.html#GUC-AUTOVACUUM-MAX-WORKERS">autovacuum_max_workers</a> times this memory
  151. may be allocated, so be careful not to set the default value
  152. too high. It may be useful to control for this by separately
  153. setting <a class="xref" href="runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM">autovacuum_work_mem</a>.
  154. </p></dd><dt id="GUC-AUTOVACUUM-WORK-MEM"><span class="term"><code class="varname">autovacuum_work_mem</code> (<code class="type">integer</code>)
  155. <a id="id-1.6.6.7.2.2.7.1.3" class="indexterm"></a>
  156. </span></dt><dd><p>
  157. Specifies the maximum amount of memory to be used by each
  158. autovacuum worker process.
  159. If this value is specified without units, it is taken as kilobytes.
  160. It defaults to -1, indicating that
  161. the value of <a class="xref" href="runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM">maintenance_work_mem</a> should
  162. be used instead. The setting has no effect on the behavior of
  163. <code class="command">VACUUM</code> when run in other contexts.
  164. </p></dd><dt id="GUC-MAX-STACK-DEPTH"><span class="term"><code class="varname">max_stack_depth</code> (<code class="type">integer</code>)
  165. <a id="id-1.6.6.7.2.2.8.1.3" class="indexterm"></a>
  166. </span></dt><dd><p>
  167. Specifies the maximum safe depth of the server's execution stack.
  168. The ideal setting for this parameter is the actual stack size limit
  169. enforced by the kernel (as set by <code class="literal">ulimit -s</code> or local
  170. equivalent), less a safety margin of a megabyte or so. The safety
  171. margin is needed because the stack depth is not checked in every
  172. routine in the server, but only in key potentially-recursive routines.
  173. If this value is specified without units, it is taken as kilobytes.
  174. The default setting is two megabytes (<code class="literal">2MB</code>), which
  175. is conservatively small and unlikely to risk crashes. However,
  176. it might be too small to allow execution of complex functions.
  177. Only superusers can change this setting.
  178. </p><p>
  179. Setting <code class="varname">max_stack_depth</code> higher than
  180. the actual kernel limit will mean that a runaway recursive function
  181. can crash an individual backend process. On platforms where
  182. <span class="productname">PostgreSQL</span> can determine the kernel limit,
  183. the server will not allow this variable to be set to an unsafe
  184. value. However, not all platforms provide the information,
  185. so caution is recommended in selecting a value.
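</p><p>
For example, on a hypothetical system where <code class="literal">ulimit -s</code>
reports 8192 (kilobytes), a setting somewhat below that limit preserves the
recommended safety margin (illustrative value; superusers only):
</p><pre class="programlisting">
-- Illustrative: kernel stack limit assumed to be 8MB (ulimit -s = 8192).
SET max_stack_depth = '7MB';  -- leaves roughly a 1MB safety margin
</pre><p>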
  186. </p></dd><dt id="GUC-SHARED-MEMORY-TYPE"><span class="term"><code class="varname">shared_memory_type</code> (<code class="type">enum</code>)
  187. <a id="id-1.6.6.7.2.2.9.1.3" class="indexterm"></a>
  188. </span></dt><dd><p>
  189. Specifies the shared memory implementation that the server
  190. should use for the main shared memory region that holds
  191. <span class="productname">PostgreSQL</span>'s shared buffers and other
  192. shared data. Possible values are <code class="literal">mmap</code> (for
  193. anonymous shared memory allocated using <code class="function">mmap</code>),
  194. <code class="literal">sysv</code> (for System V shared memory allocated via
  195. <code class="function">shmget</code>) and <code class="literal">windows</code> (for Windows
  196. shared memory). Not all values are supported on all platforms; the
  197. first supported option is the default for that platform. The use of
  198. the <code class="literal">sysv</code> option, which is not the default on any
  199. platform, is generally discouraged because it typically requires
  200. non-default kernel settings to allow for large allocations (see <a class="xref" href="kernel-resources.html#SYSVIPC" title="18.4.1. Shared Memory and Semaphores">Section 18.4.1</a>).
  201. </p></dd><dt id="GUC-DYNAMIC-SHARED-MEMORY-TYPE"><span class="term"><code class="varname">dynamic_shared_memory_type</code> (<code class="type">enum</code>)
  202. <a id="id-1.6.6.7.2.2.10.1.3" class="indexterm"></a>
  203. </span></dt><dd><p>
  204. Specifies the dynamic shared memory implementation that the server
  205. should use. Possible values are <code class="literal">posix</code> (for POSIX shared
  206. memory allocated using <code class="literal">shm_open</code>), <code class="literal">sysv</code>
  207. (for System V shared memory allocated via <code class="literal">shmget</code>),
  208. <code class="literal">windows</code> (for Windows shared memory),
  209. and <code class="literal">mmap</code> (to simulate shared memory using
  210. memory-mapped files stored in the data directory).
  211. Not all values are supported on all platforms; the first supported
  212. option is the default for that platform. The use of the
  213. <code class="literal">mmap</code> option, which is not the default on any platform,
  214. is generally discouraged because the operating system may write
  215. modified pages back to disk repeatedly, increasing system I/O load;
  216. however, it may be useful for debugging, when the
  217. <code class="literal">pg_dynshmem</code> directory is stored on a RAM disk, or when
  218. other shared memory facilities are not available.
  219. </p></dd></dl></div></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-DISK"><div class="titlepage"><div><div><h3 class="title">19.4.2. Disk</h3></div></div></div><div class="variablelist"><dl class="variablelist"><dt id="GUC-TEMP-FILE-LIMIT"><span class="term"><code class="varname">temp_file_limit</code> (<code class="type">integer</code>)
  220. <a id="id-1.6.6.7.3.2.1.1.3" class="indexterm"></a>
  221. </span></dt><dd><p>
  222. Specifies the maximum amount of disk space that a process can use
  223. for temporary files, such as sort and hash temporary files, or the
  224. storage file for a held cursor. A transaction attempting to exceed
  225. this limit will be canceled.
  226. If this value is specified without units, it is taken as kilobytes.
  227. <code class="literal">-1</code> (the default) means no limit.
  228. Only superusers can change this setting.
  229. </p><p>
  230. This setting constrains the total space used at any instant by all
  231. temporary files used by a given <span class="productname">PostgreSQL</span> process.
  232. It should be noted that disk space used for explicit temporary
  233. tables, as opposed to temporary files used behind-the-scenes in query
  234. execution, does <span class="emphasis"><em>not</em></span> count against this limit.
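</p><p>
A minimal illustration (the value is an assumption, not a recommendation) of
capping per-process temporary-file space:
</p><pre class="programlisting">
-- Illustrative: cancel any transaction whose process uses more than 20GB
-- of temporary-file space.
ALTER SYSTEM SET temp_file_limit = '20GB';
SELECT pg_reload_conf();
</pre><p>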
  235. </p></dd></dl></div></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-KERNEL"><div class="titlepage"><div><div><h3 class="title">19.4.3. Kernel Resource Usage</h3></div></div></div><div class="variablelist"><dl class="variablelist"><dt id="GUC-MAX-FILES-PER-PROCESS"><span class="term"><code class="varname">max_files_per_process</code> (<code class="type">integer</code>)
  236. <a id="id-1.6.6.7.4.2.1.1.3" class="indexterm"></a>
  237. </span></dt><dd><p>
  238. Sets the maximum number of simultaneously open files allowed to each
  239. server subprocess. The default is one thousand files. If the kernel is enforcing
  240. a safe per-process limit, you don't need to worry about this setting.
  241. But on some platforms (notably, most BSD systems), the kernel will
  242. allow individual processes to open many more files than the system
  243. can actually support if many processes all try to open
  244. that many files. If you find yourself seeing <span class="quote">“<span class="quote">Too many open
  245. files</span>”</span> failures, try reducing this setting.
  246. This parameter can only be set at server start.
  247. </p></dd></dl></div></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-VACUUM-COST"><div class="titlepage"><div><div><h3 class="title">19.4.4. Cost-based Vacuum Delay</h3></div></div></div><p>
  248. During the execution of <a class="xref" href="sql-vacuum.html" title="VACUUM"><span class="refentrytitle">VACUUM</span></a>
  249. and <a class="xref" href="sql-analyze.html" title="ANALYZE"><span class="refentrytitle">ANALYZE</span></a>
  250. commands, the system maintains an
  251. internal counter that keeps track of the estimated cost of the
  252. various I/O operations that are performed. When the accumulated
  253. cost reaches a limit (specified by
  254. <code class="varname">vacuum_cost_limit</code>), the process performing
  255. the operation will sleep for a short period of time, as specified by
  256. <code class="varname">vacuum_cost_delay</code>. Then it will reset the
  257. counter and continue execution.
  258. </p><p>
  259. The intent of this feature is to allow administrators to reduce
  260. the I/O impact of these commands on concurrent database
  261. activity. There are many situations where it is not
  262. important that maintenance commands like
  263. <code class="command">VACUUM</code> and <code class="command">ANALYZE</code> finish
  264. quickly; however, it is usually very important that these
  265. commands do not significantly interfere with the ability of the
  266. system to perform other database operations. Cost-based vacuum
  267. delay provides a way for administrators to achieve this.
  268. </p><p>
  269. This feature is disabled by default for manually issued
  270. <code class="command">VACUUM</code> commands. To enable it, set the
  271. <code class="varname">vacuum_cost_delay</code> variable to a nonzero
  272. value.
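</p><p>
For example, to throttle manually issued <code class="command">VACUUM</code>
commands cluster-wide (illustrative values; the individual parameters are
described below):
</p><pre class="programlisting">
-- Illustrative: enable cost-based delay for manual VACUUM commands.
ALTER SYSTEM SET vacuum_cost_delay = '2ms';
ALTER SYSTEM SET vacuum_cost_limit = 200;   -- the default, shown for clarity
SELECT pg_reload_conf();
</pre><p>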
  273. </p><div class="variablelist"><dl class="variablelist"><dt id="GUC-VACUUM-COST-DELAY"><span class="term"><code class="varname">vacuum_cost_delay</code> (<code class="type">floating point</code>)
  274. <a id="id-1.6.6.7.5.5.1.1.3" class="indexterm"></a>
  275. </span></dt><dd><p>
  276. The amount of time that the process will sleep
  277. when the cost limit has been exceeded.
  278. If this value is specified without units, it is taken as milliseconds.
  279. The default value is zero, which disables the cost-based vacuum
  280. delay feature. Positive values enable cost-based vacuuming.
  281. </p><p>
  282. When using cost-based vacuuming, appropriate values for
  283. <code class="varname">vacuum_cost_delay</code> are usually quite small, perhaps
  284. less than 1 millisecond. While <code class="varname">vacuum_cost_delay</code>
  285. can be set to fractional-millisecond values, such delays may not be
  286. measured accurately on older platforms. On such platforms,
  287. increasing <code class="command">VACUUM</code>'s throttled resource consumption
  288. above what you get at 1ms will require changing the other vacuum cost
  289. parameters. You should, nonetheless,
  290. keep <code class="varname">vacuum_cost_delay</code> as small as your platform
  291. will consistently measure; large delays are not helpful.
  292. </p></dd><dt id="GUC-VACUUM-COST-PAGE-HIT"><span class="term"><code class="varname">vacuum_cost_page_hit</code> (<code class="type">integer</code>)
  293. <a id="id-1.6.6.7.5.5.2.1.3" class="indexterm"></a>
  294. </span></dt><dd><p>
  295. The estimated cost for vacuuming a buffer found in the shared buffer
cache. It represents the cost to lock the buffer pool, look up
the shared hash table, and scan the content of the page. The
default value is one.
  299. </p></dd><dt id="GUC-VACUUM-COST-PAGE-MISS"><span class="term"><code class="varname">vacuum_cost_page_miss</code> (<code class="type">integer</code>)
  300. <a id="id-1.6.6.7.5.5.3.1.3" class="indexterm"></a>
  301. </span></dt><dd><p>
  302. The estimated cost for vacuuming a buffer that has to be read from
disk. This represents the effort to lock the buffer pool,
look up the shared hash table, read the desired block in from
disk, and scan its content. The default value is 10.
  306. </p></dd><dt id="GUC-VACUUM-COST-PAGE-DIRTY"><span class="term"><code class="varname">vacuum_cost_page_dirty</code> (<code class="type">integer</code>)
  307. <a id="id-1.6.6.7.5.5.4.1.3" class="indexterm"></a>
  308. </span></dt><dd><p>
  309. The estimated cost charged when vacuum modifies a block that was
  310. previously clean. It represents the extra I/O required to
  311. flush the dirty block out to disk again. The default value is
  312. 20.
  313. </p></dd><dt id="GUC-VACUUM-COST-LIMIT"><span class="term"><code class="varname">vacuum_cost_limit</code> (<code class="type">integer</code>)
  314. <a id="id-1.6.6.7.5.5.5.1.3" class="indexterm"></a>
  315. </span></dt><dd><p>
  316. The accumulated cost that will cause the vacuuming process to sleep.
  317. The default value is 200.
  318. </p></dd></dl></div><div class="note"><h3 class="title">Note</h3><p>
  319. There are certain operations that hold critical locks and should
  320. therefore complete as quickly as possible. Cost-based vacuum
  321. delays do not occur during such operations. Therefore it is
  322. possible that the cost accumulates far higher than the specified
  323. limit. To avoid uselessly long delays in such cases, the actual
  324. delay is calculated as <code class="varname">vacuum_cost_delay</code> *
  325. <code class="varname">accumulated_balance</code> /
  326. <code class="varname">vacuum_cost_limit</code> with a maximum of
  327. <code class="varname">vacuum_cost_delay</code> * 4.
  328. </p></div></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-BACKGROUND-WRITER"><div class="titlepage"><div><div><h3 class="title">19.4.5. Background Writer</h3></div></div></div><p>
  329. There is a separate server
  330. process called the <em class="firstterm">background writer</em>, whose function
  331. is to issue writes of <span class="quote">“<span class="quote">dirty</span>”</span> (new or modified) shared
  332. buffers. It writes shared buffers so server processes handling
  333. user queries seldom or never need to wait for a write to occur.
  334. However, the background writer does cause a net overall
  335. increase in I/O load, because while a repeatedly-dirtied page might
  336. otherwise be written only once per checkpoint interval, the
  337. background writer might write it several times as it is dirtied
  338. in the same interval. The parameters discussed in this subsection
  339. can be used to tune the behavior for local needs.
  340. </p><div class="variablelist"><dl class="variablelist"><dt id="GUC-BGWRITER-DELAY"><span class="term"><code class="varname">bgwriter_delay</code> (<code class="type">integer</code>)
  341. <a id="id-1.6.6.7.6.3.1.1.3" class="indexterm"></a>
  342. </span></dt><dd><p>
  343. Specifies the delay between activity rounds for the
  344. background writer. In each round the writer issues writes
  345. for some number of dirty buffers (controllable by the
  346. following parameters). It then sleeps for
  347. the length of <code class="varname">bgwriter_delay</code>, and repeats.
  348. When there are no dirty buffers in the
  349. buffer pool, though, it goes into a longer sleep regardless of
  350. <code class="varname">bgwriter_delay</code>.
  351. If this value is specified without units, it is taken as milliseconds.
  352. The default value is 200
  353. milliseconds (<code class="literal">200ms</code>). Note that on many systems, the
  354. effective resolution of sleep delays is 10 milliseconds; setting
  355. <code class="varname">bgwriter_delay</code> to a value that is not a multiple of 10
  356. might have the same results as setting it to the next higher multiple
  357. of 10. This parameter can only be set in the
  358. <code class="filename">postgresql.conf</code> file or on the server command line.
  359. </p></dd><dt id="GUC-BGWRITER-LRU-MAXPAGES"><span class="term"><code class="varname">bgwriter_lru_maxpages</code> (<code class="type">integer</code>)
  360. <a id="id-1.6.6.7.6.3.2.1.3" class="indexterm"></a>
  361. </span></dt><dd><p>
  362. In each round, no more than this many buffers will be written
  363. by the background writer. Setting this to zero disables
  364. background writing. (Note that checkpoints, which are managed by
  365. a separate, dedicated auxiliary process, are unaffected.)
  366. The default value is 100 buffers.
  367. This parameter can only be set in the <code class="filename">postgresql.conf</code>
  368. file or on the server command line.
  369. </p></dd><dt id="GUC-BGWRITER-LRU-MULTIPLIER"><span class="term"><code class="varname">bgwriter_lru_multiplier</code> (<code class="type">floating point</code>)
  370. <a id="id-1.6.6.7.6.3.3.1.3" class="indexterm"></a>
  371. </span></dt><dd><p>
  372. The number of dirty buffers written in each round is based on the
  373. number of new buffers that have been needed by server processes
  374. during recent rounds. The average recent need is multiplied by
  375. <code class="varname">bgwriter_lru_multiplier</code> to arrive at an estimate of the
  376. number of buffers that will be needed during the next round. Dirty
  377. buffers are written until there are that many clean, reusable buffers
  378. available. (However, no more than <code class="varname">bgwriter_lru_maxpages</code>
  379. buffers will be written per round.)
  380. Thus, a setting of 1.0 represents a <span class="quote">“<span class="quote">just in time</span>”</span> policy
  381. of writing exactly the number of buffers predicted to be needed.
  382. Larger values provide some cushion against spikes in demand,
  383. while smaller values intentionally leave writes to be done by
  384. server processes.
  385. The default is 2.0.
  386. This parameter can only be set in the <code class="filename">postgresql.conf</code>
  387. file or on the server command line.
  388. </p></dd><dt id="GUC-BGWRITER-FLUSH-AFTER"><span class="term"><code class="varname">bgwriter_flush_after</code> (<code class="type">integer</code>)
  389. <a id="id-1.6.6.7.6.3.4.1.3" class="indexterm"></a>
  390. </span></dt><dd><p>
  391. Whenever more than this amount of data has
  392. been written by the background writer, attempt to force the OS to issue these
  393. writes to the underlying storage. Doing so will limit the amount of
  394. dirty data in the kernel's page cache, reducing the likelihood of
  395. stalls when an <code class="function">fsync</code> is issued at the end of a checkpoint, or when
  396. the OS writes data back in larger batches in the background. Often
  397. that will result in greatly reduced transaction latency, but there
  398. also are some cases, especially with workloads that are bigger than
  399. <a class="xref" href="runtime-config-resource.html#GUC-SHARED-BUFFERS">shared_buffers</a>, but smaller than the OS's page
  400. cache, where performance might degrade. This setting may have no
  401. effect on some platforms.
  402. If this value is specified without units, it is taken as blocks,
  403. that is <code class="symbol">BLCKSZ</code> bytes, typically 8kB.
  404. The valid range is between
  405. <code class="literal">0</code>, which disables forced writeback, and
  406. <code class="literal">2MB</code>. The default is <code class="literal">512kB</code> on Linux,
  407. <code class="literal">0</code> elsewhere. (If <code class="symbol">BLCKSZ</code> is not 8kB,
  408. the default and maximum values scale proportionally to it.)
  409. This parameter can only be set in the <code class="filename">postgresql.conf</code>
  410. file or on the server command line.
  411. </p></dd></dl></div><p>
  412. Smaller values of <code class="varname">bgwriter_lru_maxpages</code> and
  413. <code class="varname">bgwriter_lru_multiplier</code> reduce the extra I/O load
  414. caused by the background writer, but make it more likely that server
  415. processes will have to issue writes for themselves, delaying interactive
  416. queries.
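</p><p>
As a hedged illustration, an installation that wants the background writer to
absorb more of the write load might try something like the following (the
values are assumptions to be validated against monitoring, not recommendations):
</p><pre class="programlisting">
-- Illustrative: make the background writer more aggressive.
ALTER SYSTEM SET bgwriter_delay = '100ms';
ALTER SYSTEM SET bgwriter_lru_maxpages = 400;
ALTER SYSTEM SET bgwriter_lru_multiplier = 4.0;
SELECT pg_reload_conf();  -- none of these settings requires a restart
</pre><p>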
  417. </p></div><div class="sect2" id="RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR"><div class="titlepage"><div><div><h3 class="title">19.4.6. Asynchronous Behavior</h3></div></div></div><div class="variablelist"><dl class="variablelist"><dt id="GUC-EFFECTIVE-IO-CONCURRENCY"><span class="term"><code class="varname">effective_io_concurrency</code> (<code class="type">integer</code>)
  418. <a id="id-1.6.6.7.7.2.1.1.3" class="indexterm"></a>
  419. </span></dt><dd><p>
  420. Sets the number of concurrent disk I/O operations that
  421. <span class="productname">PostgreSQL</span> expects can be executed
  422. simultaneously. Raising this value will increase the number of I/O
  423. operations that any individual <span class="productname">PostgreSQL</span> session
  424. attempts to initiate in parallel. The allowed range is 1 to 1000,
  425. or zero to disable issuance of asynchronous I/O requests. Currently,
  426. this setting only affects bitmap heap scans.
  427. </p><p>
  428. For magnetic drives, a good starting point for this setting is the
  429. number of separate
  430. drives comprising a RAID 0 stripe or RAID 1 mirror being used for the
  431. database. (For RAID 5 the parity drive should not be counted.)
  432. However, if the database is often busy with multiple queries issued in
  433. concurrent sessions, lower values may be sufficient to keep the disk
  434. array busy. A value higher than needed to keep the disks busy will
  435. only result in extra CPU overhead.
  436. SSDs and other memory-based storage can often process many
  437. concurrent requests, so the best value might be in the hundreds.
  438. </p><p>
  439. Asynchronous I/O depends on an effective <code class="function">posix_fadvise</code>
  440. function, which some operating systems lack. If the function is not
  441. present then setting this parameter to anything but zero will result
  442. in an error. On some operating systems (e.g., Solaris), the function
  443. is present but does not actually do anything.
  444. </p><p>
  445. The default is 1 on supported systems, otherwise 0. This value can
  446. be overridden for tables in a particular tablespace by setting the
  447. tablespace parameter of the same name (see
  448. <a class="xref" href="sql-altertablespace.html" title="ALTER TABLESPACE"><span class="refentrytitle">ALTER TABLESPACE</span></a>).
  449. </p></dd><dt id="GUC-MAX-WORKER-PROCESSES"><span class="term"><code class="varname">max_worker_processes</code> (<code class="type">integer</code>)
  450. <a id="id-1.6.6.7.7.2.2.1.3" class="indexterm"></a>
  451. </span></dt><dd><p>
  452. Sets the maximum number of background processes that the system
  453. can support. This parameter can only be set at server start. The
  454. default is 8.
  455. </p><p>
When running a standby server, you must set this parameter to a value
equal to or higher than its value on the master server. Otherwise, queries
will not be allowed on the standby server.
  459. </p><p>
  460. When changing this value, consider also adjusting
  461. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS">max_parallel_workers</a>,
  462. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-MAINTENANCE">max_parallel_maintenance_workers</a>, and
  463. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER">max_parallel_workers_per_gather</a>.
  464. </p></dd><dt id="GUC-MAX-PARALLEL-WORKERS-PER-GATHER"><span class="term"><code class="varname">max_parallel_workers_per_gather</code> (<code class="type">integer</code>)
  465. <a id="id-1.6.6.7.7.2.3.1.3" class="indexterm"></a>
  466. </span></dt><dd><p>
  467. Sets the maximum number of workers that can be started by a single
  468. <code class="literal">Gather</code> or <code class="literal">Gather Merge</code> node.
  469. Parallel workers are taken from the pool of processes established by
  470. <a class="xref" href="runtime-config-resource.html#GUC-MAX-WORKER-PROCESSES">max_worker_processes</a>, limited by
  471. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS">max_parallel_workers</a>. Note that the requested
  472. number of workers may not actually be available at run time. If this
  473. occurs, the plan will run with fewer workers than expected, which may
  474. be inefficient. The default value is 2. Setting this value to 0
  475. disables parallel query execution.
  476. </p><p>
  477. Note that parallel queries may consume very substantially more
  478. resources than non-parallel queries, because each worker process is
  479. a completely separate process which has roughly the same impact on the
  480. system as an additional user session. This should be taken into
  481. account when choosing a value for this setting, as well as when
  482. configuring other settings that control resource utilization, such
  483. as <a class="xref" href="runtime-config-resource.html#GUC-WORK-MEM">work_mem</a>. Resource limits such as
  484. <code class="varname">work_mem</code> are applied individually to each worker,
  485. which means the total utilization may be much higher across all
  486. processes than it would normally be for any single process.
  487. For example, a parallel query using 4 workers may use up to 5 times
  488. as much CPU time, memory, I/O bandwidth, and so forth as a query which
  489. uses no workers at all.
  490. </p><p>
  491. For more information on parallel query, see
  492. <a class="xref" href="parallel-query.html" title="Chapter 15. Parallel Query">Chapter 15</a>.
  493. </p></dd><dt id="GUC-MAX-PARALLEL-WORKERS-MAINTENANCE"><span class="term"><code class="varname">max_parallel_maintenance_workers</code> (<code class="type">integer</code>)
  494. <a id="id-1.6.6.7.7.2.4.1.3" class="indexterm"></a>
  495. </span></dt><dd><p>
  496. Sets the maximum number of parallel workers that can be
  497. started by a single utility command. Currently, the only
  498. parallel utility command that supports the use of parallel
  499. workers is <code class="command">CREATE INDEX</code>, and only when
  500. building a B-tree index. Parallel workers are taken from the
  501. pool of processes established by <a class="xref" href="runtime-config-resource.html#GUC-MAX-WORKER-PROCESSES">max_worker_processes</a>, limited by <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS">max_parallel_workers</a>. Note that the requested
  502. number of workers may not actually be available at run time.
  503. If this occurs, the utility operation will run with fewer
  504. workers than expected. The default value is 2. Setting this
  505. value to 0 disables the use of parallel workers by utility
  506. commands.
  507. </p><p>
  508. Note that parallel utility commands should not consume
  509. substantially more memory than equivalent non-parallel
  510. operations. This strategy differs from that of parallel
  511. query, where resource limits generally apply per worker
  512. process. Parallel utility commands treat the resource limit
  513. <code class="varname">maintenance_work_mem</code> as a limit to be applied to
  514. the entire utility command, regardless of the number of
  515. parallel worker processes. However, parallel utility
  516. commands may still consume substantially more CPU resources
  517. and I/O bandwidth.
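</p><p>
For example, to speed up one large B-tree build in a single session
(illustrative values; the table and index names are hypothetical):
</p><pre class="programlisting">
-- Illustrative: more parallel workers and memory for one index build.
SET max_parallel_maintenance_workers = 4;
SET maintenance_work_mem = '2GB';  -- shared by the whole command, not per worker
CREATE INDEX orders_created_at_idx ON orders (created_at);  -- hypothetical
</pre><p>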
  518. </p></dd><dt id="GUC-MAX-PARALLEL-WORKERS"><span class="term"><code class="varname">max_parallel_workers</code> (<code class="type">integer</code>)
  519. <a id="id-1.6.6.7.7.2.5.1.3" class="indexterm"></a>
  520. </span></dt><dd><p>
  521. Sets the maximum number of workers that the system can support for
  522. parallel operations. The default value is 8. When increasing or
  523. decreasing this value, consider also adjusting
  524. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-MAINTENANCE">max_parallel_maintenance_workers</a> and
  525. <a class="xref" href="runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER">max_parallel_workers_per_gather</a>.
  526. Also, note that a setting for this value which is higher than
  527. <a class="xref" href="runtime-config-resource.html#GUC-MAX-WORKER-PROCESSES">max_worker_processes</a> will have no effect,
  528. since parallel workers are taken from the pool of worker processes
  529. established by that setting.
  530. </p></dd><dt id="GUC-BACKEND-FLUSH-AFTER"><span class="term"><code class="varname">backend_flush_after</code> (<code class="type">integer</code>)
  531. <a id="id-1.6.6.7.7.2.6.1.3" class="indexterm"></a>
  532. </span></dt><dd><p>
  533. Whenever more than this amount of data has
  534. been written by a single backend, attempt to force the OS to issue
  535. these writes to the underlying storage. Doing so will limit the
  536. amount of dirty data in the kernel's page cache, reducing the
  537. likelihood of stalls when an <code class="function">fsync</code> is issued at the end of a
  538. checkpoint, or when the OS writes data back in larger batches in the
  539. background. Often that will result in greatly reduced transaction
  540. latency, but there also are some cases, especially with workloads
  541. that are bigger than <a class="xref" href="runtime-config-resource.html#GUC-SHARED-BUFFERS">shared_buffers</a>, but smaller
  542. than the OS's page cache, where performance might degrade. This
  543. setting may have no effect on some platforms.
  544. If this value is specified without units, it is taken as blocks,
  545. that is <code class="symbol">BLCKSZ</code> bytes, typically 8kB.
  546. The valid range is
  547. between <code class="literal">0</code>, which disables forced writeback,
  548. and <code class="literal">2MB</code>. The default is <code class="literal">0</code>, i.e., no
  549. forced writeback. (If <code class="symbol">BLCKSZ</code> is not 8kB,
  550. the maximum value scales proportionally to it.)
  551. </p></dd><dt id="GUC-OLD-SNAPSHOT-THRESHOLD"><span class="term"><code class="varname">old_snapshot_threshold</code> (<code class="type">integer</code>)
  552. <a id="id-1.6.6.7.7.2.7.1.3" class="indexterm"></a>
  553. </span></dt><dd><p>
  554. Sets the minimum amount of time that a query snapshot can be used
  555. without risk of a <span class="quote">“<span class="quote">snapshot too old</span>”</span> error occurring
  556. when using the snapshot. Data that has been dead for longer than
  557. this threshold is allowed to be vacuumed away. This can help
  558. prevent bloat in the face of snapshots which remain in use for a
  559. long time. To prevent incorrect results due to cleanup of data which
  560. would otherwise be visible to the snapshot, an error is generated
  561. when the snapshot is older than this threshold and the snapshot is
  562. used to read a page which has been modified since the snapshot was
  563. built.
  564. </p><p>
  565. If this value is specified without units, it is taken as minutes.
  566. A value of <code class="literal">-1</code> (the default) disables this feature,
  567. effectively setting the snapshot age limit to infinity.
  568. This parameter can only be set at server start.
  569. </p><p>
  570. Useful values for production work probably range from a small number
  571. of hours to a few days. Small values (such as <code class="literal">0</code> or
  572. <code class="literal">1min</code>) are only allowed because they may sometimes be
  573. useful for testing. While a setting as high as <code class="literal">60d</code> is
  574. allowed, please note that in many workloads extreme bloat or
  575. transaction ID wraparound may occur in much shorter time frames.
  576. </p><p>
  577. When this feature is enabled, freed space at the end of a relation
  578. cannot be released to the operating system, since that could remove
  579. information needed to detect the <span class="quote">“<span class="quote">snapshot too old</span>”</span>
  580. condition. All space allocated to a relation remains associated with
  581. that relation for reuse only within that relation unless explicitly
  582. freed (for example, with <code class="command">VACUUM FULL</code>).
  583. </p><p>
  584. This setting does not attempt to guarantee that an error will be
  585. generated under any particular circumstances. In fact, if the
  586. correct results can be generated from (for example) a cursor which
  587. has materialized a result set, no error will be generated even if the
  588. underlying rows in the referenced table have been vacuumed away.
  589. Some tables cannot safely be vacuumed early, and so will not be
  590. affected by this setting, such as system catalogs. For such tables
  591. this setting will neither reduce bloat nor create a possibility
  592. of a <span class="quote">“<span class="quote">snapshot too old</span>”</span> error on scanning.
  593. </p></dd></dl></div></div></div><div class="navfooter"><hr /><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="runtime-config-connection.html">Prev</a> </td><td width="20%" align="center"><a accesskey="u" href="runtime-config.html">Up</a></td><td width="40%" align="right"> <a accesskey="n" href="runtime-config-wal.html">Next</a></td></tr><tr><td width="40%" align="left" valign="top">19.3. Connections and Authentication </td><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td><td width="40%" align="right" valign="top"> 19.5. Write Ahead Log</td></tr></table></div></body></html>