<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>26.4. Alternative Method for Log Shipping</title><link rel="stylesheet" type="text/css" href="stylesheet.css" /><link rev="made" href="pgsql-docs@lists.postgresql.org" /><meta name="generator" content="DocBook XSL Stylesheets V1.79.1" /><link rel="prev" href="warm-standby-failover.html" title="26.3. Failover" /><link rel="next" href="hot-standby.html" title="26.5. Hot Standby" /></head><body><div xmlns="http://www.w3.org/TR/xhtml1/transitional" class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="5" align="center">26.4. Alternative Method for Log Shipping</th></tr><tr><td width="10%" align="left"><a accesskey="p" href="warm-standby-failover.html" title="26.3. Failover">Prev</a> </td><td width="10%" align="left"><a accesskey="u" href="high-availability.html" title="Chapter 26. High Availability, Load Balancing, and Replication">Up</a></td><th width="60%" align="center">Chapter 26. High Availability, Load Balancing, and Replication</th><td width="10%" align="right"><a accesskey="h" href="index.html" title="PostgreSQL 12.4 Documentation">Home</a></td><td width="10%" align="right"> <a accesskey="n" href="hot-standby.html" title="26.5. Hot Standby">Next</a></td></tr></table><hr></hr></div><div class="sect1" id="LOG-SHIPPING-ALTERNATIVE"><div class="titlepage"><div><div><h2 class="title" style="clear: both">26.4. Alternative Method for Log Shipping</h2></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect2"><a href="log-shipping-alternative.html#WARM-STANDBY-CONFIG">26.4.1. Implementation</a></span></dt><dt><span class="sect2"><a href="log-shipping-alternative.html#WARM-STANDBY-RECORD">26.4.2. Record-Based Log Shipping</a></span></dt></dl></div><p>
An alternative to the built-in standby mode described in the previous
sections is to use a <code class="varname">restore_command</code> that polls the archive location.
This was the only option available in versions 8.4 and below. See the
<a class="xref" href="pgstandby.html" title="pg_standby"><span class="refentrytitle"><span class="application">pg_standby</span></span></a> module for a reference implementation of this.
</p><p>
Note that in this mode, the server will apply WAL one file at a
time, so if you use the standby server for queries (see Hot Standby),
there is a delay between an action in the master and when the
action becomes visible in the standby, corresponding to the time it takes
to fill up the WAL file. <code class="varname">archive_timeout</code> can be used to make that delay
shorter. Also note that you can't combine streaming replication with
this method.
</p><p>
The operations that occur on both primary and standby servers are
normal continuous archiving and recovery tasks. The only point of
contact between the two database servers is the archive of WAL files
that both share: primary writing to the archive, standby reading from
the archive. Care must be taken to ensure that WAL archives from separate
primary servers do not become mixed together or confused. The archive
need not be large if it is only required for standby operation.
</p><p>
The magic that makes the two loosely coupled servers work together is
simply a <code class="varname">restore_command</code> used on the standby that,
when asked for the next WAL file, waits for it to become available from
the primary. Normal recovery
processing would request a file from the WAL archive, reporting failure
if the file was unavailable. For standby processing it is normal for
the next WAL file to be unavailable, so the standby must wait for
it to appear. For files ending in
<code class="literal">.history</code> there is no need to wait, and a non-zero return
code must be returned. A waiting <code class="varname">restore_command</code> can be
written as a custom script that loops after polling for the existence of
the next WAL file. There must also be some way to trigger failover, which
should interrupt the <code class="varname">restore_command</code>, break the loop and
return a file-not-found error to the standby server. This ends recovery
and the standby will then come up as a normal server.
</p><p>
Pseudocode for a suitable <code class="varname">restore_command</code> is:
</p><pre class="programlisting">
triggered = false;
while (!NextWALFileReady() &amp;&amp; !triggered)
{
    sleep(100000L);          /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
</pre><p>
</p><p>
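A minimal shell sketch of such a waiting <code class="varname">restore_command</code> follows. The function name, the trigger-file location, and the <code class="literal">TRIGGER_FILE</code> override are illustrative assumptions, not part of PostgreSQL; <span class="application">pg_standby</span> remains the reference implementation for production use.

```shell
# Hypothetical waiting restore_command, written as a shell function for clarity.
# Arguments: archive directory, requested WAL file name, destination path.
restore_wait() {
    archive="$1"; walfile="$2"; dest="$3"
    trigger="${TRIGGER_FILE:-/tmp/pgsql.trigger}"   # hypothetical failover trigger file

    case "$walfile" in
        *.history)
            # Never wait for .history files: copy if present, otherwise
            # fail immediately with a non-zero return code.
            cp "$archive/$walfile" "$dest" 2>/dev/null
            return $?
            ;;
    esac

    while :; do
        # The trigger file breaks the loop with a file-not-found error,
        # which ends recovery and brings the standby up as a normal server.
        [ -f "$trigger" ] && return 1
        if [ -f "$archive/$walfile" ]; then
            cp "$archive/$walfile" "$dest"
            return $?
        fi
        sleep 0.1    # poll roughly ten times per second, as in the pseudocode
    done
}
```

Deployed as a standalone script, this would be wired up as something like <code class="literal">restore_command = 'restore_wait.sh /mnt/archive %f %p'</code> (script name and path hypothetical).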
A working example of a waiting <code class="varname">restore_command</code> is provided
in the <a class="xref" href="pgstandby.html" title="pg_standby"><span class="refentrytitle"><span class="application">pg_standby</span></span></a> module. It
should be used as a reference on how to correctly implement the logic
described above. It can also be extended as needed to support specific
configurations and environments.
</p><p>
The method for triggering failover is an important part of planning
and design. One potential option is the <code class="varname">restore_command</code>
command. It is executed once for each WAL file, but the process
running the <code class="varname">restore_command</code> is created and dies for
each file, so there is no daemon or server process, and
signals or a signal handler cannot be used. Therefore, the
<code class="varname">restore_command</code> is not suitable to trigger failover.
It is possible to use a simple timeout facility, especially if
used in conjunction with a known <code class="varname">archive_timeout</code>
setting on the primary. However, this is somewhat error prone
since a network problem or busy primary server might be sufficient
to initiate failover. A notification mechanism such as the explicit
creation of a trigger file is ideal, if this can be arranged.
</p><div class="sect2" id="WARM-STANDBY-CONFIG"><div class="titlepage"><div><div><h3 class="title">26.4.1. Implementation</h3></div></div></div><p>
The short procedure for configuring a standby server using this alternative
method is as follows. For
full details of each step, refer to previous sections as noted.
</p><div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem"><p>
Set up primary and standby systems as nearly identical as
possible, including two identical copies of
<span class="productname">PostgreSQL</span> at the same release level.
</p></li><li class="listitem"><p>
Set up continuous archiving from the primary to a WAL archive
directory on the standby server. Ensure that
<a class="xref" href="runtime-config-wal.html#GUC-ARCHIVE-MODE">archive_mode</a>,
<a class="xref" href="runtime-config-wal.html#GUC-ARCHIVE-COMMAND">archive_command</a> and
<a class="xref" href="runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT">archive_timeout</a>
are set appropriately on the primary
(see <a class="xref" href="continuous-archiving.html#BACKUP-ARCHIVING-WAL" title="25.3.1. Setting Up WAL Archiving">Section 25.3.1</a>).
</p></li><li class="listitem"><p>
Make a base backup of the primary server (see <a class="xref" href="continuous-archiving.html#BACKUP-BASE-BACKUP" title="25.3.2. Making a Base Backup">Section 25.3.2</a>), and load this data onto the standby.
</p></li><li class="listitem"><p>
Begin recovery on the standby server from the local WAL
archive, using a <code class="varname">restore_command</code> that waits
as described previously (see <a class="xref" href="continuous-archiving.html#BACKUP-PITR-RECOVERY" title="25.3.4. Recovering Using a Continuous Archive Backup">Section 25.3.4</a>).
</p></li></ol></div><p>
</p><p>
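Taken together, the steps above amount to configuration along the following lines. The archive directory, script name, and timeout value are illustrative assumptions, not defaults:

```
# On the primary (postgresql.conf); paths are examples only
archive_mode = on
archive_command = 'test ! -f /mnt/standby_archive/%f && cp %p /mnt/standby_archive/%f'
archive_timeout = 60    # force a WAL file switch at least once a minute

# On the standby (postgresql.conf)
restore_command = 'wait_for_wal.sh /mnt/standby_archive %f %p'   # a waiting script as described above
```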
Recovery treats the WAL archive as read-only, so once a WAL file has
been copied to the standby system it can be copied to tape at the same
time as it is being read by the standby database server.
Thus, running a standby server for high availability can be performed at
the same time as files are stored for longer term disaster recovery
purposes.
</p><p>
For testing purposes, it is possible to run both primary and standby
servers on the same system. This does not provide any worthwhile
improvement in server robustness, nor would it be described as HA.
</p></div><div class="sect2" id="WARM-STANDBY-RECORD"><div class="titlepage"><div><div><h3 class="title">26.4.2. Record-Based Log Shipping</h3></div></div></div><p>
It is also possible to implement record-based log shipping using this
alternative method, though this requires custom development, and changes
will still only become visible to hot standby queries after a full WAL
file has been shipped.
</p><p>
An external program can call the <code class="function">pg_walfile_name_offset()</code>
function (see <a class="xref" href="functions-admin.html" title="9.26. System Administration Functions">Section 9.26</a>)
to find out the file name and the exact byte offset within it of
the current end of WAL. It can then access the WAL file directly
and copy the data from the last known end of WAL through the current end
over to the standby servers. With this approach, the window for data
loss is the polling cycle time of the copying program, which can be very
small, and there is no wasted bandwidth from forcing partially-used
segment files to be archived. Note that the standby servers'
<code class="varname">restore_command</code> scripts can only deal with whole WAL files,
so the incrementally copied data is not ordinarily made available to
the standby servers. It is of use only when the primary dies —
then the last partial WAL file is fed to the standby before allowing
it to come up. The correct implementation of this process requires
cooperation of the <code class="varname">restore_command</code> script with the data
copying program.
</p><p>
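On each polling cycle, the copying program could locate the current end of WAL on the primary with a query such as the following; both functions are standard administration functions in this release:

```
SELECT file_name, file_offset
  FROM pg_walfile_name_offset(pg_current_wal_lsn());
```

The program would then copy the bytes between the previously recorded offset and <code class="literal">file_offset</code> out of the named segment file.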
Starting with <span class="productname">PostgreSQL</span> version 9.0, you can use
streaming replication (see <a class="xref" href="warm-standby.html#STREAMING-REPLICATION" title="26.2.5. Streaming Replication">Section 26.2.5</a>) to
achieve the same benefits with less effort.
</p></div></div><div class="navfooter"><hr /><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="warm-standby-failover.html">Prev</a> </td><td width="20%" align="center"><a accesskey="u" href="high-availability.html">Up</a></td><td width="40%" align="right"> <a accesskey="n" href="hot-standby.html">Next</a></td></tr><tr><td width="40%" align="left" valign="top">26.3. Failover </td><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td><td width="40%" align="right" valign="top"> 26.5. Hot Standby</td></tr></table></div></body></html>