[Linux-ha-jp] Implementing a Heartbeat+SFEX configuration


Amano Yu arrivederci_yu****@me*****
Fri, 30 Apr 2010 12:25:54 JST


Nakahira-san,

Nice to meet you. This is Amano.

Thank you very much for your reply.

When I checked, I found that I had not initialized the device with sfex_init.
I had also failed to notice that the initialization procedure is described in a note inside the resource agent script.

After running the initialization, Heartbeat started working correctly and I have been able to begin my verification.

Thank you for your thorough answer.

Best regards.


>
>To: Amano-san
>
>Nice to meet you. My name is Nakahira.
>
>"sfex_lock: ERROR: magic number mismatched."
>
>Regarding this error: it is output when the SFEX commands try to
>read the resource exclusive-control (lock) data stored on the
>shared disk and cannot read valid data.
>
>As for the cause, it is most likely one of two things: either the
>device specified for reading and writing the SFEX lock data is
>wrong, or the initialization of the lock data failed.
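>
>(As an additional low-level check: assuming the standard dd and od
>tools are available, and assuming the SFEX control block is written
>at the very beginning of the device, you can dump the first bytes
>of the partition directly. If the area has been initialized, the
>dump should start with the magic number 0x01 0x1f 0x71 0x7f that
>also appears in the sfex_stat output in step 2 below.)
>
># dd if=/dev/sdc1 bs=512 count=1 2>/dev/null | od -A x -t x1 | head -4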
>
>Just to be safe, please check the following two points and then try
>starting Heartbeat again.
>
>1. Is the lock data being written to the right place?
> Is the device specified in the following line of the sfex resource
> parameters really the location you intended for the lock data?
>
><nvpair id="prmExPostgreSQLDB-instance_attributes-device" name="device"
>value="/dev/sdc1"/>
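>
>(One pitfall worth ruling out here, as a hedged suggestion: with
>iSCSI, the same LUN is not guaranteed to appear under the same /dev
>name on every node, so /dev/sdc1 on one node may not be the device
>you expect on the other. Comparing the partition size on both nodes
>is a cheap sanity check; blockdev is part of util-linux and should
>be available on CentOS 5.)
>
># blockdev --getsize64 /dev/sdc1
>
>(Run this on both nodes; the two values should match.)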
>
>2. Has the lock data been initialized?
> If the device above is indeed the intended location for the lock
> data, run the following command and check whether the lock data
> has been initialized correctly.
>
># /usr/lib64/heartbeat/sfex_stat -i 1 /dev/sdc1
>control data:
>  magic: 0x01, 0x1f, 0x71, 0x7f
>  version: 1
>  revision: 30
>  blocksize: 512
>  numlocks: 1
>lock data #1:
>  status: unlock
>  count: 0
>  nodename:
>sfex_stat: status is UNLOCKED.
>
> The information shown varies slightly depending on the options
> that were given when the lock data was initialized, but if you get
> output like the above, the lock data has been initialized
> correctly.
>
> Conversely, if the following error message is output, the lock
> data has not been initialized correctly:
>
>sfex_stat: ERROR: magic number mismatched.
>
> In that case, run the following command to initialize the SFEX
> lock data:
>
># /usr/lib64/heartbeat/sfex_init /dev/sdc1
>* As a precaution, please confirm that no important data is stored
>  on the partition you are about to initialize. Once a partition
>  has been initialized with sfex_init, recovering any data that was
>  previously stored on it will be very difficult.
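>
>(Once the initialization has finished, it is probably worth
>re-running the status command from step 2 to confirm that the magic
>number is now in place. Since the lock data lives on the shared
>disk, sfex_init should only need to be run once, from either node.)
>
># /usr/lib64/heartbeat/sfex_stat -i 1 /dev/sdc1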
>
>That's all.
>Best regards.
>
>(2010/04/28 18:14), Amano Yu wrote:
>> Nice to meet you. This is Amano.
>> 
>> I am currently verifying an HA clustering environment for a
>> PostgreSQL server built with Heartbeat+SFEX. I have set up the
>> configuration below in a VMware virtual environment on my local
>> PC, but SFEX fails to operate because of the error shown below,
>> and I am stuck.
>> My apologies for the rudimentary question, but I would greatly
>> appreciate your guidance.
>> 
>> Environment:
>>   CentOS 5.4
>>   Heartbeat 2.1.4
>>   Nodes: node01 and node02 (two-node configuration)
>>   Shared disk: accessible from both nodes via iSCSI
>> 
>> Heartbeat configuration (cib.xml)
>> ------------------
>>   <cib admin_epoch="0" epoch="0" num_updates="0">
>>     <configuration>
>>       <crm_config>
>>         <cluster_property_set id="cib-bootstrap-options">
>>           <attributes>
>>             <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.4-Unknown"/>
>>             <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
>>             <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
>>             <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY"/>
>>             <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="-INFINITY"/>
>>             <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="120"/>
>>             <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="true"/>
>>           </attributes>
>>         </cluster_property_set>
>>       </crm_config>
>>       <nodes>
>>         <node id="0fd126d0-5276-4c0a-9a0d-ecf8048e5203" uname="linux_ha_01" type="normal"/>
>>         <node id="7deb3c18-3d0c-4c76-8488-3d2057a2e638" uname="linux_ha_02" type="normal"/>
>>       </nodes>
>>       <resources>
>> 
>>       <group id="grpPostgreSQLDB">
>>         <instance_attributes id="grpPostgreSQLDB-instance_attributes">
>>           <attributes/>
>>         </instance_attributes>
>>         <primitive class="ocf" id="prmExPostgreSQLDB" provider="heartbeat" type="sfex">
>>           <instance_attributes id="prmExPostgreSQLDB-instance_attributes">
>>             <attributes>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-device" name="device" value="/dev/sdc1"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-index" name="index" value="1"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-collision_timeout" name="collision_timeout" value="1"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-lock_timeout" name="lock_timeout" value="100"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-monitor_interval" name="monitor_interval" value="10"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-fsck" name="fsck" value="/sbin/fsck -p /dev/sdb1"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-fsck_mode" name="fsck_mode" value="check"/>
>>               <nvpair id="prmExPostgreSQLDB-instance_attributes-halt" name="halt" value="/sbin/halt -f -n -p"/>
>>             </attributes>
>>           </instance_attributes>
>>           <operations>
>>             <op id="prmExPostgreSQLDB-start" name="start" on_fail="restart" timeout="300s" prereq="fencing"/>
>>             <op id="prmExPostgreSQLDB-stop" name="stop" on_fail="fence" timeout="60s"/>
>>             <op id="prmExPostgreSQLDB-monitor" interval="10s" name="monitor" on_fail="restart" timeout="90s"/>
>>           </operations>
>>         </primitive>
>>         <primitive class="ocf" id="prmFsPostgreSQLDB1" provider="heartbeat" type="Filesystem">
>>           <instance_attributes id="prmFsPostgreSQLDB1-instance_attributes">
>>             <attributes>
>>               <nvpair id="prmFsPostgreSQLDB1-instance_attributes-fstype" name="fstype" value="ext3"/>
>>               <nvpair id="prmFsPostgreSQLDB1-instance_attributes-device" name="device" value="/dev/sdb1"/>
>>               <nvpair id="prmFsPostgreSQLDB1-instance_attributes-directory" name="directory" value="/iscsi"/>
>>             </attributes>
>>           </instance_attributes>
>>           <operations>
>>             <op id="prmFsPostgreSQLDB1-start" name="start" on_fail="restart" timeout="60s" prereq="fencing"/>
>>             <op id="prmFsPostgreSQLDB1-stop" name="stop" on_fail="fence" timeout="60s"/>
>>             <op id="prmFsPostgreSQLDB1-monitor" interval="10s" name="monitor" on_fail="restart" timeout="60s"/>
>>           </operations>
>>         </primitive>
>>         <primitive class="ocf" id="prmIpPostgreSQLDB" provider="heartbeat" type="IPaddr">
>>           <instance_attributes id="prmIpPostgreSQLDB-instance_attributes">
>>             <attributes>
>>               <nvpair id="prmIpPostgreSQLDB-instance_attributes-ip" name="ip" value="172.25.5.61"/>
>>               <nvpair id="prmIpPostgreSQLDB-instance_attributes-nic" name="nic" value="eth0"/>
>>               <nvpair id="prmIpPostgreSQLDB-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
>>             </attributes>
>>           </instance_attributes>
>>           <operations>
>>             <op id="prmIpPostgreSQLDB-start" name="start" on_fail="restart" timeout="60s" prereq="fencing"/>
>>             <op id="prmIpPostgreSQLDB-stop" name="stop" on_fail="fence" timeout="60s"/>
>>             <op id="prmIpPostgreSQLDB-monitor" interval="10s" name="monitor" on_fail="restart" timeout="60s"/>
>>           </operations>
>>         </primitive>
>>         <primitive class="ocf" id="prmApPostgreSQLDB" provider="heartbeat" type="pgsql">
>>           <instance_attributes id="prmApPostgreSQLDB-instance_attributes">
>>             <attributes>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-pgctl" name="pgctl" value="/usr/local/pgsql/bin/pg_ctl"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-start_opt" name="start_opt" value="-p 5432 -h 172.25.5.61"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-psql" name="psql" value="/usr/local/pgsql/bin/psql"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-pgdata" name="pgdata" value="/data/pgdata"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-pgdba" name="pgdba" value="postgres"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-pgport" name="pgport" value="5432"/>
>>               <nvpair id="prmApPostgreSQLDB-instance_attributes-pgdb" name="pgdb" value="template1"/>
>>             </attributes>
>>           </instance_attributes>
>>           <operations>
>>             <op id="prmApPostgreSQLDB-start" name="start" on_fail="restart" timeout="300s" prereq="fencing"/>
>>             <op id="prmApPostgreSQLDB-stop" name="stop" on_fail="fence" timeout="300s"/>
>>             <op id="prmApPostgreSQLDB-monitor" interval="10s" name="monitor" on_fail="restart" timeout="60s"/>
>>           </operations>
>>         </primitive>
>>       </group>
>> 
>>       </resources>
>> 
>>       <constraints>
>> 
>>         <rsc_location id="grpPostgreSQLDB-node1" rsc="grpPostgreSQLDB">
>>           <rule id="grpPostgreSQLDB-node1-rule:1" score="200">
>>           <expression attribute="linux_ha_01" id="grpPostgreSQLDB-node1-rule:1-expression:1" operation="eq" value="x3650g"/>
>>           </rule>
>>         </rsc_location>
>> 
>>         <rsc_location id="grpPostgreSQLDB-node2" rsc="grpPostgreSQLDB">
>>           <rule id="grpPostgreSQLDB-node2-rule:1" score="100">
>>           <expression attribute="linux_ha_02" id="grpPostgreSQLDB-node2-rule:1-expression:1" operation="eq" value="x3650h"/>
>>           </rule>
>>         </rsc_location>
>> 
>>       </constraints>
>>     </configuration>
>>     <status/>
>>   </cib>
>> ------------------
>> 
>> Error output (ha-debug)
>> * My understanding is that SFEX cannot start because of the error
>> "sfex_lock: ERROR: magic number mismatched.", but I do not know
>> how to resolve it.
>> 
>> ------------------
>> crmd[4877]: 2010/04/28_17:31:23 info: crmd_init: Starting crmd
>> crmd[4877]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 15
>> crmd[4877]: 2010/04/28_17:31:23 info: G_main_add_TriggerHandler: Added signal manual handler
>> crmd[4877]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 17
>> heartbeat[4878]: 2010/04/28_17:31:23 info: Starting "/usr/lib/heartbeat/mgmtd -v" as uid 0  gid 0 (pid 4878)
>> mgmtd[4878]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 15
>> mgmtd[4878]: 2010/04/28_17:31:23 debug: Enabling coredumps
>> mgmtd[4878]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 10
>> mgmtd[4878]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 12
>> mgmtd[4878]: 2010/04/28_17:31:23 WARN: lrm_signon: can not initiate connection
>> mgmtd[4878]: 2010/04/28_17:31:23 info: login to lrm: 0, ret:0
>> ccm[4872]: 2010/04/28_17:31:23 info: Hostname: linux_ha_01
>> attrd[4876]: 2010/04/28_17:31:23 info: register_with_ha: UUID: 0fd126d0-5276-4c0a-9a0d-ecf8048e5203
>> lrmd[4874]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 15
>> lrmd[4874]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 17
>> lrmd[4874]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 10
>> lrmd[4874]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 12
>> lrmd[4874]: 2010/04/28_17:31:23 info: Started.
>> stonithd[4875]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 10
>> stonithd[4875]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 12
>> stonithd[4875]: 2010/04/28_17:31:23 info: Signing in with heartbeat.
>> stonithd[4875]: 2010/04/28_17:31:23 notice: /usr/lib/heartbeat/stonithd start up successfully.
>> stonithd[4875]: 2010/04/28_17:31:23 info: G_main_add_SignalHandler: Added signal handler for signal 17
>> mgmtd[4878]: 2010/04/28_17:31:24 info: init_crm
>> cib[4873]: 2010/04/28_17:31:26 info: ccm_connect: Registering with CCM...
>> cib[4873]: 2010/04/28_17:31:26 WARN: ccm_connect: CCM Activation failed
>> cib[4873]: 2010/04/28_17:31:26 WARN: ccm_connect: CCM Connection failed 2 times (30 max)
>> cib[4873]: 2010/04/28_17:31:29 info: ccm_connect: Registering with CCM...
>> cib[4873]: 2010/04/28_17:31:29 WARN: ccm_connect: CCM Activation failed
>> cib[4873]: 2010/04/28_17:31:29 WARN: ccm_connect: CCM Connection failed 3 times (30 max)
>> ccm[4872]: 2010/04/28_17:31:29 info: G_main_add_SignalHandler: Added signal handler for signal 15
>> heartbeat[4864]: 2010/04/28_17:31:30 WARN: 1 lost packet(s) for [linux_ha_02] [14:16]
>> heartbeat[4864]: 2010/04/28_17:31:30 info: No pkts missing from linux_ha_02!
>> cib[4873]: 2010/04/28_17:31:32 info: ccm_connect: Registering with CCM...
>> heartbeat[4864]: 2010/04/28_17:31:32 WARN: 1 lost packet(s) for [linux_ha_02] [19:21]
>> heartbeat[4864]: 2010/04/28_17:31:32 info: No pkts missing from linux_ha_02!
>> cib[4873]: 2010/04/28_17:31:32 info: cib_init: Starting cib mainloop
>> cib[4884]: 2010/04/28_17:31:32 WARN: validate_cib_digest: No on-disk digest present
>> cib[4884]: 2010/04/28_17:31:32 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4884]: 2010/04/28_17:31:32 WARN: validate_cib_digest: No on-disk digest present
>> cib[4873]: 2010/04/28_17:31:32 info: cib_null_callback: Setting cib_refresh_notify callbacks for crmd: on
>> crmd[4877]: 2010/04/28_17:31:32 info: do_cib_control: CIB connection established
>> cib[4873]: 2010/04/28_17:31:32 info: cib_client_status_callback: Status update: Client linux_ha_01/cib now has status [join]
>> cib[4873]: 2010/04/28_17:31:32 info: cib_client_status_callback: Status update: Client linux_ha_02/cib now has status [join]
>> cib[4873]: 2010/04/28_17:31:32 info: cib_client_status_callback: Status update: Client linux_ha_01/cib now has status [online]
>> crmd[4877]: 2010/04/28_17:31:32 info: register_with_ha: Hostname: linux_ha_01
>> cib[4873]: 2010/04/28_17:31:32 info: cib_null_callback: Setting cib_diff_notify callbacks for mgmtd: on
>> cib[4884]: 2010/04/28_17:31:32 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4884]: 2010/04/28_17:31:32 WARN: validate_cib_digest: No on-disk digest present
>> cib[4884]: 2010/04/28_17:31:32 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> cib[4884]: 2010/04/28_17:31:32 WARN: validate_cib_digest: No on-disk digest present
>> cib[4884]: 2010/04/28_17:31:32 info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: e957bd80c42e50fe1768dc3ab0b2ae2d)
>> cib[4884]: 2010/04/28_17:31:32 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4884]: 2010/04/28_17:31:32 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> cib[4884]: 2010/04/28_17:31:32 WARN: validate_cib_digest: No on-disk digest present
>> cib[4873]: 2010/04/28_17:31:33 info: cib_client_status_callback: Status update: Client linux_ha_02/cib now has status [online]
>> crmd[4877]: 2010/04/28_17:31:33 info: register_with_ha: UUID: 0fd126d0-5276-4c0a-9a0d-ecf8048e5203
>> mgmtd[4878]: 2010/04/28_17:31:33 debug: main: run the loop...
>> mgmtd[4878]: 2010/04/28_17:31:33 info: Started.
>> crmd[4877]: 2010/04/28_17:31:34 info: populate_cib_nodes: Requesting the list of configured nodes
>> crmd[4877]: 2010/04/28_17:31:36 notice: populate_cib_nodes: Node: linux_ha_02 (uuid: 7deb3c18-3d0c-4c76-8488-3d2057a2e638)
>> cib[4873]: 2010/04/28_17:31:37 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
>> crmd[4877]: 2010/04/28_17:31:37 notice: populate_cib_nodes: Node: linux_ha_01 (uuid: 0fd126d0-5276-4c0a-9a0d-ecf8048e5203)
>> cib[4873]: 2010/04/28_17:31:37 info: mem_handle_event: instance=2, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
>> crmd[4877]: 2010/04/28_17:31:37 info: do_ha_control: Connected to Heartbeat
>> cib[4873]: 2010/04/28_17:31:37 info: cib_ccm_msg_callback: PEER: linux_ha_02
>> cib[4873]: 2010/04/28_17:31:37 info: cib_ccm_msg_callback: PEER: linux_ha_01
>> crmd[4877]: 2010/04/28_17:31:37 info: do_ccm_control: CCM connection established... waiting for first callback
>> crmd[4877]: 2010/04/28_17:31:37 info: do_started: Delaying start, CCM (0000000000100000) not connected
>> crmd[4877]: 2010/04/28_17:31:37 info: crmd_init: Starting crmd's mainloop
>> crmd[4877]: 2010/04/28_17:31:37 notice: crmd_client_status_callback: Status update: Client linux_ha_01/crmd now has status [online]
>> attrd[4876]: 2010/04/28_17:31:37 info: main: Starting mainloop...
>> crmd[4877]: 2010/04/28_17:31:38 notice: crmd_client_status_callback: Status update: Client linux_ha_01/crmd now has status [online]
>> crmd[4877]: 2010/04/28_17:31:38 notice: crmd_client_status_callback: Status update: Client linux_ha_02/crmd now has status [online]
>> crmd[4877]: 2010/04/28_17:31:39 info: do_started: Delaying start, CCM (0000000000100000) not connected
>> crmd[4877]: 2010/04/28_17:31:39 info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
>> crmd[4877]: 2010/04/28_17:31:39 info: mem_handle_event: instance=2, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
>> crmd[4877]: 2010/04/28_17:31:39 info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=2)
>> crmd[4877]: 2010/04/28_17:31:39 info: ccm_event_detail: NEW MEMBERSHIP: trans=2, nodes=2, new=2, lost=0 n_idx=0, new_idx=0, old_idx=4
>> crmd[4877]: 2010/04/28_17:31:39 info: ccm_event_detail: 	CURRENT: linux_ha_02 [nodeid=1, born=1]
>> crmd[4877]: 2010/04/28_17:31:39 info: ccm_event_detail: 	CURRENT: linux_ha_01 [nodeid=0, born=2]
>> crmd[4877]: 2010/04/28_17:31:39 info: ccm_event_detail: 	NEW:     linux_ha_02 [nodeid=1, born=1]
>> crmd[4877]: 2010/04/28_17:31:39 info: ccm_event_detail: 	NEW:     linux_ha_01 [nodeid=0, born=2]
>> crmd[4877]: 2010/04/28_17:31:39 info: do_started: The local CRM is operational
>> crmd[4877]: 2010/04/28_17:31:39 info: do_state_transition: State transition S_STARTING ->  S_PENDING [ input=I_PENDING cause=C_CCM_CALLBACK origin=do_started ]
>> crmd[4877]: 2010/04/28_17:33:33 info: do_election_count_vote: Election check: vote from linux_ha_02
>> crmd[4877]: 2010/04/28_17:33:33 info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
>> crmd[4877]: 2010/04/28_17:33:33 WARN: do_log: [[FSA]] Input I_DC_TIMEOUT from crm_timer_popped() received in state (S_PENDING)
>> crmd[4877]: 2010/04/28_17:33:33 info: do_state_transition: State transition S_PENDING ->  S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
>> crmd[4877]: 2010/04/28_17:33:34 info: do_election_count_vote: Updated voted hash for linux_ha_01 to vote
>> crmd[4877]: 2010/04/28_17:33:34 info: do_election_count_vote: Election ignore: our vote (linux_ha_01)
>> crmd[4877]: 2010/04/28_17:33:34 info: do_election_check: Still waiting on 1 non-votes (2 total)
>> cib[4873]: 2010/04/28_17:33:36 info: apply_xml_diff: Digest mis-match: expected 3e936a49856b825f39dfd71d7b594ee9, calculated ab1d821d6e191d40dd2023a35b2844c4
>> crmd[4877]: 2010/04/28_17:33:36 WARN: do_log: [[FSA]] Input I_JOIN_OFFER from route_message() received in state (S_ELECTION)
>> cib[4873]: 2010/04/28_17:33:36 info: cib_process_diff: Diff 0.0.0 ->  0.1.1 not applied to 0.0.0: Failed application of a global update.  Requesting full refresh.
>> crmd[4877]: 2010/04/28_17:33:36 info: do_election_count_vote: Election check: vote from linux_ha_02
>> cib[4873]: 2010/04/28_17:33:36 info: cib_process_diff: Requesting re-sync from peer: Failed application of a global update.  Requesting full refresh.
>> crmd[4877]: 2010/04/28_17:33:36 info: do_election_check: Still waiting on 1 non-votes (2 total)
>> cib[4873]: 2010/04/28_17:33:36 WARN: do_cib_notify: cib_apply_diff of<diff>  FAILED: Application of an update diff failed, requesting a full refresh
>> crmd[4877]: 2010/04/28_17:33:36 info: do_state_transition: State transition S_ELECTION ->  S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
>> cib[4873]: 2010/04/28_17:33:36 WARN: cib_process_request: cib_apply_diff operation failed: Application of an update diff failed, requesting a full refresh
>> crmd[4877]: 2010/04/28_17:33:36 info: do_dc_release: DC role released
>> cib[4873]: 2010/04/28_17:33:36 WARN: cib_process_diff: Not applying diff 0.1.1 ->  0.1.2 (sync in progress)
>> cib[4873]: 2010/04/28_17:33:36 WARN: do_cib_notify: cib_apply_diff of<diff>  FAILED: Application of an update diff failed, requesting a full refresh
>> cib[4873]: 2010/04/28_17:33:36 WARN: cib_process_request: cib_apply_diff operation failed: Application of an update diff failed, requesting a full refresh
>> cib[4873]: 2010/04/28_17:33:39 info: cib_replace_notify: Replaced: 0.0.0 ->  0.1.2 from<null>
>> crmd[4877]: 2010/04/28_17:33:39 info: update_dc: Set DC to linux_ha_02 (2.0)
>> crmd[4877]: 2010/04/28_17:33:39 info: populate_cib_nodes: Requesting the list of configured nodes
>> cib[4885]: 2010/04/28_17:33:39 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4885]: 2010/04/28_17:33:39 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4885]: 2010/04/28_17:33:39 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> cib[4885]: 2010/04/28_17:33:39 info: write_cib_contents: Wrote version 0.1.4 of the CIB to disk (digest: ad2c8b218c8a363dc6320d5859da8b53)
>> cib[4885]: 2010/04/28_17:33:39 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4885]: 2010/04/28_17:33:39 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> crmd[4877]: 2010/04/28_17:33:41 notice: populate_cib_nodes: Node: linux_ha_02 (uuid: 7deb3c18-3d0c-4c76-8488-3d2057a2e638)
>> crmd[4877]: 2010/04/28_17:33:42 notice: populate_cib_nodes: Node: linux_ha_01 (uuid: 0fd126d0-5276-4c0a-9a0d-ecf8048e5203)
>> crmd[4877]: 2010/04/28_17:33:44 info: update_dc: Set DC to linux_ha_02 (2.0)
>> crmd[4877]: 2010/04/28_17:33:44 info: do_state_transition: State transition S_PENDING ->  S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
>> cib[4886]: 2010/04/28_17:33:44 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4886]: 2010/04/28_17:33:44 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4886]: 2010/04/28_17:33:44 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> cib[4886]: 2010/04/28_17:33:44 info: write_cib_contents: Wrote version 0.2.3 of the CIB to disk (digest: 19dd9885d89c780b3f6a25e6b5b01231)
>> cib[4886]: 2010/04/28_17:33:44 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
>> cib[4886]: 2010/04/28_17:33:44 info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
>> crmd[4877]: 2010/04/28_17:33:47 info: do_lrm_rsc_op: Performing op=prmExPostgreSQLDB_monitor_0 key=4:0:7:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:47 info: rsc:prmExPostgreSQLDB: monitor
>> sfex[4887][4893]: 2010/04/28_17:33:47 DEBUG: sfex_monitor: started...
>> sfex[4887][4894]: 2010/04/28_17:33:47 INFO: probe...
>> crmd[4877]: 2010/04/28_17:33:47 info: do_lrm_rsc_op: Performing op=prmFsPostgreSQLDB1_monitor_0 key=5:0:7:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:47 info: rsc:prmFsPostgreSQLDB1: monitor
>> crmd[4877]: 2010/04/28_17:33:47 info: do_lrm_rsc_op: Performing op=prmIpPostgreSQLDB_monitor_0 key=6:0:7:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:47 info: rsc:prmIpPostgreSQLDB: monitor
>> crmd[4877]: 2010/04/28_17:33:47 info: do_lrm_rsc_op: Performing op=prmApPostgreSQLDB_monitor_0 key=7:0:7:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:47 info: rsc:prmApPostgreSQLDB: monitor
>> crmd[4877]: 2010/04/28_17:33:47 info: process_lrm_event: LRM operation prmExPostgreSQLDB_monitor_0 (call=2, rc=7) complete
>> crmd[4877]: 2010/04/28_17:33:47 info: process_lrm_event: LRM operation prmIpPostgreSQLDB_monitor_0 (call=4, rc=7) complete
>> pgsql[4912][4945]: 2010/04/28_17:33:47 INFO: PostgreSQL is down
>> crmd[4877]: 2010/04/28_17:33:47 info: process_lrm_event: LRM operation prmApPostgreSQLDB_monitor_0 (call=5, rc=7) complete
>> crmd[4877]: 2010/04/28_17:33:47 info: process_lrm_event: LRM operation prmFsPostgreSQLDB1_monitor_0 (call=3, rc=7) complete
>> crmd[4877]: 2010/04/28_17:33:50 info: do_lrm_rsc_op: Performing op=prmExPostgreSQLDB_start_0 key=5:1:0:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:50 info: rsc:prmExPostgreSQLDB: start
>> sfex[4965][4971]: 2010/04/28_17:33:50 INFO: sfex_start: started...
>> sfex[4965][4973]: 2010/04/28_17:33:50 ERROR: Lock acquisition error (3).
>> lrmd[4874]: 2010/04/28_17:33:50 info: RA output: (prmExPostgreSQLDB:start:stderr) sfex_lock: ERROR: magic number mismatched.
>> 
>> crmd[4877]: 2010/04/28_17:33:50 info: process_lrm_event: LRM operation prmExPostgreSQLDB_start_0 (call=6, rc=1) complete
>> crmd[4877]: 2010/04/28_17:33:53 info: do_lrm_rsc_op: Performing op=prmExPostgreSQLDB_stop_0 key=1:2:0:53f447e0-1328-412c-8ecc-e08029f2bfad)
>> lrmd[4874]: 2010/04/28_17:33:53 info: rsc:prmExPostgreSQLDB: stop
>> sfex[4983][4989]: 2010/04/28_17:33:53 INFO: sfex_stop: started...
>> sfex[4983][4991]: 2010/04/28_17:33:53 WARNING: Lock release error (3).
>> lrmd[4874]: 2010/04/28_17:33:53 info: RA output: (prmExPostgreSQLDB:stop:stdout) sfex_unlock: ERROR: magic number mismatched.
>> 
>> sfex[4983][4993]: 2010/04/28_17:33:53 WARNING: The error is ignored at the stop.
>> sfex[4983][4994]: 2010/04/28_17:33:53 INFO: sfex_stop: complete.
>> crmd[4877]: 2010/04/28_17:33:53 info: process_lrm_event: LRM operation prmExPostgreSQLDB_stop_0 (call=7, rc=0) complete
>> 
>> ------------------
>> 
>> Thank you very much in advance.