Init Script (LSB) Compatibility Checks
Assuming some_service is configured correctly and currently not active, the following sequence will help you determine if it is LSB compatible:
Start (stopped)
/etc/init.d/some_service start ; echo "result: $?"
- Did the service start?
- Did the command print result: 0 (in addition to the regular output)?
Status (running)
/etc/init.d/some_service status ; echo "result: $?"
- Did the script accept the command?
- Did the script indicate the service was running?
- Did the command print result: 0 (in addition to the regular output)?
Start (running)
/etc/init.d/some_service start ; echo "result: $?"
- Is the service still running?
- Did the command print result: 0 (in addition to the regular output)?
Stop (running)
/etc/init.d/some_service stop ; echo "result: $?"
- Was the service stopped?
- Did the command print result: 0 (in addition to the regular output)?
Status (stopped)
/etc/init.d/some_service status ; echo "result: $?"
- Did the script accept the command?
- Did the script indicate the service was not running?
- Did the command print result: 3 (in addition to the regular output)?
Stop (stopped)
/etc/init.d/some_service stop ; echo "result: $?"
- Is the service still stopped?
- Did the command print result: 0 (in addition to the regular output)?
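The six testable steps above can be driven by a small helper script. The following is a sketch, not part of the LSB spec: the check helper and its PASS/FAIL output are illustrative, and the script path defaults to the some_service example used above.

```shell
#!/bin/sh
# Hypothetical helper: drive an init script through the LSB
# compatibility sequence and compare each exit status with the
# value the checklist expects.
SCRIPT=${SCRIPT:-/etc/init.d/some_service}

check() {
    # $1 = description, $2 = action, $3 = expected exit status
    "$SCRIPT" "$2" >/dev/null 2>&1
    rc=$?
    if [ "$rc" -eq "$3" ]; then
        echo "PASS: $1 (exit $rc)"
    else
        echo "FAIL: $1 (expected $3, got $rc)"
    fi
}

check "start (stopped)"  start  0
check "status (running)" status 0
check "start (running)"  start  0
check "stop (running)"   stop   0
check "status (stopped)" status 3
check "stop (stopped)"   stop   0
```

Run it while the service is stopped, exactly as in the manual sequence; any FAIL line marks a non-compliant action.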
Status (failed)
This step is not readily testable and relies on manual inspection of the script.
The script can optionally use one of the other codes (other than 3) listed in the LSB spec to indicate that it is active but failed.
In such a case, this tells the cluster that, before moving the resource to another node, it should stop it on the existing one first.
Making use of these extra exit codes is encouraged.
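For example, a script's status action can distinguish "stopped" (exit 3) from "dead but pidfile remains" (exit 1) using the LSB status codes. This is a sketch; the pidfile path and messages are placeholders, not taken from any real script:

```shell
#!/bin/sh
# Sketch of an LSB-style "status" action.  The pidfile path is a
# placeholder; a real script would use the service's own pidfile.
PIDFILE=${PIDFILE:-/var/run/some_service.pid}

status() {
    if [ -f "$PIDFILE" ]; then
        if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            echo "some_service is running"
            return 0        # LSB: running
        fi
        echo "some_service is dead but pidfile exists"
        return 1            # LSB: dead, pidfile remains
    fi
    echo "some_service is not running"
    return 3                # LSB: stopped
}
```

Returning 1 here (rather than 3) is what tells the cluster to run stop on the current node before starting the resource elsewhere.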
If the answer to any of the above questions is no, then the init script is not LSB compliant.
If you are using Pacemaker resource management, then your options at this point are to fix the init script so that it conforms, or to write an OCF resource agent based on the existing script.
Redis & Celery Installation and Configuration
Install and configure redis-3.0.7 (with Sentinel) and celery 3.1.15 on both nodes, with daemonize set to yes so the processes run as daemons.
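The daemonize setting lives in each node's redis.conf; a minimal sketch (the pidfile path is a placeholder):

```
# /etc/redis/redis.conf (both nodes)
daemonize yes
pidfile /var/run/redis/redis-server.pid
```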
Run 2 Sentinel processes on each node, plus 3 more Sentinels on a separate temporary node. The 3 Sentinels on the temporary node take part in the vote to elect a new Redis master if the current master goes down, giving 7 Sentinel processes in total (a quorum of 7).
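A minimal Sentinel configuration for this layout might look like the following sketch. The IP address and the master name mymaster are placeholders, and the last argument of sentinel monitor is the quorum from the setup above:

```
# sentinel.conf (one per Sentinel process; give the second Sentinel
# on each node its own port, e.g. 26379 and 26380)
port 26379
daemonize yes
# monitor the current master; the final argument is the quorum:
# how many Sentinels must agree before a failover is started
sentinel monitor mymaster 192.168.1.10 6379 7
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```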
Installation of Celery
Copy the Celery daemon script into /etc/init.d:
cp ~/celery/extra/generic-init.d/celeryd /etc/init.d/
(This generic daemon script is a standard file but is slightly different to use; use the nocout-etl file instead.)
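If the generic celeryd script is used, it reads its settings from /etc/default/celeryd. A minimal sketch follows; the application name, paths, and user are placeholders, not values from this deployment:

```
# /etc/default/celeryd -- read by the generic init script
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"                        # placeholder application name
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
```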
Recommended file: nocout-etl
cp ~/nocout-etl /etc/init.d/
With the daemon script in /etc/init.d, add it to the cluster as a managed resource.
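Assuming the pcs shell is in use, adding the init script as an LSB resource might look like this sketch (the resource name celery-etl and the monitor interval are placeholders):

```
# Add the LSB-managed init script as a cluster resource.
# "nocout-etl" must match the script name under /etc/init.d/.
pcs resource create celery-etl lsb:nocout-etl op monitor interval=30s
```

Because the resource uses the lsb: standard, Pacemaker relies on exactly the exit-code behaviour checked in the compatibility sequence above.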