The return code of the /etc/init.d/ietd script is always 0, even when starting the service fails. Example:

san1 ~ # /etc/init.d/ietd status; echo "Returncode: $?"
 * status: stopped
Returncode: 1
san1 ~ # /etc/init.d/ietd start; echo "Returncode: $?"
 * Loading iSCSI-Target modules - iscsi_trgt ...   [ ok ]
 * Starting iSCSI Enterprise Target ...            [ ok ]
Returncode: 0
san1 ~ # tail /var/log/messages
Mar 11 17:15:02 san1 iSCSI Enterprise Target Software - version 0.4.16
Mar 11 17:15:02 san1 iscsi_trgt: Registered io type fileio
Mar 11 17:15:02 san1 iscsi_trgt: Registered io type blockio
Mar 11 17:15:02 san1 iscsi_trgt: Registered io type nullio
Mar 11 17:15:02 san1 iscsi_trgt: blockio_open_path(167) Can't open device /dev/drbd1, error -30
Mar 11 17:15:02 san1 iscsi_trgt: blockio_attach(354) Error attaching Lun 1 to Target iqn.2009-03.local.seva:san
Mar 11 17:15:02 san1 ietd: Can't create a logical unit 30 1 1 Path=/dev/drbd1,Type=blockio

I need the correct return code for heartbeat monitoring of the ietd service. The sys-block/iscsitarget-0.4.16 ebuild has this error as well.
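In the meantime, a post-start check along these lines can catch the failure, since ietd keeps running even when a LUN fails to attach. This is only a sketch: it assumes IET 0.4.x exposes its volume table under /proc/net/iet/volume (a target whose LUN failed shows up there without a "lun:" line) and that the target IQN is known in advance.

#!/bin/sh
# Hedged post-start check, not part of the init script.
IQN="iqn.2009-03.local.seva:san"   # assumption: your target name

/etc/init.d/ietd start || exit 1
sleep 1                            # let the forked ietd load its config

# The daemon may die outright on other errors.
pidof ietd >/dev/null || { echo "ietd is not running" >&2; exit 1; }

# A healthy target has at least one "lun:N ..." line under its
# "tid:N name:<iqn>" entry in /proc/net/iet/volume.
if ! grep -A1 "name:$IQN" /proc/net/iet/volume | grep -q 'lun:'; then
    echo "no LUN attached to $IQN, see /var/log/messages" >&2
    exit 1
fi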
We don't care about LSB compliance here; however, if a daemon fails to start even though the init script reported [ ok ], then that is a bug we can look at.
That's actually an upstream issue, by the way. I also find it annoying, but the problem is that upstream forks before it has checked the configuration file, so the exit status reaches the init script before any configuration error can show up in it.
(In reply to comment #2)
> That's actually an upstream issue, by the way. I also find it annoying, but
> the problem is that upstream forks before it has checked the configuration
> file, so the exit status reaches the init script before any configuration
> error can show up in it.

Is there another way to check the ietd status for failed forks? I think monitoring dmesg is not the best way. I need a proper status for heartbeat: if something is wrong, heartbeat can then migrate the iSCSI service to another node or resolve the dependencies.
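Until upstream fixes the exit status, a monitor function along these lines might be enough for heartbeat. Treat it as a sketch: the OCF return codes (0 = success, 7 = not running) are standard, but the /proc/net/iet layout and the idea that a target without a "lun:" line means a failed LUN are assumptions taken from the log in comment #0, not from any documentation.

ietd_monitor() {
    # Daemon gone entirely -> OCF_NOT_RUNNING
    pidof ietd >/dev/null || return 7

    # Daemon alive but the expected LUN never attached -> generic error,
    # which lets heartbeat fail the resource over to the other node.
    grep -A1 'name:iqn.2009-03.local.seva:san' /proc/net/iet/volume \
        | grep -q 'lun:' || return 1

    return 0    # OCF_SUCCESS
}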
I can see the reasoning pretty well (and I have actually had similar issues myself), but I guess this should be requested upstream first of all. I'll check whether s-s-d (start-stop-daemon) can help here.
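For the record, start-stop-daemon alone probably cannot help: it can only tell whether the process was launched, and ietd's parent exits 0 before any configuration error surfaces. The most the init script could do with it is poll after the fork, roughly like this sketch (the one-second poll, the /usr/sbin/ietd path, and the /proc check are my assumptions, not current ebuild behaviour):

start() {
    ebegin "Starting iSCSI Enterprise Target"
    start-stop-daemon --start --quiet --exec /usr/sbin/ietd
    local ret=$?
    # s-s-d's return code only covers the fork; give the child a moment
    # to process the config, then verify it survived and registered with
    # the kernel module.
    if [ ${ret} -eq 0 ]; then
        sleep 1
        pidof ietd >/dev/null && [ -e /proc/net/iet/volume ] || ret=1
    fi
    eend ${ret} "ietd failed to start properly, see /var/log/messages"
}

Note this still would not catch the case in the log above, where the target registers but the LUN is missing; that per-LUN check belongs in the heartbeat monitor.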
Diego, did you check? Vitali, is this still relevant with 1.4.20.2?
As of today, I have taken over primary maintainership of this package from base-system. I am reassigning this bug to myself, although resolution will likely come from upstream. Feel free to ping me in IRC if I have not posted a status update on my discussion with upstream about this issue between now and the end of the month.
removed.