Summary: | sys-fs/dmraid-1.0.0_rc16-rc1: libdmraid-events-isw.so is not loaded because pthread_mutex_trylock is an undefined symbol | ||
---|---|---|---|
Product: | Gentoo Linux | Reporter: | Christophe Philemotte <christophe_philemotte> |
Component: | Current packages | Assignee: | No maintainer - Look at https://wiki.gentoo.org/wiki/Project:Proxy_Maintainers if you want to take care of it <maintainer-needed> |
Status: | CONFIRMED | Resolution: | ---
Severity: | normal | CC: | axs, joost.ruis, maxime.Gilles81, Rainmaker526, tommy |
Priority: | High | ||
Version: | unspecified | ||
Hardware: | AMD64 | ||
OS: | Linux | ||
Whiteboard: | |||
Package list: | Runtime testing required: | --- | |
Attachments: | updates dmraid to snapshot v1_0_0_rc16-support_ignoremissing_option; ebuild that will apply the rc16 update patch; strace output dmeventd; CVS HEAD ebuild for dmraid |
Description
Christophe Philemotte
2010-03-02 09:34:27 UTC
Enabling the -pthread LDFLAG fixes the problem:

```
src_configure() {
	append-ldflags -pthread
	econf \
		$(use_enable static static_link) \
		$(use_enable selinux libselinux) \
		$(use_enable selinux libsepol)
}
```

But now I get another error:

```
RAID set "isw_ddhjaehgdj_HDD_RAID" was activated
ERROR: Unable to register a device mapper event handler for device "isw_ddhjaehgdj_HDD_RAID"
```

IIRC, there is an issue with using pthreads with certain hardware or older kernel versions or something. I can't remember the details, but there is a reason not to include it for all compiles. That said, since it is needed here, it should probably be USE-flagged. Unfortunately I don't have the hardware to test dmraid anymore. That second issue looks like an upstream issue, though. FYI, Intel RAID can now be managed entirely via mdadm; you don't need to use dmraid (or device-mapper) anymore.

Thanks Ian Stakenvicius for the tips, I'll take a look at mdadm then.

@Ian: I suggest you either continue the maintenance of this ebuild or at least say that you don't want to maintain it anymore, so others have a chance of looking at it.

I haven't found anything upstream that would suggest adding -pthread is a good idea (a couple of versions back it seemed to cause more issues than it solved). As for the lack of detection, upstream seems to have nothing to say. I'm adding the patch listed in bug 275451, but I do not think this will resolve your issue either. Since support for all Intel BIOS RAID is moving to mdadm anyhow, I'm changing this bug to resolved/wontfix. IF ANYONE ELSE HAS THE PTHREAD ISSUE, please re-open this bug and I'll apply the fix and/or sort it out with upstream.

OK, as requested: I tried re-emerging sys-fs/dmraid to make sure I got the "patched" version, but I get this error too.

```
 * Applying dmraid-1.0.0_rc16-undo-p-rename.patch ...    [ ok ]
 * Applying dmraid-1.0.0_rc16-return-all-sets.patch ...  [ ok ]
 * Applying dmraid-destdir-fix.patch ...                 [ ok ]
 * Applying dmraid-1.0.0_rc16-as-needed.patch ...        [ ok ]
```

This is also on an Intel RAID set. I'm willing to test patches etc.; I'll hold off on trying mdadm for now. PS: I don't have rights to reopen the bug, as I'm not the reporter.

OK, reopening bug as a reminder.
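For anyone checking whether a rebuild actually fixed the original undefined-symbol error, the unresolved reference is visible with standard binutils tools. This is a sketch only; the library path below is a guess, so locate the actual .so first (e.g. with `equery files dmraid` or `find`):

```
# "U pthread_mutex_trylock" means the symbol is undefined in the object
# and must be satisfied by libpthread at load time:
nm -D /usr/lib/device-mapper/libdmraid-events-isw.so | grep pthread_mutex_trylock

# ldd -r performs relocations and reports any symbols that cannot be resolved:
ldd -r /usr/lib/device-mapper/libdmraid-events-isw.so
```

The USE-flagging suggested above could look roughly like this in the ebuild. A sketch only: "pthread" here is a hypothetical local USE flag, not one the current ebuild defines:

```
src_configure() {
	# only pull in pthread linkage when the (hypothetical) USE flag is set
	use pthread && append-ldflags -pthread
	econf \
		$(use_enable static static_link) \
		$(use_enable selinux libselinux) \
		$(use_enable selinux libsepol)
}
```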
Created attachment 240215 [details]
updates dmraid to snapshot v1_0_0_rc16-support_ignoremissing_option
Created attachment 240217 [details]
ebuild that will apply the rc16 update patch

OK, so upstream has been working on dmraid since rc16 through the 'device-mapper' project now instead of through the 'ataraid' project, it seems. Anyhow, there's a new version tagged in their CVS which includes pthread linking. This ebuild, combined with the patch "dmraid-1.0.0_rc16_20100317.patch" (in attachment 240215 [details]; copy it to files/ in your overlay, as sketched below), will provide the updated copy. I've tested the ebuild and it compiles and installs fine; HOWEVER, I can't confirm that it works at runtime since I don't have the hardware. If you could test for me and report back, I would appreciate it.
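The overlay steps described above might look like this. A sketch only: /usr/local/portage and the `<version>` placeholder are assumptions, so substitute your own overlay path and the actual filename of the attached ebuild:

```
# assumes PORTDIR_OVERLAY="/usr/local/portage" is set in /etc/make.conf
mkdir -p /usr/local/portage/sys-fs/dmraid/files
cp dmraid-1.0.0_rc16_20100317.patch /usr/local/portage/sys-fs/dmraid/files/
cp dmraid-<version>.ebuild /usr/local/portage/sys-fs/dmraid/

# regenerate the Manifest so portage will accept the new ebuild
cd /usr/local/portage/sys-fs/dmraid
ebuild dmraid-<version>.ebuild manifest
emerge -1av sys-fs/dmraid
```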
Well, the ebuild compiles fine but unfortunately does not work. /dev/mapper is still empty. dmraid -ay hangs for a while and finally spits out:

```
Medusa mapper # dmraid -ay
RAID set "isw_dagchgecca_BootEnBackup" already active
RAID set "isw_dagchgecca_Data" already active
ERROR: Unable to register a device mapper event handler for device "isw_dagchgecca_BootEnBackup"
ERROR: Unable to register a device mapper event handler for device "isw_dagchgecca_Data"
ERROR: opening "/dev/mapper/isw_dagchgecca_BootEnBackup"
ERROR: opening "/dev/mapper/isw_dagchgecca_Data"
```

Output from "strace -f -s 1500 -t -o /tmp/strace dmraid -ay" is found here: http://sharesend.com/uuu3a (bzip2'ed; 141 MB unpacked; Bugzilla has a limit of 1 MB for uploaded files, and this is 2.3 MB). Line 1002460 shows the error. Just above that (1002436) you can see the child segfaulting after a munmap() call.

Created attachment 240367 [details]
strace output dmeventd
After debugging some more, I'm pretty sure it's dmeventd which is segfaulting.
Testcase:

```
# terminal 1
strace -s 1500 -f -t -o /tmp/dmeventd -- dmeventd -ddd -f
# terminal 2
dmraid -ay
```

dmeventd exits with a segfault.
strace output is attached (this one is tiny).
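To get a backtrace rather than just the syscall trace, the same testcase can be run under gdb. A sketch only; it assumes dmeventd was built with debug symbols (e.g. FEATURES="nostrip" and -g in CFLAGS):

```
# terminal 1: run dmeventd in the foreground under gdb
gdb --args dmeventd -ddd -f
(gdb) run
# terminal 2: trigger the crash
dmraid -ay
# back in terminal 1, once SIGSEGV is reported:
(gdb) bt full
```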
Created attachment 240375 [details]
CVS HEAD ebuild for dmraid
Try using this ebuild -- it's for current CVS. If it still doesn't work, then I'll forward your debugging info upstream so that they can try to fix it before the next release. If it does work, I'll try to back-port the patches.
(adjust the KEYWORDS as necessary for your system)
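One way to accept the ebuild's keywords without editing the KEYWORDS line itself (a sketch; the bare `sys-fs/dmraid` atom is deliberately broad, so narrow it to the exact version of the attached ebuild if you prefer):

```
# "**" accepts any keyword, including unkeyworded/live ebuilds,
# for this one package only
echo "sys-fs/dmraid **" >> /etc/portage/package.keywords
```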
Hmm, CVS HEAD doesn't seem to be working either. It builds, compiles and everything, but dmraid -ay still fails: same error message, and the strace output looks similar too.

Ian, did you get around to reporting this upstream?

Not yet -- will be doing so today.

(In reply to comment #3)
> thanks Ian Stakenvicius for the tips, I'll take a look at mdadm then.

Is there any documentation on how to use mdadm with Intel RAID? Thanks, Maxime

None that I am aware of... https://bugzilla.redhat.com/show_bug.cgi?id=542022#c2 might shed some light, though. Also, Roel, the issue this user was having looks a bit like yours; might be worth checking out.

This is the closest thing to a how-to that I've been able to find so far: http://www.linuxquestions.org/questions/linux-kernel-70/imsm-volumes-in-mdadm-raid-setup-776171/#post3796916 -- I also read something in the past couple of weeks (sorry, don't have the reference) that seemed to imply that the md RAID kernel modules can 'take over' the RAID sets even though they aren't being configured, which might block dmraid from being able to do its thing. Could you double-check that no md-raid modules are enabled in your kernel, and then give dmraid a test?

Hmm, that could be something:

```
Medusa ~ # lsmod | grep md
Medusa ~ # zgrep _MD_ /proc/config.gz
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_RAID6_PQ=m
CONFIG_MD_MULTIPATH=y
CONFIG_MD_FAULTY=y
```

I apparently have MD support compiled into my kernel. I'll see if I can recompile without it (or as modules) and re-test. Thanks for the how-to :) I do think the Red Hat bug is different from my situation (different output), but comment #4 there mentions: "because as said already Intel RAID now uses mdraid instead of dmraid."
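For reference, since several comments point at mdadm as the replacement for dmraid on Intel (IMSM) BIOS RAID: with mdadm >= 3.0 the flow is roughly the following. The device names are examples only, and the kernel still needs the relevant CONFIG_MD_RAID* options for the member arrays:

```
# check a disk for Intel Matrix Storage (imsm) metadata
mdadm --examine /dev/sda

# easiest: let mdadm find and assemble the container and its member arrays
mdadm --assemble --scan

# or explicitly: assemble the container first, then start the arrays inside it
mdadm --assemble /dev/md/imsm0 /dev/sda /dev/sdb
mdadm --incremental /dev/md/imsm0
```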