Summary: | net-libs/libmicrohttpd-0.9.21 fails tests with USE="ssl" | |
---|---|---|---
Product: | Gentoo Linux | Reporter: | Myckel Habets <myckel>
Component: | [OLD] Library | Assignee: | Anthony Basile <blueness>
Status: | RESOLVED INVALID | |
Severity: | normal | |
Priority: | Normal | |
Version: | unspecified | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Package list: | | Runtime testing required: | ---
Attachments: | Build log of the failing build. | |
Description
Myckel Habets
2012-10-06 15:07:07 UTC
Created attachment 325804 [details]
Build log of the failing build.
Comment #2:
> Refusing to run test with OpenSSL. Please install libcurl-gnutls

This is probably fallout from upstream's fix to bug 334067.

Comment #3 (in reply to comment #2):
I'm not able to reproduce this with libmicrohttpd-0.9.22 on amd64. Can you test .22 on x86? I don't have native hardware.

Comment #4:
Disregard my previous remark. The error here is in tls_daemon_options_test and not related to the other bug.

Comment #5 (in reply to comment #4):
Okay, I just thought of something. FEATURES="test" pulls in the RDEPEND

  ssl? ( >=net-misc/curl-7.25.0-r1[ssl] )

but USE="ssl" means that any one of six different backend SSL providers can be pulled in:

  CURL_SSL="openssl axtls cyassl gnutls nss polarssl"

I wonder if tls_daemon_options_test is failing for one of them. I use CURL_SSL="openssl". @Myckel, can you check what CURL_SSL is for you?

Comment #6 (in reply to comment #5):
> @Myckel, can you check what CURL_SSL is for you?

[ebuild   R    ] net-misc/curl-7.26.0  USE="ipv6 ssl test threads -ares -idn -kerberos -ldap -ssh -static-libs" CURL_SSL="openssl -axtls -cyassl -gnutls -nss -polarssl" 2,366 kB

Comment #7:
Looking at the build log, I think the following is happening:

1) the test fails (as expected, with an SSL connect error)
2) the result isn't passed correctly to the validating section (segfault)
3) the validating section finds the segfault and says the test fails.

So, where is it segfaulting?

Comment #8:
Doing some tests:

1) On a ~x86 chroot on a different system, the tests are successful.
2) The problem exists in version 0.9.22 on the system that I use for stabilization.
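The CURL_SSL check discussed above can be scripted. A minimal sketch, assuming a Gentoo system with Portage; the emerge output line below is a hard-coded sample for illustration only, and on a real system you would feed in the output of `emerge -pv net-misc/curl` instead:

```shell
# Sample line of `emerge -pv net-misc/curl` output (illustrative only).
line='[ebuild   R    ] net-misc/curl-7.26.0  USE="ipv6 ssl" CURL_SSL="openssl -axtls -cyassl -gnutls -nss -polarssl"'

# Pull out the CURL_SSL="..." group, drop the quotes and the label,
# then keep only the flags that are enabled (no leading '-').
enabled=$(printf '%s\n' "$line" \
  | grep -o 'CURL_SSL="[^"]*"' \
  | tr -d '"' | cut -d= -f2 \
  | tr ' ' '\n' | grep -v '^-')
echo "enabled backend: $enabled"   # → enabled backend: openssl
```

This makes it easy to confirm whether the failing system really is using the openssl backend, as reported above.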
Comment #9 (in reply to comment #7):
> So, where is it segfaulting?

1) strace until you hit the segfault and post that.
2) If you are comfortable with gdb, run until the segfault and give me a bt.
3) Post anything that differs between the two systems.

I've tested in an x86 VM with no problem, so I'm thinking INVALID, but you have some issue, so let's narrow it down and make sure it isn't something that is going to pop up again in libmicrohttpd.

Comment #10:
Ok, under strace the bug does not surface (all tests are successful). GDB gave the following trace (I hope I did it right):

Program received signal SIGABRT, Aborted.
0xb7fe0424 in ?? ()
(gdb) bt
#0  0xb7fe0424 in ?? ()
#1  0xb7ce7d73 in abort () from /lib/libc.so.6
#2  0xb7fd2aef in mhd_panic_std (cls=0x0, file=0xb7fdbe6c "daemon.c", line=2586, reason=0x0) at daemon.c:104
#3  0xb7fd6228 in close_all_connections (daemon=0x806a3b0) at daemon.c:2586
#4  0xb7fd65aa in MHD_stop_daemon (daemon=0x806a3b0) at daemon.c:2682
#5  0x08049e6e in teardown_testcase (d=0x806a3b0) at tls_test_common.c:380
#6  0x0804a0d0 in test_wrap (test_name=0x804a819 "TLS1.0 vs SSL3", test_function=0x80490c8 <test_unmatching_ssl_version>, cls=0x0, daemon_flags=7, cipher_suite=0x804a70b "AES256-SHA", proto_version=3) at tls_test_common.c:477
#7  0x080493e9 in main (argc=1, argv=0xbfffefd4) at tls_daemon_options_test.c:171

Any idea what to recompile to get symbols for #0? glibc? The only difference between the systems that I can think of is the CPU optimization (-march=pentium4 versus -march=athlon-xp); for the rest they are mostly similar.

Comment #11:
Looking into it, close_all_connections gets stuck at collecting threads (as the comment at that code block says), calling mhd_panic_std, which calls abort.

Why? System too slow?
Comment #12 (in reply to comment #11):
> Why? System too slow?

I'm sorry Myckel, I can't reproduce this even on native x86 (well, not quite: a 64-bit processor with a purely 32-bit userland). You did the gdb right and your interpretation is correct, but I'm not sure what it is about your system that's causing this. Let's leave this bug open for now and see if anyone hits the same issue. It could be a different upgrade path? Not sure.

Comment #13 (in reply to comment #12):
> Let's leave this bug open for now and see if anyone hits the same issue.

Fine with me.

Comment #14 (in reply to comment #13):
I don't know if you're still interested, but I just added 0.9.24 to the tree. All the tests worked for me. Would you test again and see if this is still an issue?

Comment #15 (in reply to comment #14):
Sorry for the slow reply. I'm still able to reproduce the bug in 0.9.24 and 0.9.26 (the latest in the tree).
Comment #16:
Can you do an strace rather than a backtrace, like this:

  strace -f -o libmicrohttpd.log emerge libmicrohttpd

plus any other environment variables you need. It will be very long, but if you know what to look for, give me just the failing test's trace. I'm still wondering if something else is broken on your system, so can you reproduce this on *any* other system? I've been using x86 chroots.

Comment #17 (in reply to comment #16):
> can you reproduce this on *any* other system?

On a VIA EPIA2 system I can't reproduce this. It seems something is broken on the P4 system. I'll have a further look into it to fix it. For this reason I am closing this bug.
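Since strace -f prefixes every line of the output file with the pid that produced it, the failing test's portion of libmicrohttpd.log can be isolated mechanically. A sketch, using a fabricated log excerpt (real pids and syscall arguments will of course differ):

```shell
# Fabricated excerpt of what an `strace -f -o libmicrohttpd.log` run
# might contain (illustrative only; real logs are far longer).
cat > libmicrohttpd.log <<'EOF'
1234  execve("/bin/sh", ["/bin/sh"], [/* 40 vars */]) = 0
1300  execve("./tls_daemon_options_test", [...], [...]) = 0
1234  wait4(1300, ...) = 1300
1300  write(2, "Fatal error", 11) = 11
1300  --- SIGABRT (Aborted) ---
EOF

# Find the pid that exec'd the failing test binary...
pid=$(grep 'execve(".*tls_daemon_options_test"' libmicrohttpd.log \
      | awk '{print $1}' | head -n1)
# ...then show only that pid's lines.
grep "^$pid " libmicrohttpd.log
```

That keeps the posted trace down to just the process running the failing test, as requested.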