gpg-agent sometimes crashes after the second query.

Reproducible: Sometimes

Steps to Reproduce:
1. eval `gpg-agent --daemon`
2. gpg --no-options --use-agent --no-tty --sign --local-user 59426429 -o /dev/null /dev/null
3. enter pin
4. gpg --no-options --use-agent --no-tty --sign --local-user 59426429 -o /dev/null /dev/null

Actual Results:
Sometimes gpg-agent processes the second query, but then crashes. On other occasions, the agent just works. This may be related to bug #204662.

Expected Results:
gpg-agent should continue running.

gnupg is compiled with CFLAGS="-g -ggdb".

$ paludis --info
paludis 0.26.0_alpha5

Paludis build information:
    Compiler:
        CXX:       i686-pc-linux-gnu-g++ 4.2.2 (Gentoo 4.2.2 p1.0)
        CXXFLAGS:  -march=prescott -O2 -fomit-frame-pointer -pipe
        LDFLAGS:
        DATE:      2008-01-05T16:52:25+0100
    Libraries:
        C++ Library: GNU libstdc++ 20071007
    Reduced Privs:
        reduced_uid:       1000
        reduced_uid->name: rsandner
        reduced_uid->dir:  /home/rsandner
        reduced_gid:       1000
        reduced_gid->name: rsandner
    Paths:
        DATADIR:          /usr/share
        LIBDIR:           /usr/lib
        LIBEXECDIR:       /usr/libexec
        SYSCONFDIR:       /etc
        PYTHONINSTALLDIR: /usr/lib/python2.5/site-packages
        RUBYINSTALLDIR:

Repository virtuals:
    format: virtuals

Repository installed-virtuals:
    format: installed_virtuals
    root: /

Repository gentoo:
    format: ebuild
    location: /usr/portage
    append_repository_name_to_write_cache: true
    builddir: /var/tmp/paludis
    cache: /usr/portage/metadata/cache
    distdir: /usr/portage/distfiles
    eapi_when_unknown: 0
    eapi_when_unspecified: 0
    eclassdirs: /usr/portage/eclass
    ignore_deprecated_profiles: false
    layout: traditional
    names_cache: /usr/portage/.cache/names
    newsdir: /usr/portage/metadata/news
    profile_eapi: 0
    profiles: /usr/portage/profiles/default-linux/x86/2006.1
    securitydir: /usr/portage/metadata/glsa
    setsdir: /usr/portage/sets
    sync: rsync://rsync.gentoo.org/gentoo-portage
    sync_options:
    use_manifest: use
    write_cache: /var/cache/paludis/metadata

Package information:
    app-admin/eselect-compiler: (none)
    app-shells/bash:            3.2_p33
    dev-java/java-config:       1.3.7 2.0.33-r1
    dev-lang/python:            2.5.1-r5
    dev-python/pycrypto:        2.0.1-r6
    dev-util/ccache:            (none)
    dev-util/confcache:         (none)
    sys-apps/baselayout:        1.12.10-r5
    sys-apps/sandbox:           1.2.18.1-r2
    sys-devel/autoconf:         2.13 2.61-r1
    sys-devel/automake:         1.10 1.4_p6 1.5 1.6.3 1.7.9-r1 1.8.5-r3 1.9.6-r2
    sys-devel/binutils:         2.18-r1
    sys-devel/gcc-config:       1.4.0-r4
    sys-devel/libtool:          1.5.24
    virtual/os-headers:         2.6.23-r3 (for sys-kernel/linux-headers::installed)
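The reproduction steps above can be sketched as a small script. This is a hypothetical helper, not part of the report: it assumes GnuPG 2.0.x, that key 59426429 exists in the local keyring, and that the PIN prompt can be answered (the original steps were run interactively).

```shell
#!/bin/sh
# Repro sketch for the intermittent gpg-agent crash (steps 1-4 above).

# 1. Start a fresh agent and import its environment (sets GPG_AGENT_INFO).
eval "$(gpg-agent --daemon)"

# 2./3. First signing query; gpg-agent prompts for the PIN here.
gpg --no-options --use-agent --no-tty --sign \
    --local-user 59426429 -o /dev/null /dev/null

# 4. Second signing query; the agent occasionally crashes at this point.
gpg --no-options --use-agent --no-tty --sign \
    --local-user 59426429 -o /dev/null /dev/null

# Check whether the agent survived. GPG_AGENT_INFO has the form
# <socket-path>:<pid>:<protocol-version>, so the PID is the second field.
agent_pid=$(printf '%s' "${GPG_AGENT_INFO}" | cut -d: -f2)
if kill -0 "${agent_pid}" 2>/dev/null; then
    echo "agent still alive"
else
    echo "agent gone (crashed?)"
fi
```

Running this in a loop is one way to catch an intermittent crash, since the report says it only happens sometimes.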
Created attachment 140905 [details] strace log of a crash
Created attachment 140907 [details] gpg-agent debugging log
$ echo gnupg libksba libassuan libgcrypt lib-gpgerror pth | xargs -n1 equery --quiet list
[I--] [  ] app-crypt/gnupg-2.0.7 (0)
[I--] [  ] dev-libs/libksba-1.0.2-r1 (0)
[I--] [ ~] dev-libs/libassuan-1.0.4 (0)
[I--] [ ~] dev-libs/libgcrypt-1.4.0-r1 (0)
[I--] [  ] dev-libs/libpthread-stubs-0.1 (0)
[I--] [ ~] dev-libs/pth-2.0.7 (0)
Missing from the info above: the gcc version and the glibc version.
[I--] [ ~] sys-devel/gcc-4.2.2 (4.2) [I--] [ ~] sys-libs/glibc-2.7-r1 (2.2)
Created attachment 140922 [details]
backtrace

Don't know if this helps...
Yes, it helps. But I cannot reproduce this! I upgraded to glibc-2.7 and everything still works correctly. Please sync portage: I added a debug USE flag to pth. Set it, then re-emerge pth and gnupg, and run it this way:

$ gpg-agent --daemon bash
$ echo ${GPG_AGENT_INFO}

In a second shell, set GPG_AGENT_INFO to that value:

$ export GPG_AGENT_INFO=<whatever>

Then do your stuff... I hope the debug log of pth will be helpful. Thanks!
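A sketch of the debug-run procedure described above, assuming pth was rebuilt with the debug USE flag. The socket path and PID shown are made-up examples; substitute whatever your `echo ${GPG_AGENT_INFO}` actually prints.

```shell
# --- Shell 1 ---
# Start the agent with a child bash so it runs in the foreground of this
# terminal and pth's debug output stays visible here.
gpg-agent --daemon bash
echo "${GPG_AGENT_INFO}"
# Prints something like (example only):
#   /tmp/gpg-AbCdEf/S.gpg-agent:12345:1
# i.e. <socket-path>:<agent-pid>:<protocol-version>

# --- Shell 2 ---
# Point this shell at the same agent, then repeat the signing queries
# from the original report until the crash shows up in shell 1's log.
export GPG_AGENT_INFO=/tmp/gpg-AbCdEf/S.gpg-agent:12345:1
gpg --no-options --use-agent --no-tty --sign \
    --local-user 59426429 -o /dev/null /dev/null
```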
I have a hard time reproducing the bug myself. Well, maybe the solution is to first re-emerge pth and then gnupg... I will do some more testing; feel free to close the bug for now. Thanks for the help.
Can you reproduce this every time on gnupg-2.0.8?
Negative: with both gnupg-2.0.8 and gnupg-2.0.7, gpg-agent works as expected at the moment. But the pth debugging messages are still shown, even after recompiling pth without the debug USE flag.
OK, just synced, and with pth-2.0.7-r1 everything is back to normal: no more debugging messages and a working gpg-agent.
As per comment #7, I cannot reproduce this. If you have a scenario that reliably triggers it, please post it and reopen. Thanks!