net-ftp/ftp 0.17-r3 uses getpass() from pwd.h, which is obsolete and should not be used according to the man page (man 3 getpass). After the PASS command it prompts for "Password:"; entering roughly 2000-4000 bytes of letters or any other data overflows the buffer. Since this is just an ftp client the overflow doesn't really matter on its own, because no privilege can be gained.

Reproducible: Always

Steps to Reproduce:
1. Run /usr/bin/ftp.
2. Connect to any server.
3. Input any username.
4. Input a large password (2000-4000+ bytes).

Actual Results:
The ftp client crashes with "Segmentation fault", because the EIP register is overwritten with the characters I submitted. So if EIP was overwritten with AAAA, it would try to read from 0x41414141, which it cannot.

Expected Results:
It should have submitted the password to the ftp server instead of crashing.
can I get a 2nd set of eyes to look this over? the error isn't in the password handling, it's in the generic command handling

when ftp is built with ssl support, the command() function in ftp.c is modified to include a static stack buffer of 2048 bytes and then writes to it with vsprintf() ... so if the user does something like:
PASS <3000 characters>
ftp poops all over itself

the part that I'd like double-checked is that the recvrequest() function also calls the command() function ... I want to make sure that a remote server cannot send a really long buffer and have it processed by the command() function
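for reference, a minimal sketch of the pattern being described, assuming a variadic command() -- this is illustrative only, not the actual ssl patch code:

    #include <stdarg.h>
    #include <stdio.h>

    /* Sketch of the vulnerable shape: a fixed 2048-byte stack buffer filled
     * with vsprintf(), which does no bounds checking.  If the formatted
     * output (e.g. "PASS " plus a 3000-character password) exceeds 2048
     * bytes, it runs past the buffer and clobbers the saved return address. */
    static int command(const char *fmt, ...)
    {
        char buf[2048];
        va_list ap;

        va_start(ap, fmt);
        vsprintf(buf, fmt, ap);   /* no length limit on the expansion */
        va_end(ap);

        /* ... buf is then sent down the control connection ... */
        return 0;
    }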
and along those same lines, command() should never be passed a string buffer from a remote server directly as the format argument, since command() processes it with vsprintf() and we wouldn't want the server to be able to shove a %n in our mouth
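to illustrate the calling convention that keeps that safe (hypothetical call sites building on the command() sketch above, not lifted from ftp.c):

    /* Hypothetical call sites: password / reply_from_server are placeholder
     * names, not variables from ftp.c.  The rule is that the format string
     * handed to command() must always be a literal we control; anything that
     * came off the network goes in as an argument, where vsprintf() treats it
     * as plain data rather than as format directives. */
    void send_login(const char *password, const char *reply_from_server)
    {
        command("PASS %s", password);   /* ok format-wise: a %n inside password
                                         * is never interpreted (length is a
                                         * separate problem, see above) */

        command(reply_from_server);     /* bad: a server-supplied %n or %s
                                         * would be interpreted as a directive */
    }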
(In reply to comment #1)
> can I get a 2nd set of eyes to look this over? the error isn't in the
> password handling, it's in the generic command handling
>
> when ftp is built with ssl support, the command() function in ftp.c is
> modified to include a static stack buffer of 2048 bytes and then writes
> to it with vsprintf() ... so if the user does something like:
> PASS <3000 characters>
> ftp poops all over itself
>
> the part that I'd like double-checked is that the recvrequest() function
> also calls the command() function ... I want to make sure that a remote
> server cannot send a really long buffer and have it processed by the
> command() function

Indeed, it is bad. I ran emerge --fetchonly ftp and got the source package for 0.17-r3, but I didn't find where the buffer is initialized to 2048 bytes in ftp.c. Have you tried contacting the Netkit developers? They would probably have more insight into this.
netkit-* packages are pretty much dead upstream. this buffer comes from an ssl patch we use in Gentoo; it isn't in the original netkit-ftp package. initializing the buffer is not a problem ... it's written to in command() every time ... the problem is that there isn't any bounds checking done on the input that is written to it
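a sketch of what a length-checked version could look like -- an assumption about the shape of the fix, not the patch that actually went into the tree:

    #include <stdarg.h>
    #include <stdio.h>

    static int command(const char *fmt, ...)
    {
        char buf[2048];
        va_list ap;
        int n;

        va_start(ap, fmt);
        /* vsnprintf() never writes past the end of buf and returns the length
         * the full expansion would have needed, so oversized input can be
         * rejected instead of silently smashing the stack. */
        n = vsnprintf(buf, sizeof(buf), fmt, ap);
        va_end(ap);

        if (n < 0 || (size_t)n >= sizeof(buf)) {
            fprintf(stderr, "ftp: command line too long\n");
            return 0;
        }

        /* ... send buf on the control connection as before ... */
        return 1;
    }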
This should be audited a little more before we release a patch. Setting to Auditing.
I checked every call to command(); I don't think it's possible for a server to overflow that buffer or initiate a format string attack. I guess this is just a regular non-security bug.
thanks Tavis, I'll just revbump it as such
0.17-r5 is now in portage with buffer length checks