I'm not sure whether this is mainly an issue in glibc or in the affected FTP servers, and I'm unsure about the possible impact. See also (in German):
The same article, translated:
This is mainly a problem in (g)libc. The issue is that an FTP server (like any other program) cannot limit how much memory the affected functions use.
So make sure you have limits set (see ulimit -a) for your FTP servers, so that only the FTP server dies; otherwise you might trigger the OOM killer.
*** Bug 340555 has been marked as a duplicate of this bug. ***
the original advisory talks about bugs in GLOB_LIMIT handling, which is a BSD-specific option. the gnulib maintainers also agree with the sentiment that you should use the existing resource controls to "fix" this, i.e. you should be ulimiting all server processes anyway.
nothing to change in glibc/gnulib -> CLOSED:WORKSFORME
Please never ever close security bugs by yourself; IMHO this still needs attention and the security team will decide what to do with this.
My opinion on the issue: if I can remotely crash an FTP server from our stable tree with the default configuration we provide, it's a valid security bug and we need to do something about it.
Even if it's not regarded as a valid security problem, we might want to keep it open to track the fixing of GLOB_LIMIT.
It is definitely a design problem of the API in glibc. If you don't know and can't control how much time and memory a function will need, the function is rendered useless.
Having read the sentence "There are many functions in libc that can allocate an arbitrary amount of memory." on the glibc mailing list, I wonder which other functions have such problematic use cases through which they might allocate gigabytes of memory (from a valid call).
In my humble opinion, all those functions should be marked with a '#pragma warning' or something similar to warn developers.
Arguing that a developer should use a subprocess, set limits on it, and then handle the subprocess possibly being killed, as was suggested on the glibc mailing list, is like saying: go away and don't use that function.
Just to clarify my criticism of that API: I'm not talking about functions like malloc or qsort, where one can at least estimate how much memory or time is needed for a specific set of arguments.
The special problem with glob() is that it is almost impossible (for a human or a machine) to see or estimate how much time and memory will be used when evaluating a given pattern. So I see GLOB_LIMIT as a reasonable approach to solving that problem.
there is no such thing as "GLOB_LIMIT" in glibc, which means there is nothing to be fixed. if you have an ftp system which is overloaded and takes down a system, your system is misconfigured. you need to set correct usage limits on your exposed daemons.
as for adding such features/changes to sys-libs/glibc, it isn't going to happen. if you want to make glibc more "security conscious", then do so on your own time with the upstream developers on their development mailing lists.
As I said, "go away and don't use that function".
At least not in anything that can be used by someone who can feed input to glob() and whom you don't want to grant the power to kill the process calling it.
And by the way, don't forget to limit the resources of users too. FTP servers aren't the only applications using glob(). And don't forget to restart services if they get killed.
If I was the target of the last comment:
I never said that this needs to be fixed. The simple solution is to not use that function.
I've just posted some thoughts to warn others and shed some light on the problems that may arise when glob() is used.
1.5 years on and still no plans to do anything about this. let's close it out and be done already ...