Here's an ebuild for GlusterFS, a distributed file system.
Created attachment 110951 [details] ebuild for glusterfs My first ebuild, so feel free to improve however you see fit.
Created attachment 110963 [details] Updated ebuild

Ok, that last one didn't work quite as well as I'd expected. Setting the localstatedir to ${D}/var just confused the program later on... This version doesn't do that, but the /var/log/glusterfs directory will need to be made by hand before installation. Sorry, I couldn't figure out how to make it stop giving me access-denied errors.
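For what it's worth, the usual way to avoid that trap in an ebuild is to configure the real runtime path and let the image offset come from DESTDIR at install time. A minimal sketch (not the attached ebuild, just the idea):

```
src_configure() {
	# configure with the real runtime path, never ${D}/var:
	# the compiled-in paths must point at the live filesystem
	econf --localstatedir=/var
}

src_install() {
	# the image offset is applied only here, via DESTDIR
	emake DESTDIR="${D}" install || die "emake install failed"
}
```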
Cool. Since you've clearly tested glusterfs, could you tell us a bit about how it's worked for you? Do you think it's mature enough to go in the main tree, or should we stick it in an overlay? (Assuming the ebuild will get to a good enough point, I'm talking about the upstream software)
I've just finished setting it up, and wow. So far this looks great. It's easy to configure, (seemingly) fast, and in my first ten "touch" tests not a file has been lost. Actually, I've been looking for a way to do this kind of thing forever, so sorry if it sounds like an advertisement. My only beef with it is that it doesn't yet support software RAID. Any recommendations on benchmarking software (preferably something that tests reliability as well as speed)?
*** Bug 170375 has been marked as a duplicate of this bug. ***
Created attachment 112915 [details] updated ebuild to glusterfs-1.3.0_pre2.2
(In reply to comment #6)
> Created an attachment (id=112915) [edit]
> updated ebuild to glusterfs-1.3.0_pre2.2

Fails for me - it appears that the "configure" step never gets invoked, causing everything else to b0rk.

Emerging (2 of 2) sys-fs/glusterfs-1.3.0_pre2 to /
 * glusterfs-1.3.0-pre2.2.tar.gz MD5 ;-) ...        [ ok ]
 * glusterfs-1.3.0-pre2.2.tar.gz RMD160 ;-) ...     [ ok ]
 * glusterfs-1.3.0-pre2.2.tar.gz SHA1 ;-) ...       [ ok ]
 * glusterfs-1.3.0-pre2.2.tar.gz SHA256 ;-) ...     [ ok ]
 * glusterfs-1.3.0-pre2.2.tar.gz size ;-) ...       [ ok ]
 * checking ebuild checksums ;-) ...                [ ok ]
 * checking auxfile checksums ;-) ...               [ ok ]
 * checking miscfile checksums ;-) ...              [ ok ]
 * checking glusterfs-1.3.0-pre2.2.tar.gz ;-) ...   [ ok ]
>>> Unpacking source...
>>> Unpacking glusterfs-1.3.0-pre2.2.tar.gz to /var/tmp/portage/sys-fs/glusterfs-1.3.0_pre2/work
>>> Source unpacked.
>>> Compiling source in /var/tmp/portage/sys-fs/glusterfs-1.3.0_pre2 ...
>>> Source compiled.
>>> Test phase [not enabled]: sys-fs/glusterfs-1.3.0_pre2
>>> Install glusterfs-1.3.0_pre2 into /var/tmp/portage/sys-fs/glusterfs-1.3.0_pre2/image/ category sys-fs
make: *** No rule to make target `install'.  Stop.
!!! ERROR: sys-fs/glusterfs-1.3.0_pre2 failed.
Call stack:
  ebuild.sh, line 1614:  Called dyn_install
  ebuild.sh, line 1060:  Called qa_call 'src_install'
  environment, line 2968:  Called src_install
  glusterfs-1.3.0_pre2.ebuild, line 38:  Called die
!!! Install failed
!!! If you need support, post the topmost build error, and the call stack if relevant.
!!! A complete build log is located at '/var/log/portage/sys-fs:glusterfs-1.3.0_pre2:20070318-213959.log'.
!!! This ebuild is from an overlay: '/usr/local/portage'
I think some things changed at the source, so try deleting the distfile and emerging again.
Created attachment 113823 [details] Another updated ebuild

Not sure whether a decimal is allowed after the pre, so I just called it the same thing. It is still a new version though; updated from 1.3.0-pre2.2 to 1.3.0-pre2.3.
Still breaks for me in the same fashion.
Created attachment 113936 [details] ebuild now fixed for portage

Ok, so this one works with Portage (I've been using Paludis). The "cd ..." just needed to be added to the src_compile part instead of the src_unpack part. Have a lot of fun!
You're closer - the source now compiles, but the install step still fails:

>>> Source compiled. (No, really)
>>> Test phase [not enabled]: sys-fs/glusterfs-1.3.0_pre2
>>> Install glusterfs-1.3.0_pre2 into /var/tmp/portage/sys-fs/glusterfs-1.3.0_pre2/image/ category sys-fs
make: *** No rule to make target `install'.  Stop.
!!! ERROR: sys-fs/glusterfs-1.3.0_pre2 failed.
Call stack:
  ebuild.sh, line 1614:  Called dyn_install
  ebuild.sh, line 1060:  Called qa_call 'src_install'
  environment, line 2970:  Called src_install
  glusterfs-1.3.0_pre2.ebuild, line 38:  Called die
!!! Install failed
!!! If you need support, post the topmost build error, and the call stack if relevant.
!!! A complete build log is located at '/var/log/portage/sys-fs:glusterfs-1.3.0_pre2:20070321-190511.log'.
!!! This ebuild is from an overlay: '/usr/local/portage'
Created attachment 114026 [details] This one works for sure. This one works, I promise. I didn't fully test it last time, but this time I did. It seems portage wanted an extra cd ... in the src_install phase. Why? I don't know. No other package that I have seen does this... Oh well, here you go.
Latest build works great! Thanks!
Created attachment 117753 [details] glusterfs-1.3.0_pre2-r1 ebuild package
(In reply to comment #14)
> Latest build works great! Thanks!

Thanks Andy Romeril, it worked well for me. I modified it to come up with:
- a server init.d script,
- a client init.d script and its conf.d that defines the mountpoint,
- both the server and client init.d scripts tied to their corresponding configuration file in /etc/glusterfs. If the init.d script is named "abc", then its config file is assumed to be /etc/glusterfs/abc.vol,
- no more example configuration files installed in /examples,
- a shorter description.

Extract the ebuild archive like:

tar jxf glusterfs-1.3.0_pre2-r1.ebuild.tar.bz2 -C /usr/local/portage

...assuming that /usr/local/portage is one of your portage overlay directories. Then use the init.d scripts the usual way. Don't forget to write the matching configuration files in /etc/glusterfs.
A config file for each client is neither needed nor encouraged. Instead, one of the servers has a config file that it will distribute to all of the clients. It's specified in the server's config file under "option client-volume-filename". The client then gets its config file by specifying -s "serverIP". Ex. glusterfs -s 192.168.1.101 /glusterfs
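For illustration, a minimal sketch of what the server side of this might look like. The volume names and paths here are made up; the only detail taken from this thread is the client-volume-filename option, and the rest follows the GlusterFS 1.x volume-spec style:

```
# /etc/glusterfs/glusterfs-server.vol  (hypothetical example)
volume brick
  type storage/posix
  option directory /data/export              # assumed export path
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  # spec file that will be handed out to clients connecting with -s:
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  option auth.ip.brick.allow *               # assumed: allow all clients to "brick"
  subvolumes brick
end-volume
```

A client then needs no local spec file at all; it just points at the server as in the example above.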
Created attachment 117779 [details] Updated config files in tarball
(In reply to comment #16)
> (In reply to comment #14)
> > Latest build works great! Thanks!
>
> Thanks Andy Romeril, it worked well for me.

I meant "Thanks Paul", sorry.

Paul: OK, thank you once again! I didn't know that having one client config file per client was discouraged. Plus, having no client config file at all is really more convenient. I should have read the GlusterFS documentation more closely :-)
I don't know if it's mentioned in the docs, but I've been in their IRC channel a while and remember one of the devs saying something about it there. Also, if there is a way to make ebuilds fetch the source from tla, that would be best, as there have been many, many fixes -- including the examples folder in / -- since the last release (the next release is actually due out about now).
Created attachment 119420 [details] yet another update now at version 1.3.0-pre4
I would like to add to this bug that:
1 - I will start using GlusterFS in production in two weeks, after months of tests, and, at least for my needs, it's stable enough for ~x86 (although they will probably only release the stable 1.3.0 version next week). GlusterFS is simple and powerful.
2 - The ebuild here worked for me; I just had to change the version from _pre4 to _pre5, using the pre5.3 tar files.
Created attachment 125780 [details] ebuild for glusterfs (uses tla) This ebuild automatically fetches the latest source from their GNU Arch repository. Use with caution.
Created attachment 125782 [details] ebuild for glusterfs version 1.3_pre6 released 7/23/07
Created attachment 125784 [details] both the tla ebuild and 1.3_pre6 plus init.d/conf.d files
Created attachment 127701 [details] sys-fs/glusterfs/glusterfs-1.3.0.ebuild

Ebuild for 1.3.0. I did not install the init.d and conf.d stuff, but I changed the ebuild to better handle InfiniBand requirements and added the installation of the header files for GlusterFS.
Created attachment 130096 [details] Working ebuild for version 1.3.1

Based on Roman's 'glusterfs-1.3.0_pre2-r1 ebuild package'. What I changed:
- Added the init scripts' dependencies: client needs fuse, server before client, client after server.
- src_install(): added a mkdir ${D}/var/log/glusterfs. Without it the service will not start.
- Bumped the version to match upstream.

This version is pretty stable for me.
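As a side note, the idiomatic ebuild way to ship an empty log directory is keepdir rather than a bare mkdir under ${D}. A sketch of the idea (not a copy of the attached ebuild):

```
src_install() {
	emake DESTDIR="${D}" install || die "emake install failed"
	# keepdir registers the directory so Portage merges and keeps it even
	# though it is empty; a plain mkdir in the image can get pruned because
	# empty directories are not merged
	keepdir /var/log/glusterfs
}
```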
(In reply to comment #27)
> Created an attachment (id=130096) [edit]
> Working ebuild for version 1.3.1
>
> Based on Roman's 'glusterfs-1.3.0_pre2-r1 ebuild package'.
>
> What I changed:
>
> - Added the init script's dependencies: client needs fuse, server before
> client, client after server.
> - src_install(): added a mkdir ${D}/var/log/glusterfs. Without it the service
> will not start.

If you had looked at my 1.3.0 ebuild, you would have seen that this is already added.

> - upgraded version with upstream.
>
> This version is pretty stable for me.

I cannot test it. The ebuild you have attached is binary or in a format I cannot read.
(In reply to comment #28)
> I can not test it. The ebuild you have attached is in binary or in a format I
> can not read.

I'm very sorry, I forgot to say :)... It's a .tar.bz2.
(In reply to comment #29)
> (In reply to comment #28)
> > I can not test it. The ebuild you have attached is in binary or in a format I
> > can not read.
>
> I'm very sorry I forgot to tell :)...
> It's a .tar.bz2.

Okay. I downloaded it and checked it. Here is my feedback: I personally think your 1.3.1 ebuild is less advanced than the 1.3.0 ebuild I have posted. Your ebuild:
- does not honor the USE flags correctly:
  - >=sys-fs/fuse-2.6.3 is NOT needed if you just use the server USE flag
  - it does not check for verbs.h if you use the infiniband USE flag
- does not install the header files from libglusterfs
- misses dependencies on sys-devel/flex and sys-devel/bison
- does not install the sample volume specification
(In reply to comment #30)
> I personally think your 1.3.1 ebuild is less advanced then the 1.3.0 ebuild I
> have posted. Your ebuild does:
> - Not honor the USE flags correctly:
>   - >=sys-fs/fuse-2.6.3 is NOT needed if you just use the server USE flag
>   - It does not check for verbs.h if you use the infiniband USE flag
> - Not install the header files from libglusterfs
> - Misses dependencies to sys-devel/flex and sys-devel/bison
> - Does not install the sample volume specification

Agreed! Although I have long experience with Gentoo, Portage, Catalyst, etc., I'm new to ebuilds. Yours is definitely more advanced. I'll start using it here (uncommenting the init script's lines). Thanks!
Created attachment 144150 [details] Merged work of steveb and Daniel, improved(?) init scripts

I merged the ebuilds from Steveb and Daniel:
- 1.3.7 marked as "stable", 9999 (tla) masked, for an easier emerge choice
- small change in the server init script
- bigger change in the client init script, including support for server-side or client-local configuration, completely configurable from conf.d

Not much tested so far ;)
Savannah has changed their hostname from sv to savannah, so anyone using the tla version will need to change the ETLA_ARCHIVES line _only_ to arch.savannah.gnu.org. Do not change the ETLA_VERSION line.
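In other words, something like this in the tla ebuild. Only the hostname change (sv to arch.savannah.gnu.org) comes from the comment above; the exact ETLA_ARCHIVES value below is a hypothetical sketch, so adapt it to whatever your copy of the ebuild currently contains:

```
# hypothetical sketch: update only the archive host...
ETLA_ARCHIVES="http://arch.savannah.gnu.org/archives/glusterfs"
# ...and leave ETLA_VERSION exactly as it was
```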
Created attachment 157681 [details] update of glusterfs ebuild

Attached is an update of the glusterfs ebuild to version 1.3.9. I have made a few modifications to the ebuild:
- removed the 'server' USE flag as the server is always built. The fuse-client code is optional so that makes sense to have as a USE flag.
- commented out the MOUNTPOINT and SERVER lines in glusterfs-client.confd. This will force people to properly decide the defaults rather than have a possibly incorrect default. It should not be assumed that a server will be running on the localhost.
- added code to create the MOUNTPOINT in glusterfs-client.initd if it does not exist.
- removed explicit installation of the sample configurations as they are installed by default.
- changed 'x86' to '~x86' in the KEYWORDS field as that seems appropriate for an ebuild that is not yet in portage.
(In reply to comment #34)
> - removed the 'server' USE flag as the server is always built. The fuse-client
> code is optional so that makes sense to have as a USE flag.

To clarify this a bit, beginning with version 1.3.8 the server and client processes are the same binary, not two distinct server and client binaries. However, there is still an optional fuse-client, hence the client USE flag is still valid. With the same binary for the server and client processes it is possible to use a unified server/client config file. This would translate into only needing one initd file as well. However, I was not able to get this mode to work reliably in my testing. With a unified config file, sometimes the file system came up correctly, but more often than not it did not. So for now, it is probably best to keep separate configurations for server and client and have two glusterfs processes running on a host.
I am working with an ebuild similar to the ones already posted. However, I am trying to run eautoreconf and the build fails. So far it seems that "-shared" is not added to any compile commands and that breaks everything. Any help would be appreciated.
Created attachment 168144 [details] ebuild for glusterfs 1.3.12
Alternative glusterfs-1.3.12.ebuild http://bazaar.launchpad.net/%7Ebberberov/%2Bjunk/dev-overlay/files/7?file_id=glusterfs-20081017085415-n5a1c7ev3dcd7l5w-1 It is still a work in progress. It will compile, but I have not done any testing.
Any chance we can get this added to the main portage tree? If not, what's holding it back? Thanks
Basically having a maintainer and possibly having somebody with an infiniband setup for the related dep.
(In reply to comment #40)
> Basically having a maintainer and possibly having somebody with an infiniband
> setup for the related dep.

I can test it on IB-enabled hardware, but last time I did it, it wasn't stable enough.
Created attachment 178899 [details] 2.0rc1 ebuild

I started hacking a bit: removed infiniband for now, and made the fuse-client switch depend on the fuse USE flag instead of the client USE flag.
glusterfs does not depend on ntp or ntpclient, so you should remove that dependency; it is not needed, and you can run glusterfs without them. Of course you should keep the clocks in sync across your glusterfs cluster, but whether and how you do it is totally up to you (why not use openntpd, or run rdate from cron?). A proper message during emerge is enough.
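A sketch of how the ebuild could advise instead of hard-depending on a specific NTP implementation (the wording is assumed, not taken from any attached ebuild):

```
pkg_postinst() {
	# warn at emerge time instead of forcing an ntp dependency
	ewarn "GlusterFS needs the clocks kept in sync across all of your servers."
	ewarn "Set up ntpd, openntpd, or an rdate cron job before running GlusterFS."
}
```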
that's right, I overlooked that part.
I've been using that new 2.0.0_rc1 ebuild for a few days now... seems to work a treat. Have only tried the most basic of file server configurations, no HA stuff, but haven't found any issues with it.
I'll try to update it a bit more. alexxy, would you help me with the infiniband side?
Updating the link from comment #38 for the newer version: http://bazaar.launchpad.net/%7Ebberberov/%2Bjunk/dev-overlay/files/head%3A/sys-cluster/glusterfs/
I got a syntax error in your init script, so I made this change and now it works for me without errors:

=== modified file 'net-fs/glusterfs/files/glusterfs-1.3.10.initd'
--- net-fs/glusterfs/files/glusterfs-1.3.10.initd	2009-01-21 23:55:10 +0000
+++ net-fs/glusterfs/files/glusterfs-1.3.10.initd	2009-01-29 22:31:50 +0000
@@ -51,7 +51,7 @@
 		fi
 	fi
 
-	if test ( -n "${GLUSTERFS_MOUNTPOINT}" ) -a ( ! -d "${GLUSTERFS_MOUNTPOINT}" )
+	if [ -n "${GLUSTERFS_MOUNTPOINT}" ] || [ ! -d "${GLUSTERFS_MOUNTPOINT}" ]
 	then
 		eerror "The mountpoint ${GLUSTERFS_MOUNTPOINT} does not exist"
 		return 1
typo, should be && not ||
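Putting the two comments above together, here is a runnable sketch of the corrected condition (the function name and messages are made up for illustration):

```shell
#!/bin/sh
# Corrected check: complain only when the mountpoint variable is set AND
# the directory is missing -- hence && rather than || between the tests.
check_mountpoint() {
    if [ -n "$1" ] && [ ! -d "$1" ]; then
        echo "error: mountpoint $1 does not exist" >&2
        return 1
    fi
    return 0
}

check_mountpoint /tmp && echo "existing dir ok"
check_mountpoint ""   && echo "unset var skipped"
```

With an unset variable or an existing directory the check passes; it only fails for a path that is set but missing, which is what the init script intended.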
Argh, I need to stop posting when I'm half asleep. I'm still getting errors with setting GLUSTERFS_MOUNTPOINT; I'm going to look at it tomorrow. Sorry for spamming this bug.
I too had to make some changes to the init script... I can't remember what I changed to make it work, so I'll just post the version of the file I have.
Created attachment 180179 [details] init script for 2.0.0_rc1
Łukasz, I updated the initd file. There may be errors elsewhere however.
Two typos:
1. sed s/unmount/umount
2. --pidfile is a start-stop-daemon option

=== modified file 'sys-cluster/glusterfs/files/glusterfs-1.3.10.initd'
--- sys-cluster/glusterfs/files/glusterfs-1.3.10.initd	2009-01-31 17:40:36 +0000
+++ sys-cluster/glusterfs/files/glusterfs-1.3.10.initd	2009-01-31 21:38:29 +0000
@@ -77,8 +77,8 @@
 	then
 		einfo "Using server supplied volume specification file"
 		start-stop-daemon --start --pidfile ${GLUSTERFS_PIDFILE} \
-			--exec /usr/sbin/glusterfs -- \
-			--pidfile=${GLUSTERFS_PIDFILE} \
+			--exec /usr/sbin/glusterfs \
+			--pidfile=${GLUSTERFS_PIDFILE} -- \
 			--log-file=${GLUSTERFS_LOGFILE} \
 			--server=${GLUSTERFS_SERVER} \
 			--port=${GLUSTERFS_PORT} \
@@ -88,8 +88,8 @@
 	else
 		einfo "Using local volume specification file"
 		start-stop-daemon --start --pidfile ${GLUSTERFS_PIDFILE} \
-			--exec /usr/sbin/glusterfs -- \
-			--pidfile=${GLUSTERFS_PIDFILE} \
+			--exec /usr/sbin/glusterfs \
+			--pidfile=${GLUSTERFS_PIDFILE} -- \
 			--log-file=${GLUSTERFS_LOGFILE} \
 			--spec-file=${GLUSTERFS_SPEC} \
 			${GLUSTERFS_OPTS} ${GLUSTERFS_MOUNTPOINT}
@@ -112,7 +112,7 @@
 		status="$?"
 	else
 		einfo "Unmounting ${GLUSTERFS_MOUNTPOINT} ..."
-		unmount "${GLUSTERFS_MOUNTPOINT}"
+		umount "${GLUSTERFS_MOUNTPOINT}"
 		status="$?"
 	fi
 	eoutdent
If you want to pass the pid file location to gluster, use: --pid-file=PIDFILE
Didn't notice that first --pidfile arg; it should be:

=== modified file 'sys-cluster/glusterfs/files/glusterfs-1.3.10.initd'
--- sys-cluster/glusterfs/files/glusterfs-1.3.10.initd	2009-01-31 17:40:36 +0000
+++ sys-cluster/glusterfs/files/glusterfs-1.3.10.initd	2009-01-31 21:45:11 +0000
@@ -78,20 +78,20 @@
 		einfo "Using server supplied volume specification file"
 		start-stop-daemon --start --pidfile ${GLUSTERFS_PIDFILE} \
 			--exec /usr/sbin/glusterfs -- \
-			--pidfile=${GLUSTERFS_PIDFILE} \
 			--log-file=${GLUSTERFS_LOGFILE} \
 			--server=${GLUSTERFS_SERVER} \
 			--port=${GLUSTERFS_PORT} \
 			--transport=${GLUSTERFS_TRANSPORT} \
+			--pid-file=${GLUSTERFS_PIDFILE} \
 			${GLUSTERFS_OPTS} ${GLUSTERFS_MOUNTPOINT}
 		status="$?"
 	else
 		einfo "Using local volume specification file"
 		start-stop-daemon --start --pidfile ${GLUSTERFS_PIDFILE} \
 			--exec /usr/sbin/glusterfs -- \
-			--pidfile=${GLUSTERFS_PIDFILE} \
 			--log-file=${GLUSTERFS_LOGFILE} \
 			--spec-file=${GLUSTERFS_SPEC} \
+			--pid-file=${GLUSTERFS_PIDFILE} \
 			${GLUSTERFS_OPTS} ${GLUSTERFS_MOUNTPOINT}
 		status="$?"
 	fi
@@ -112,7 +112,7 @@
 		status="$?"
 	else
 		einfo "Unmounting ${GLUSTERFS_MOUNTPOINT} ..."
-		unmount "${GLUSTERFS_MOUNTPOINT}"
+		umount "${GLUSTERFS_MOUNTPOINT}"
 		status="$?"
 	fi
 	eoutdent
Łukasz, Fixed. 1.3.12 and 2.0.0_rc1 use different options. I should have double-checked that earlier.
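To summarize the start-stop-daemon convention these diffs rely on: options before the bare "--" are consumed by start-stop-daemon itself, and everything after it is handed to the daemon. A sketch using the same variable names as the init scripts in this thread:

```
start-stop-daemon --start --pidfile ${GLUSTERFS_PIDFILE} \
	--exec /usr/sbin/glusterfs -- \
	--log-file=${GLUSTERFS_LOGFILE} \
	--pid-file=${GLUSTERFS_PIDFILE} \
	${GLUSTERFS_OPTS} ${GLUSTERFS_MOUNTPOINT}
# before "--": start-stop-daemon's own bookkeeping (--pidfile, --exec)
# after  "--": options passed to glusterfs, including its own --pid-file
```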
glusterfs-2.0.0rc2 is out. Does anybody have a preference for the plain glusterfs init script from Dan or the previous, more complex one? I'm tempted to mediate using an openssl-like approach.
Also, glusterfs source can now be retrieved with git, so there are no more worries about tla being out of the tree. I'll make an ebuild for that as soon as I have time to figure it out. Or, if someone else wants to do it, here is the information that I have:

Message: 1
Date: Sun, 22 Feb 2009 04:10:13 +0530
From: Anand Avati <avati@gluster.com>
Subject: [Gluster-devel] migration to git
To: gluster-devel@nongnu.org, gluster-users@gluster.org
Message-ID: <8bd4838e0902211440p58b354c8w33b72a2db6f5211b@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

GlusterFS source repository has been migrated to git. Source code can be checked out with

sh$ git clone git://git.sv.gnu.org/gluster.git glusterfs

The same has been updated on http://www.gluster.org/download.php. It would be appreciated if all future patch submissions follow the 'git format-patch' style.

Thanks,
Avati
I have tested the "alternative" ebuild from Boian Berberov and found it a bit better. And first of all, all the files (around the ebuild) are in one place at that URL.
I updated my ebuild to 2.0.0_rc2. I added a workaround for a parallel make problem and added doc and examples USE flags. I also added a new GIT ebuild. The link from comment #47 still works. If anyone has suggestions for a better parallel make workaround, improvements to the init script or anything else, feel free to post them.
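For reference, the parallel make workaround amounts to serializing the library build before letting the rest of the tree build in parallel. A sketch of the idea (not a verbatim copy of the attached ebuild):

```
src_compile() {
	# workaround for a parallel make race: build libglusterfs alone first,
	# then build the rest of the tree with the user's -j setting
	emake -j1 -C libglusterfs || die "emake libglusterfs failed"
	emake || die "emake failed"
}
```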
Thanks, Boian, for the git ebuild. I do have a question about the ebuilds: what do you mean by the comment "Currently in @system"?
Also, I think it would help the devs a lot if by default we set gcc to -O0 and -ggdb (I don't know if that one's needed) for the git version.
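One way to do that without touching the ebuild would be a per-package environment override. The paths below follow Portage's /etc/portage/env convention; treat the exact file name and values as an assumption:

```
# /etc/portage/env/sys-cluster/glusterfs-9999  (hypothetical category/version)
CFLAGS="-O0 -ggdb"
CXXFLAGS="${CFLAGS}"
FEATURES="nostrip"   # keep debug symbols in the installed binaries
```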
For parallel make issues have you tried reading through http://blog.flameeyes.eu/tag/foraparallelworld ? If you still have problems with parallel make feel free to drop me a direct email and I'll take a look.
Paul, No problem. If I can spend more time on build testing, other people can spend more time on run-time testing. "Currently in @system" is a note to me that what follows was in the system set when I checked for it, so I do not look in the profiles too often. I am unfamiliar with BSD so I added it there. I can add a debug USE flag and see if there is anything in the configure/Makefiles about that. I would not want to force or strip FLAGS unless I know it is policy to do so for debugging, or a dev tells me to do it. --------------------------------------- Diego, I have read some of your blog posts. I do not think there is a build problem now, but there may be a better workaround. Maybe I can make, test and send a patch upstream, but I have not looked at that yet.
Created attachment 184027 [details] Glusterfs-2.0.0_rc2 ebuild

Attached is an ebuild for the latest glusterfs-2.0.0_rc2 :-) It works out of the box with the example "Standalone Storage" config files included, as per this page: http://www.gluster.org/docs/index.php/GlusterFS#Configuration

Install it and then run /etc/init.d/glusterfsd start to start the daemon. Then mount (on the same host) as follows:

mount -t glusterfs 127.0.0.1 /my/mount/path

Hope this can help someone create a proper ebuild :-)

Cheers, Rich
I noticed that GlusterFS ebuild is available via the "science" overlay. Is it possible to make the latest versions of the ebuild available in this overlay?
I recently found out about bug 221311. So is sys-devel/flex a valid DEPEND for glusterfs and other ebuilds, or should I just add an additional comment about that bug? Should mod_glusterfs for apache and lighttpd be built by separate ebuilds, like www-apache/mod_glusterfs and maybe www-lighttpd/mod_glusterfs?
Created attachment 186994 [details] glusterfs-2.0.0_rc7.ebuild

This is an ebuild for the latest v2 release candidate 7. It did not work with the existing patch "glusterfs-2.0.0_rc1-docdir.patch", so see my next attachment for "glusterfs-2.0.0_rc7-docdir.patch".

Based on "glusterfs-2.0.0_rc2.ebuild" by Boian Berberov with some changes:

--- glusterfs-2.0.0_rc2.ebuild	2009-04-01 17:18:29.005084693 +0100
+++ glusterfs-2.0.0_rc7.ebuild	2009-04-01 17:18:29.008417155 +0100
@@ -18,31 +18,29 @@
 LICENSE="GPL-3"
 
 KEYWORDS="~amd64 ~ppc ~ppc64"
-# Disabled
-# infiniband
-IUSE="berkdb doc examples fuse"
+IUSE="berkdb doc examples fuse -apache2 -infiniband"
 
-# Disabled
-# apache2? ( >=www-servers/apache-2.2 )
 # Currently in @system
 # kernel_FreeBSD? ( sys-freebsd/freebsd-libexec )
+
 COMMON_DEPEND="
-	berkdb? ( >=sys-libs/db-4.6.21 )
-	fuse? ( >=sys-fs/fuse-2.7.3 )
+	berkdb? ( >=sys-libs/db-4.6.21 )
+	fuse? ( >=sys-fs/fuse-2.7.0 )
+	apache2? ( >=www-servers/apache-2.2 )
+	infiniband? ( sys-cluster/libibverbs )
 "
 # Currently in @system
 # sys-devel/bison
 # sys-devel/flex
 DEPEND="${COMMON_DEPEND}"
 # Disabled
-# infiniband? ( sys-cluster/libibverbs )
 RDEPEND="${COMMON_DEPEND}"
 
 RESTRICT="mirror"
 S="${WORKDIR}/${MY_P}"
 
 src_prepare() {
-	epatch "${FILESDIR}/${PN}-2.0.0_rc1-docdir.patch"
+	epatch "${FILESDIR}/${PN}-2.0.0_rc7-docdir.patch"
 	epatch "${FILESDIR}/${PN}-2.0.0_rc1-run-and-log-directories.patch"
@@ -65,15 +63,12 @@
 }
 
 src_configure() {
-# Disabled
-#		$(use_enable apache2 mod_glusterfs)
-#		$(use_enable infiniband ibverbs)
 	local myconf="
-		--disable-mod_glusterfs
 		$(use_enable berkdb bdb)
 		$(use_enable fuse fuse-client)
-		--disable-ibverbs
-		--enable-libglusterfsclient
+		$(use_enable fuse libglusterfsclient)
+		$(use_enable apache2 mod_glusterfs)
+		$(use_enable infiniband ibverbs)
 	"
 
 	econf --config-cache --disable-static ${myconf} \
Created attachment 186995 [details, diff] glusterfs-2.0.0_rc7-docdir.patch for use with glusterfs-2.0.0_rc7.ebuild

This is the modified "glusterfs-2.0.0_rc1-docdir.patch", called "glusterfs-2.0.0_rc7-docdir.patch", to go with "glusterfs-2.0.0_rc7.ebuild" above. The differences between the two patches are:

--- glusterfs-2.0.0_rc1-docdir.patch	2009-04-01 17:18:28.981750121 +0100
+++ glusterfs-2.0.0_rc7-docdir.patch	2009-04-01 17:18:28.998416914 +0100
@@ -13,6 +13,6 @@
 @@ -1,5 +1,3 @@
 - -docdir = $(datadir)/doc/glusterfs/
 -
- EmacsModedir = $(docdir)/
- EmacsMode_DATA = glusterfs-mode.el
+ EditorModedir = $(docdir)/
+ EditorMode_DATA = glusterfs-mode.el
Seems we can add it to science overlay =)
Created attachment 191364 [details] version glusterfs-2.0.1 ebuild
Created attachment 191371 [details] glusterfs-2.0.1.ebuild cleanup

@@ -2,92 +2,93 @@
 # Distributed under the terms of the GNU General Public License v2
 # $Header: $
 
-EAPI="2"
-
-inherit autotools eutils versionator
+inherit autotools apache-module elisp-common eutils multilib versionator
 
 DESCRIPTION="GlusterFS is a powerful network/cluster filesystem"
 HOMEPAGE="http://www.gluster.org/"
+SRC_URI="http://ftp.gluster.com/pub/gluster/${PN}/$(get_version_component_range '1-2')/${PV}/${P}.tar.gz"
 
-SLOT="0"
-MY_PV="$(replace_version_separator '_' '')"
-MY_PV_2="$(get_version_component_range "1-2")"
-MY_PV_3="$(get_version_component_range "1-3")"
-MY_P="${PN}-${MY_PV}"
-SRC_URI="http://europe.gluster.org/${PN}/${MY_PV_2}/${MY_PV_3}/${MY_P}.tar.gz"
 LICENSE="GPL-3"
-
-KEYWORDS="~amd64 ~ppc ~ppc64"
-IUSE="berkdb doc examples fuse -apache2 -infiniband"
-
-# Currently in @system
-# kernel_FreeBSD? ( sys-freebsd/freebsd-libexec )
-
-COMMON_DEPEND="
-	berkdb? ( >=sys-libs/db-4.6.21 )
-	fuse? ( >=sys-fs/fuse-2.7.0 )
-	apache2? ( >=www-servers/apache-2.2 )
-	infiniband? ( sys-cluster/libibverbs )
-"
-# Currently in @system
-# sys-devel/bison
-# sys-devel/flex
-DEPEND="${COMMON_DEPEND}"
-# Disabled
-RDEPEND="${COMMON_DEPEND}"
-
-RESTRICT="mirror"
-S="${WORKDIR}/${MY_P}"
-
-src_prepare() {
-	epatch "${FILESDIR}/${PN}-2.0.0_rc7-docdir.patch"
-	epatch "${FILESDIR}/${PN}-2.0.0_rc1-run-and-log-directories.patch"
+SLOT="0"
+KEYWORDS="~amd64 ~ppc ~ppc64 ~x86"
+IUSE="berkdb doc emacs examples fuse static vim-syntax"
+#IUSE="berkdb doc emacs examples fuse infiniband static vim-syntax"
+
+DEPEND="berkdb? ( >=sys-libs/db-4.6.21 )
+	emacs? ( virtual/emacs )
+	fuse? ( >=sys-fs/fuse-2.7.0 )"
+#	infiniband? ( sys-cluster/libibverbs )
+RDEPEND="${RDEPEND}"
+
+SITEFILE="50${PN}-mode-gentoo.el"
+
+APXS2_S="${S}/mod_glusterfs/apache/2.2/src"
+APACHE2_MOD_FILE="${APXS2_S}/.libs/mod_${PN}.so"
+APACHE2_MOD_CONF="70_mod_${PN}"
+APACHE2_MOD_DEFINE="GLUSTERFS"
+APACHE2_DOCFILES="README.txt"
+want_apache2_2
+
+src_unpack() {
+	unpack ${A}
+	cd "${S}"
+
+	epatch "${FILESDIR}/${P}-gentoo.patch"
+	epatch "${FILESDIR}/${P}-parallel-make.patch"
+	epatch "${FILESDIR}/${P}-apache2.patch"
+	epatch "${FILESDIR}/${P}-apxs.patch"
 
 	if ! use doc; then
-		ebegin "Applying sed remove-guides-from-Makefile.am-patch"
-		sed -i -e '/SUBDIRS =/s/ [a-z]*\-guide//g' \
+		sed -i -e '/SUBDIRS =/s/ [a-z]*-guide//g' \
 			doc/Makefile.am \
 			|| die "sed remove-guides-from-Makefile.am-patch"
-		eend 0
 	fi
 
 	if ! use examples; then
-		ebegin "Applying sed remove-examples-from-Makefile.am-patch"
 		sed -i -e '/SUBDIRS =/s/ examples//' \
 			doc/Makefile.am \
 			|| die "sed remove-examples-from-Makefile.am-patch"
-		eend 0
 	fi
 
-	eautoreconf || die "eautoreconf failed"
-}
-
-src_configure() {
-	local myconf="
-		$(use_enable berkdb bdb)
-		$(use_enable fuse fuse-client)
-		$(use_enable fuse libglusterfsclient)
-		$(use_enable apache2 mod_glusterfs)
-		$(use_enable infiniband ibverbs)
-	"
-
-	econf --config-cache --disable-static ${myconf} \
-		--localstatedir=/var --docdir=/usr/share/doc/${PF}
+	eautoreconf
 }
 
 src_compile() {
-	# Parallel make workaround
-	cd "${S}/libglusterfs" && emake -j1 || die "emake failed"
-	cd "${S}" && emake || die "emake failed"
+	econf \
+		$(use_enable berkdb bdb) \
+		$(use_enable fuse fuse-client) \
+		$(use_enable apache2 mod_glusterfs) \
+		$(use_enable static) \
+		--localstatedir=/var ||die
+#		$(use_enable infiniband ibverbs) \
+	emake || die
+	# use apache2 && apache-module_src_compile
+	if use emacs ; then
+		elisp-compile extras/glusterfs-mode.el || die "elisp-compile failed"
+	fi
 }
 
 src_install() {
-	emake DESTDIR="${D}" LIBTOOLFLAGS="--quiet" -j1 install || die "emake install failed"
+	emake DESTDIR="${D}" docdir="/usr/share/doc/${PF}/extras" install || die
+
+	if use apache2 ; then
+		apache-module_src_install
+		rm -rf "${D}/usr/$(get_libdir)/glusterfs/${PV}/apache"
+	fi
+
+	if use emacs ; then
+		elisp-install ${PN} extras/glusterfs-mode.el* || die "elisp-install failed"
+		elisp-site-file-install "${FILESDIR}/${SITEFILE}"
+	fi
+
+	if use vim-syntax ; then
+		insinto /usr/share/vim/vimfiles/ftdetect; doins "${FILESDIR}/glusterfs.vim" || die
+		insinto /usr/share/vim/vimfiles/syntax; doins extras/glusterfs.vim || die
+	fi
 
 	dodoc AUTHORS ChangeLog NEWS README THANKS || die "dodoc failed"
-	newinitd "${FILESDIR}/${PN}-2.0.0_rc1.initd" "${PN}" || die "newinitd failed"
-	newconfd "${FILESDIR}/${PN}-2.0.0_rc1.confd" "${PN}" || die "newconfd failed"
+	newinitd "${FILESDIR}/${P}.initd" glusterfsd || die "newinitd failed"
 
 	keepdir /var/log/${PN} || die "keepdir failed"
 }
@@ -96,13 +97,12 @@
 	elog "The glusterfs startup script can be multiplexed."
 	elog "The default startup script uses /etc/conf.d/glusterfs to configure the"
 	elog "separate service. To create additional instances of the glusterfs service"
-	elog "simply create a symlink to the glusterfs startup script that is prefixed"
-	elog "with \"glusterfs.\""
+	elog "simply create a symlink to the glusterfs startup script."
 	elog
 	elog "Example:"
-	elog "	# cd /etc/init.d"
-	elog "	# ln -s glusterfs glusterfs.client"
-	elog "You can now treat glusterfs.client like any other service"
+	elog "	# ln -s glusterfsd /etc/init.d/glusterfsd2"
+	elog "	# ${EDITOR} /etc/glusterfs/glusterfsd2.vol"
+	elog "You can now treat glusterfsd2 like any other service"
 	echo
 	elog "You can mount exported GlusterFS filesystems through /etc/fstab instead of"
 	elog "through a startup script instance. For more information visit:"
@@ -111,4 +111,12 @@
 	ewarn "You need to use a ntp client to keep the clocks synchronized across all"
 	ewarn "of your servers. Setup a NTP synchronizing service before attempting to"
 	ewarn "run GlusterFS."
+
+	use apache2 && apache-module_pkg_postinst
+
+	use emacs && elisp-site-regen
+}
+
+pkg_postrm() {
+	use emacs && elisp-site-regen
 }
Created attachment 191372 [details] files/50glusterfs-mode-gentoo.el
Created attachment 191374 [details] files/70_mod_glusterfs.conf
Created attachment 191375 [details] files/glusterfs.vim
Created attachment 191376 [details] files/glusterfs-2.0.1.initd
Created attachment 191377 [details, diff] files/glusterfs-2.0.1-apache2.patch
Created attachment 191378 [details, diff] files/glusterfs-2.0.1-apxs.patch
Created attachment 191379 [details, diff] files/glusterfs-2.0.1-gentoo.patch
Created attachment 191380 [details, diff] files/glusterfs-2.0.1-parallel-make.patch
Created attachment 191382 [details] glusterfs-2.0.1.ebuild
May I ask why has "$(use_enable infiniband ibverbs)" been commented out in the latest glusterfs-2.0.1.ebuild? John (In reply to comment #82) > Created an attachment (id=191382) [edit] > glusterfs-2.0.1.ebuild >
(In reply to comment #83)
> May I ask why "$(use_enable infiniband ibverbs)" has been commented out in
> the latest glusterfs-2.0.1.ebuild?
>
Because sys-cluster/libibverbs is not in the official portage tree yet.

glusterfs-2.0.1.ebuild is in my git overlay:
http://git.overlays.gentoo.org/gitweb/?p=dev/matsuu.git;a=tree;f=net-fs/glusterfs
(In reply to comment #84)
> (In reply to comment #83)
> > May I ask why "$(use_enable infiniband ibverbs)" has been commented out in
> > the latest glusterfs-2.0.1.ebuild?
> >
> Because sys-cluster/libibverbs is not in the official portage tree yet.
>
> glusterfs-2.0.1.ebuild is in my git overlay:
> http://git.overlays.gentoo.org/gitweb/?p=dev/matsuu.git;a=tree;f=net-fs/glusterfs
>
Ebuild added to the sci overlay with IB support, since the sci overlay contains the IB stack =)
Since there is enough interest, what about adding it to portage now? 2.0.1 looks stable enough, and the fuse patch no longer looks necessary.
The glusterfs ebuild in the science overlay doesn't have infiniband in the IUSE variable; that one is still commented out.

(In reply to comment #85)
>
> Ebuild added to the sci overlay with IB support, since the sci overlay
> contains the IB stack =)
>
(In reply to comment #87)
> The glusterfs ebuild in the science overlay doesn't have infiniband in the
> IUSE variable; that one is still commented out.
>
> (In reply to comment #85)
> >
> > Ebuild added to the sci overlay with IB support, since the sci overlay
> > contains the IB stack =)
> >
>
USE flag added
(In reply to comment #86)
> Since there is enough interest, what about adding it to portage now? 2.0.1
> looks stable enough, and the fuse patch no longer looks necessary.
>
I can add it to the tree, but without infiniband. I use this FS in production on an HPC cluster.
(In reply to comment #86)
> Since there is enough interest, what about adding it to portage now? 2.0.1
> looks stable enough, and the fuse patch no longer looks necessary.
>
Although a fuse patch is not mandatory, I think it may still be recommended for peak performance. Is it possible to get an ebuild to patch the kernel source for fuse, recompile the fuse module, and install it?
Last time I looked at it, most of the upstream .29 fuse had already implemented file locking and raised the number of background requests a bit. What's missing is support for O_DIRECT, which looks somewhat problematic. The patch itself doesn't apply to Linux fuse at all. I'm thinking about making FUSE_MAX_BACKGROUND a module option instead.
Created attachment 195121 [details] glusterfs-2.0.2.ebuild bump
Created attachment 195123 [details, diff] files/glusterfs-2.0.2-docdir.patch
Created attachment 195463 [details, diff] Allow different service names/config files like OpenVPN does

This patches the init.d script to use symlinked service names, as openvpn and postfix (and others) do. To use it, create a volume file /etc/glusterfs/service-name.vol, then symlink `ln -s /etc/init.d/glusterfs /etc/init.d/glusterfs.service-name`, then run `/etc/init.d/glusterfs.service-name start`.

Output looks like:
carbon ~ # /etc/init.d/glusterfsd.central-server start
 * Starting glusterfsd.central-server ...

Thanks!
There are some references to the science overlay, and [science] is in the subject, but I didn't find a glusterfs ebuild there.

It is in the Matsuu overlay, but not the latest version. Matsuu, can you update it? Thanks
(In reply to comment #95)
> There are some references to the science overlay, and [science] is in the
> subject, but I didn't find a glusterfs ebuild there.
>
> It is in the Matsuu overlay, but not the latest version. Matsuu, can you
> update it? Thanks
>
It does appear the science overlay needs another update; currently it's got 2.0.1. Anyone here able to do this?
>
> It does appear the science overlay needs another update; currently it's got
> 2.0.1. Anyone here able to do this?
>
Why is the ebuild classified as a science app? This is a standard cluster app, not science. Yes, science very often uses cluster technologies, but this app/fs doesn't do scientific work; it is just a cluster fs.

Does anybody know when the gentoo developers will include glusterfs in the portage tree?
(In reply to comment #97)
> Why is the ebuild classified as a science app? This is a standard cluster
> app, not science. Yes, science very often uses cluster technologies, but
> this app/fs doesn't do scientific work; it is just a cluster fs.

Because scientific software runs mainly on clusters.
Created attachment 200571 [details] A great init.d script for GlusterFS

I have been using this init.d script for GlusterFS and it's good. I can't remember where it came from. It supports both client and server, as well as symlinking multiple configurations.
In version 2.0.6 support for apache was removed. The files glusterfs-2.0.1-apxs.patch, 70_mod_glusterfs.conf, and glusterfs-2.0.1-gentoo.patch are obsolete for this version (I don't know if mod_glusterfs will come back).

glusterfs.initd declares a dependency on openib and net.ib0, which can be a little problematic for users who don't use infiniband.
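One way to soften that hard dependency would be OpenRC's `use` keyword, which declares an optional rather than required dependency. A sketch of such a depend() block (a config fragment, not the shipped initd; the service names openib and net.ib0 are taken from the comment above):

```shell
# Sketch of an init.d depend() block: "use" starts the IB services
# if they are in the runlevel, but does not require them, so
# non-InfiniBand hosts are not affected.
depend() {
	need net
	use openib net.ib0
}
```

With `need`, a missing or failing openib service would block glusterfsd from starting; with `use`, it only affects ordering.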
I think the time is ripe to get glusterfs into portage. Who is willing to push it there?
(In reply to comment #101)
> I think the time is ripe to get glusterfs into portage. Who is willing to
> push it there?
>
I'll add glusterfs to the tree since I have a production cluster running it =)
(In reply to comment #102)
> (In reply to comment #101)
> > I think the time is ripe to get glusterfs into portage. Who is willing to
> > push it there?
> >
> I'll add glusterfs to the tree since I have a production cluster running it =)
>
Added to the tree =) Thanks to all! =)
Is infiniband support disabled or otherwise not available for this release? I noticed the USE flag was missing.
(In reply to comment #104) > Is infiniband support disabled or otherwise not available for this release? I > noticed the USE flag was missing. > It exists, but gentoo doesn't have the infiniband libs.
For the ebuild in the tree, it seems berkdb support is not working; my test build stopped at:

libtool: compile: i686-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I. -I../../../.. -fPIC -D_FILE_OFFSET_BITS=64 -D__USE_FILE_OFFSET64 -D_GNU_SOURCE -DGF_LINUX_HOST_OS -Wall -I../../../../libglusterfs/src -nostartfiles -O2 -march=i686 -pipe -MT bdb.lo -MD -MP -MF .deps/bdb.Tpo -c bdb.c -fPIC -DPIC -o .libs/bdb.o
bdb-ll.c: In function 'bdb_dbenv_init':
bdb-ll.c:966: error: 'DB_LOG_AUTOREMOVE' undeclared (first use in this function)
bdb-ll.c:966: error: (Each undeclared identifier is reported only once
bdb-ll.c:966: error: for each function it appears in.)
make[5]: *** [bdb-ll.lo] Error 1
make[5]: *** Waiting for unfinished jobs....
mv -f .deps/bctx.Tpo .deps/bctx.Plo
mv -f .deps/bdb.Tpo .deps/bdb.Plo
make[5]: Leaving directory `/var/tmp/portage/sys-cluster/glusterfs-2.0.6/work/glusterfs-2.0.6/xlators/storage/bdb/src'
make[4]: *** [all-recursive] Error 1
(In reply to comment #106)
> for the ebuild in tree, seems the berkdb is not working from my test, it
> stopped at:
>
> bdb-ll.c: In function 'bdb_dbenv_init':
> bdb-ll.c:966: error: 'DB_LOG_AUTOREMOVE' undeclared (first use in this
> function)
> make[5]: *** [bdb-ll.lo] Error 1
> make[4]: *** [all-recursive] Error 1
>

These are the translators supported for 2.0:

  storage      posix        server  POSIX locking mechanism.
  protocol     server       server  Serving through IP, InfiniBand, etc.
  protocol     client       client  Access through IP, InfiniBand, etc.
  cluster      distribute   client  Distributed file storage between multiple servers.
  cluster      nufa         client  Non-uniform distributed storage, which prefers a local disk.
  cluster      replicate    client  Replicate (mirror) content between servers.
  cluster      stripe       client  Striping between multiple servers.
  performance  readahead    client  Read ahead blocks for sequential read.
  performance  writebehind  client  Aggregate written blocks.
  performance  io-threads   server  Multithread for performance.
  performance  io-cache     client  Cache read blocks.
  features     locks        server  POSIX locking on server (required for shared access translators).
  features     filter       server  Path filters.
  debug        trace        client  Trace GlusterFS internal functions.
  encryption   rot-13       client  Sample encryption code. (Don't use for production.)
From what I can remember, bdb support will be re-added after they are more comfortable with the stability of everything else. (They've been bashed a few times on the mailing list for instability.)
(In reply to comment #105)
> (In reply to comment #104)
> > Is infiniband support disabled or otherwise not available for this release?
> > I noticed the USE flag was missing.
> >
> It exists, but gentoo doesn't have the infiniband libs.
>
Infiniband libs exist in the science overlay, and I also plan to add them to the tree. After I add the infiniband libs to the tree, I'll enable the infiniband USE flag.
(In reply to comment #107) > (In reply to comment #106) > > for the ebuild in tree, seems the berkdb is not working from my test, it > > stopped at: > > > > libtool: compile: i686-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I. -I../../../.. > > -fPIC -D_FILE_OFFSET_BITS=64 -D__USE_FILE_OFFSET64 -D_GNU_SOURCE > > -DGF_LINUX_HOST_OS -Wall -I../../../../libglusterfs/src -nostartfiles -O2 > > -march=i686 -pipe -MT bdb.lo -MD -MP -MF .deps/bdb.Tpo -c bdb.c -fPIC -DPIC -o > > .libs/bdb.o > > bdb-ll.c: In function 'bdb_dbenv_init': > > bdb-ll.c:966: error: 'DB_LOG_AUTOREMOVE' undeclared (first use in this > > function) > > bdb-ll.c:966: error: (Each undeclared identifier is reported only once > > bdb-ll.c:966: error: for each function it appears in.) > > make[5]: *** [bdb-ll.lo] Error 1 > > make[5]: *** Waiting for unfinished jobs.... > > mv -f .deps/bctx.Tpo .deps/bctx.Plo > > mv -f .deps/bdb.Tpo .deps/bdb.Plo > > make[5]: Leaving directory > > `/var/tmp/portage/sys-cluster/glusterfs-2.0.6/work/glusterfs-2.0.6/xlators/storage/bdb/src' > > make[4]: *** [all-recursive] Error 1 > > > > These are the translators supported for 2.0: > storage posix server POSIX locking mechanism. > protocol server server Serving through IP, InfiniBand, etc. > protocol client client Access through IP, InfiniBand, etc. > cluster distribute client Distributed file storage > between multiple servers. > cluster nufa client Non-uniform distributed storage, which > prefers a local disk. > cluster replicate client Replicate (mirror) content > between servers. > cluster stripe client Striping between multiple servers. > performance readahead client Read ahead blocks for > sequential read. > performance writebehind client Aggregate written blocks. > performance io-threads server Multithread for performance. > performance io-cache client Cache read blocks. > features locks server POSIX locking on server (required for > shared access translators). > features filter server Path filters. 
> debug trace client Trace GlusterFS internal functions.
> encryption rot-13 client Sample encryption code. (Don't use for
> production.)
>
> From what I can remember, bdb support will be re-added after they are more
> comfortable with the stability of everything else. (They've been bashed a few
> times on the mailing list for instability.)
>
Seems I missed it =) so thanks, I'll drop bdb support from the tree ebuild =)