Bug 890961 - sys-fs/bees missing openrc initscript
Summary: sys-fs/bees missing openrc initscript
Status: UNCONFIRMED
Alias: None
Product: Gentoo Linux
Classification: Unclassified
Component: Current packages
Hardware: All Linux
Importance: Normal normal
Assignee: Kai Krakow
URL:
Whiteboard:
Keywords: PATCH
Depends on:
Blocks:
 
Reported: 2023-01-15 16:18 UTC by gpancot@hotmail.com
Modified: 2024-01-26 18:29 UTC
CC: 6 users

See Also:
Package list:
Runtime testing required: ---


Attachments
openrc init script (beesd, 1.03 KB, text/plain), 2023-01-15 16:18 UTC, gpancot@hotmail.com
bees init.d script (bees.initd, 2.72 KB, text/plain), 2023-10-06 20:52 UTC, Forza
bees conf.d file (bees.confd, 1.64 KB, text/x-matlab), 2023-10-06 20:52 UTC, Forza
bees init.d script (bees.initd, 2.75 KB, text/plain), 2023-10-07 20:35 UTC, Forza
bees conf.d file (bees.confd, 1.72 KB, text/x-matlab), 2023-10-07 20:36 UTC, Forza
bees init.d script (bees.initd, 2.87 KB, text/plain), 2023-10-07 21:20 UTC, Forza
bees conf.d file (bees.confd, 2.82 KB, text/x-matlab), 2023-10-08 13:10 UTC, Forza
bees init.d script (bees.initd, 3.04 KB, text/plain), 2023-10-08 13:10 UTC, Forza
logrotate.d config file for bees (bees.logrotate, 61 bytes, text/plain), 2023-10-08 16:37 UTC, Forza
bees conf.d file (bees.confd, 3.40 KB, text/x-matlab), 2023-10-08 16:41 UTC, Forza
bees init.d script (bees.initd, 3.04 KB, text/plain), 2023-10-08 16:43 UTC, Forza

Description gpancot@hotmail.com 2023-01-15 16:18:56 UTC
Created attachment 848593 [details]
openrc init script

Missing OpenRC init script for bees.
An attempt is attached.
Comment 1 Sam James archtester Gentoo Infrastructure gentoo-dev Security 2023-07-10 06:22:55 UTC
For reference:
- https://github.com/Zygo/bees/issues/114
- https://github.com/automorphism88/gentoo-overlay/commit/fbcd1c819f32513071d71eea825f0346b56b4d57
- https://git.alpinelinux.org/aports/tree/testing/bees/bees.initd?id=f86f6abb16437b8c09eaa0df0d9be1510dd56cae

One of the questions is if we want to use the beesd wrapper script or the real bees.

I'm hacking something up but I'm not sure if I really want it to be mounting/unmounting the subvolume or not.
Comment 2 Kai Krakow 2023-07-18 19:20:13 UTC
The wrapper script has its own set of problems. And, IMHO, it breaks some configuration rules:

While we point our mount points at devices, the wrapper asks for a filesystem UUID.

I think we could do better, but it is important to have the filesystem mounted with `subvolid=0` somewhere. Also, bees can run in multiple instances, one per filesystem, and the init script should support that.

So what do we need?

We could look for a conf.d file with the filesystem path, then ensure its mount options contain `subvolid=0`, and otherwise refuse to start the service.

After this check, we also need to ensure that the beeshome directory exists. It defaults to `$MOUNTPOINT/.beeshome` (and should be its own subvolume in this case) but it can actually sit on any other filesystem, including fat32. So the conf.d file should optionally list a beeshome directory.

We also have to set the location of the state file which tracks scanning progress; it defaults to the $BEESHOME location.
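The checks described above could be sketched as a small helper in the init script. This is only a sketch; the function and variable names (`mounted_toplevel`, `MOUNTPOINT`, `beeshome`) are assumptions, not the final script:

```shell
# Return success if the given comma-separated mount options include
# subvolid=0, i.e. the filesystem top level is mounted as required.
mounted_toplevel() {
	case ",$1," in
		*,subvolid=0,*) return 0 ;;
		*) return 1 ;;
	esac
}

# In start_pre() one might then do something like (illustrative):
#   mounted_toplevel "$(findmnt -n -o OPTIONS --target "${MOUNTPOINT}")" \
#       || { eerror "${MOUNTPOINT} is not mounted with subvolid=0"; return 1; }
#   [ -d "${beeshome:-${MOUNTPOINT}/.beeshome}" ] \
#       || { eerror "beeshome directory missing"; return 1; }
```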

But do we want to do all of that?

I'm still thinking that some of the logic to figure out how to mount that filesystem with `subvolid=0` in a private namespace should be left to the daemon itself, not some script. Such a private namespace would automatically unmount the filesystem when the process dies, and the mount would be visible only to that process. There's an idea for adding that to the wrapper script, but doing it properly in bash has shown plenty of opportunities for bugs and quirks to sneak in.
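For illustration only, the private-namespace idea roughly corresponds to the following fragment. It requires root, and the UUID and mountpoint are hypothetical placeholders (only the /usr/libexec/bees path comes from the unit file later in this thread):

```shell
# Mount the filesystem top level in a private mount namespace and
# exec bees inside it; the mount is invisible to other processes
# and vanishes automatically when bees exits.
unshare --mount sh -c '
	mkdir -p /run/bees/mnt &&
	mount -o subvolid=0 /dev/disk/by-uuid/XXXX /run/bees/mnt &&
	exec /usr/libexec/bees /run/bees/mnt
'
```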

Also, we'd need to point to a location where the user can look up the optimal size of the hash buffer. A default configuration could probably be created by `emerge --config`, which would create an fstab entry for bees, a beeshome, an empty hashfile, and the conf.d file.

I'd rather see an init script upstream. But few people have cared, probably because systemd is becoming more and more widespread. I wonder how long Gentoo is going to support init scripts and OpenRC. I already INSTALL_MASKed /etc/init.d.

But if anyone comes up with a proper solution, I'd take care of upstreaming that script - I don't see any problems with that.
Comment 3 Forza 2023-10-06 20:52:05 UTC
Created attachment 872246 [details]
bees init.d script
Comment 4 Forza 2023-10-06 20:52:46 UTC
Created attachment 872247 [details]
bees conf.d file
Comment 5 Forza 2023-10-06 21:11:08 UTC
Hi,

I have been thinking along the same lines as Kai and made an attempt at replacing the `beesd` script with a pure init.d/conf.d version, which seems a better fit for Gentoo with OpenRC.

Multiple bees instances are supported by simply creating a symlink in init.d and copying the conf.d file. Example:

/etc/init.d/bees_datavol -> bees
/etc/conf.d/bees_datavol

The script currently sets up mount points, checks .beeshome and a few other things before starting bees.
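For reference, such a multiplexed script could resolve its per-instance conf.d file from ${RC_SVCNAME}. A sketch with an assumed helper name (OpenRC also sources conf.d/${RC_SVCNAME} automatically, so this just makes the mapping explicit):

```shell
# Map the service name to its conf.d file: the plain "bees" service
# uses /etc/conf.d/bees, while symlinked instances such as
# "bees_datavol" use /etc/conf.d/bees_datavol.
instance_conf() {
	case "$1" in
		bees|bees_*) echo "/etc/conf.d/$1" ;;
		*) return 1 ;;
	esac
}
```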

It has two extra commands, suspend and resume. One should issue a suspend before using `btrfs send` from the same filesystem. Earlier kernels had some bugs here, but nowadays the kernel blocks concurrent send while deduplication is running.

I've been contemplating putting all filesystems in the one conf.d, but that would make it trickier to limit resources with cgroups (see the conf.d file).

I'm definitely experienced with writing openrc scripts, so see this as a draft proposal to further build discussions on.

Thanks
Comment 6 Forza 2023-10-06 21:30:52 UTC
I think we should also have a logrotate config.

/var/log/bees/*.log {
    missingok
    notifempty
    copytruncate
}
Comment 7 Forza 2023-10-07 08:31:57 UTC
Oh dear, an unfortunate typo. The last paragraph should say:

I'm definitely _NOT_ experienced with writing openrc scripts, so see this as a draft proposal to further build discussions on.
Comment 8 Kai Krakow 2023-10-07 17:49:17 UTC
Forza,

your files have some typos which you might want to fix:

> # Use `btrfs filesyatem show` to see currently
> # known btrfs fikesystems' UUID's.
(maybe more)


Also some more ideas:

> ## Hash table sizing

Recommends 128M but the default is 512M.


> 	if [ "${hashsize}" != "${old_hashsize}" ] ; then
>		truncate -s "${hashsize}" "${hashfile}"
>		rm "${beeshome}/beescrawl.dat"
>	fi

Do not just resize the hashfile with its existing contents. This may lead to undesired behavior and only works properly if you size exactly by powers of two, and even that is not recommended. Better to rm the file before creating it again. Even if the original beesd wrapper does it that way, that's just an artifact of an old constraint that the hashfile size had to be a power of two. This constraint has since been dropped, but the wrapper was never adjusted.
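Following this advice, a hypothetical helper would remove and recreate the hash file rather than resizing it in place (function and variable names are assumed, not from the actual script):

```shell
# Recreate the hash file at the configured size instead of resizing
# it in place, and drop the crawl state so bees starts a fresh scan.
recreate_hashfile() {
	hashfile="$1" hashsize="$2" beeshome="$3"
	rm -f -- "$hashfile" "$beeshome/beescrawl.dat"
	truncate -s "$hashsize" -- "$hashfile"
}
```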


> ## OpenRC resource control
> # Do not set memory.x too low as bees needs to fit
> # the entire hash database in memory.

Not only that, but bees tries to use the dirty cache of just-written data to reduce IO thrashing. With cgroups enabled, ownership of that cache is transferred to the bees cgroup and thus becomes bound to the constraints of that cgroup. This can lead to cached data being pushed out early. So optimally, bees should be able to "own" a lot of cached data, which will be re-owned to other processes if those read the data again. IOW, bees's design results in writeback cache ownership being transferred to bees, and because bees reads ALL written data, it will effectively flush your cache early under the constraints put on its cgroup.

I've stopped using memory cgroups, which I believe led to IO thrashing with high memory pressure, high swap, and still-free memory, partially due to how bees operates and how the kernel handles cache memory ownership. On a desktop, you probably want to run without cgroups, or at least tune the constraints very carefully. Using memory cgroups can quite easily have the opposite effect of what you're trying to achieve, especially while using bees.

This may work better after bees has migrated to using checksums from btrfs itself rather than rolling its own checksumming. But that will take some more time.

Also, if not doing many writes, the negative impacts should be much lower.
Comment 9 Forza 2023-10-07 19:21:56 UTC
(In reply to Kai Krakow from comment #8)
> Forza,
> 
> your files have some typos which you might want to fix:
> 
> > # Use `btrfs filesyatem show` to see currently
> > # known btrfs fikesystems' UUID's.
> (maybe more)
> 
Thanks for proof reading.

> 
> Also some more ideas:
> 
> > ## Hash table sizing
> 
> Recommends 128M but the default is 512M.
I meant to change it to 128. I had used 512 with an existing db.

> 
> 
> > 	if [ "${hashsize}" != "${old_hashsize}" ] ; then
> >		truncate -s "${hashsize}" "${hashfile}"
> >		rm "${beeshome}/beescrawl.dat"
> >	fi
> 
> Do not just resize the hashfile with existing contents. This may lead to
> undesired behavior and only works properly if you exactly size by powers of
> two - and even that is not recommended. Better rm the file before creating
> it again. Even if the original beesd wrapper does it in that way, it's just
> an artifact of an old constraint when the hashfile had to come in a size
> power of two. This constraint has since been dropped but the wrapper was
> never adjusted.

Then I'm thinking we should abort with a message telling the user to clear the files manually. Another option is to add a "reset" command so users can do "rc-service bees reset", which would remove the hash table and beescrawl.dat. But I think that's better left to the user.

What do you think? 
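A minimal sketch of the proposed "reset" as an OpenRC extra command; all names are assumptions, and ${hashfile}/${beeshome} would come from the conf.d file:

```shell
# Hypothetical fragment for /etc/init.d/bees:
extra_commands="reset"

reset() {
	ebegin "Removing bees hash table and crawl state for ${RC_SVCNAME}"
	rm -f -- "${hashfile}" "${beeshome}/beescrawl.dat"
	eend $?
}
```

Users could then run `rc-service bees reset` after changing the hash table size.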

> 
> 
> > ## OpenRC resource control
> > # Do not set memory.x too low as bees needs to fit
> > # the entire hash database in memory.
> 
> Not only that but bees works in a way that it tries to use the dirty cache
> of just written data to reduce IO thrashing. But with cgroups enabled, this
> means that ownership of the cache is transferred over to the bees cgroup and
> then in turn is bound to the constraints of that cgroup. This can lead to
> early pushing out cached data. So optimally, bees should be able to "own" a
> lot of cached data - which will be re-owned to other processes if those read
> data again. IOW, bees design results in writeback caching ownership
> transferred to bees, and because bees reads ALL written data, it will
> effectively flush your cache early under the constraints put on its cgroup.
> 
> I've stopped using memory cgroups which I believe led to IO thrashing with
> high memory pressure, high swap, and still free memory partially due to how
> bees operates and how the kernel handles cache memory ownership. On a
> desktop, you probably want to run without cgroups, or at least very
> carefully tune the constraints. Using memory cgroups can quite easily have
> the opposite effect of what you're trying to do - especially while using
> bees.
> 
> This may work better after bees has migrated over to using checksums from
> btrfs itself rather than rolling its own checksumming. But this will take
> some more time.
> 
> Also, if not doing many writes, the negative impacts should be much lower.

This is interesting information. Thanks for sharing. Based on this I suggest only setting io and cpu cgroup limits, or simply use start-stop-daemon's nice, ionice and procsched functions instead. 

I'll work this and submit updated versions in a bit.
Comment 10 Forza 2023-10-07 19:26:55 UTC
Regarding logging: what log level should be the default, and what would be a good path for logs? Log level 8 would create huge files, so I do not think it is suitable.
Comment 11 Forza 2023-10-07 20:35:33 UTC
Created attachment 872279 [details]
bees init.d script
Comment 12 Forza 2023-10-07 20:36:08 UTC
Created attachment 872280 [details]
bees conf.d file
Comment 13 Forza 2023-10-07 21:20:50 UTC
Created attachment 872282 [details]
bees init.d script

* make sure ${mnt} exists.
* test and creation of hashfile if empty/missing.
* path to bees corrected.
* loglevel:=1
* update depend().
Comment 14 Kai Krakow 2023-10-08 04:37:34 UTC
(In reply to Forza from comment #9)
> > > ## Hash table sizing
> > 
> > Recommends 128M but the default is 512M.
> I meant to change it to 128. I had used 512 with an existing db.

I think for most current systems (8 GB+ RAM), 256 or 512 might be a good start.


> > Do not just resize the hashfile with existing contents. This may lead to
> > undesired behavior and only works properly if you exactly size by powers of
> > two - and even that is not recommended. Better rm the file before creating
> > it again. Even if the original beesd wrapper does it in that way, it's just
> > an artifact of an old constraint when the hashfile had to come in a size
> > power of two. This constraint has since been dropped but the wrapper was
> > never adjusted.
> 
> Then I'm thinking we should abort with a message telling the user to clear
> the files manually. Another option is to add a "reset" command so users can
> do "rc-service bees reset", which would remove the hash table and crawl.dat.
> But I think that's better left to the user. 

Yes, actually I had the idea to suggest the same, but then didn't mention it and rather let you decide. But yes, I think that's a good idea.

[...]
> > IOW, bees design results in writeback caching ownership
> > transferred to bees, and because bees reads ALL written data, it will
> > effectively flush your cache early under the constraints put on its cgroup.
[...]

> This is interesting information. Thanks for sharing. Based on this I suggest
> only setting io and cpu cgroup limits, or simply use start-stop-daemon's
> nice, ionice and procsched functions instead. 

Yes, CPU batch scheduling should work just fine. IO scheduling isn't very well isolated in btrfs, so the benefit of iosched is limited while using btrfs (low IO priorities may unintentionally leak to other processes). So I'd recommend not setting it too low.

I'm running bees in a systemd resource slice. To give you some ideas/advice by a real-world example, here are my settings:

# /etc/systemd/system/maintenance.slice
[Unit]
Description=Limit maintenance tasks memory usage

[Slice]
MemoryLow=4G
# CPUs 16-19 are E-cores
AllowedCPUs=16-19
CPUWeight=20
IOWeight=10

# /etc/systemd/system/bees.service
[Unit]
# [...]

[Service]
Type=simple
Environment=BEESSTATUS=%t/bees/bees.status
ExecStart=/usr/libexec/bees --no-timestamps --strip-paths --thread-count=6 --scan-mode=3 --verbose=5 --loadavg-target=5 /mnt/btrfs-pool
CPUSchedulingPolicy=idle
IOSchedulingClass=idle
IOSchedulingPriority=7
KillMode=control-group
KillSignal=SIGTERM
Nice=19
Restart=on-abnormal
ReadWritePaths=/mnt/btrfs-pool
RuntimeDirectory=bees
StartupCPUWeight=25
WorkingDirectory=/run/bees


As you can see, I'm using IOSchedulingClass=idle, which works fine for me (partially because my hashtable is on an xfs filesystem). But it may not for all workloads, so we should be a little more conservative by default.

Also, I default to log level 5 which seems to be a good balance. `--strip-paths` can remove a lot of log data by logging only relative paths instead of absolute ones.

Because bees is running all the time, I'm using a loadavg target instead: bees will pause working if loadavg goes above 5. This may be a better resource control than IO scheduling because loadavg also includes IO load. But of course it will count IO load globally across all filesystems. So if you copy lots of small files to a slow USB device, load would increase and bees would pause. But I think that's okay.
Comment 15 Forza 2023-10-08 06:12:39 UTC
To summarise, here is what I think is left to decide:

* Default hash table size. Maybe it should be commented out by default? 

* Start-stop-daemon does not have an environment variable for CPU scheduler, but only the command line argument '--scheduler'. Do we set it to idle via start_stop_daemon_args, or leave it as an option to the user?

* Logging directory. Should we have a conf.d variable for it? We'd need to add a checkpath logdir in init.d too. 

* Loglevel. Is 5 our consensus?

* Do we use --no-timestamps --strip-paths by default? Or do we expect the user to set them in 
bees_args? 

* --thread-count=6 --scan-mode=3 --verbose=5 --loadavg-target=5: These will be dependent on the user's system. I think we should not set them as defaults but leave the upstream defaults.


Note on IO scheduling priorities: afaik they currently only make sense with the CFQ and BFQ IO schedulers. Deadline ignores ionice settings.
Comment 16 Kai Krakow 2023-10-08 07:51:16 UTC
(In reply to Forza from comment #15)
> * Default hash table size. Maybe it should be commented out by default? 

I'm undecided on this. At least the ebuild should give a hint if it needs to be set, and how to size it properly.

> * Start-stop-daemon does not have an environment variable for CPU scheduler,
> but only the command line argument '--scheduler'. Do we set it to idle via
> start_stop_daemon_args, or leave it as an option to the user?

It should be "batch" or "idle" where "batch" would adhere to nice level with a slight priority penalty but bigger time slice - thus giving better CPU cache hit rates for calculating hashes. "idle" only allocates time slices if the process would not preempt an interactive process and a CPU thread is idle. "batch" with "nice" is probably what most users want, but we should offer "idle" optionally.
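A hypothetical conf.d default reflecting this, using the `--scheduler` flag mentioned earlier in this thread plus start-stop-daemon's nice and ionice options (treat the exact lines as a sketch, not the final config):

```shell
# /etc/conf.d/bees (sketch): "batch" with a high nice level as the
# default; uncomment the second line instead for fully idle CPU and
# IO scheduling (ionice class 3 = idle).
start_stop_daemon_args="--nicelevel 19 --scheduler batch"
#start_stop_daemon_args="--scheduler idle --ionice 3"
```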

> * Logging directory. Should we have a conf.d variable for it? We'd need to
> add a checkpath logdir in init.d too. 

I have no opinion on that because I migrated most logging to systemd-journald. But having one log file per instance in `/var/log` is probably fine.

> * Loglevel. Is 5 our consensus?

Level 5 mostly logs performance problems with specific files so users can act on them. Usually, these are files which generate a lot of duplicate hashes. This situation typically occurs when you use a lot of snapshots (1000+) or have files with a lot of identical headers (e.g. big game data files often contain thousands of file headers for packed assets).

> * Do we use --no-timestamps --strip-paths by default? Or do we expect the
> user to set them in 
> bees_args? 

If logging to files directly, you should NOT use `--no-timestamps`. This option is meant for loggers that add their own timestamps. `systemd-journald` does that (that's why I introduced that option to bees). `syslog` may do it, too. But if openrc just redirects output to a file, you want to add timestamps.

How would we handle logrotate then? bees has no way of reopening log files because it just writes to stdout/stderr. Does start-stop-daemon properly handle that?

> * --thread-count=6 --scan-mode=3 --verbose=5 --loadavg-target=5: These will
> be dependant on user's system. I think we should not set them as default,
> but leave upstream defaults. 

We should have them in an example config. `thread-count` has proper defaults. `loadavg-target` doesn't and "5" might be a good default we could ship at least as a recommendation. `scan-mode` with "3" is probably also a good recommendation if you also use snapper.

According to the bees docs, setting `scan-mode` to a different value after the first pass has completed could be useful (the first pass can take hours to days, and there's no indicator of that event in bees other than looking at the status file generation numbers and comparing them to your subvolume state before starting bees):

bees defaults to mode 1, which has high throughput but is slow to respond to new data. If you're actively using the filesystem, you should use mode 3 (which responds quickly to new data, then falls back to mode 0, which is good for rotating snapshots). So running initially with mode 1, then switching to mode 3 after a few days, may be good advice.

> Note on IO scheduling priorities. Afaik they currently only makes sense with
> the CFS and BFQ IO schedulers. Deadline ignores ionice settings.

Yes, that's correct.

I'm running btrfs metadata on two different NVMe drives, then run btrfs data chunks via bcache in mdraid1 on the remaining part of the NVMe drives. bees hashdata is served from an SSD. The NVMe drives are running via kyber scheduler, all others are running bfq.

For your curiosity, here's my patch set for running dedicated btrfs metadata and data partitions:
https://github.com/kakra/linux/pull/26

It may give you a better understanding of what I'm trying to explain above. Having dedicated metadata partitions improves btrfs responsiveness a lot. But it's only useful if you're running mixed SSD/HDD setups.
Comment 17 Forza 2023-10-08 11:34:29 UTC
Do we want to be able to choose syslog logging in conf.d?

bees_syslog=true

This would then ignore the logfile directive and send everything to syslog.

I'm not entirely sure what the best solution is. Like so? We would also need to add the --no-timestamps bees argument.

start-stop-daemon --stdout-logger "logger --tag ${RC_SVCNAME}"
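One hypothetical way to wire this up in the init script, using OpenRC's `yesno` helper (the `bees_syslog` variable name is an assumption from the proposal above, not a final name):

```shell
# Hypothetical init.d fragment: when bees_syslog is enabled, send
# output to syslog via start-stop-daemon and drop bees' own
# timestamps, since syslog adds its own.
if yesno "${bees_syslog:-no}"; then
	start_stop_daemon_args="${start_stop_daemon_args} --stdout-logger \"logger --tag ${RC_SVCNAME}\""
	bees_args="${bees_args} --no-timestamps"
fi
```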
Comment 18 Forza 2023-10-08 11:57:56 UTC
(In reply to Kai Krakow from comment #16)
> (In reply to Forza from comment #15)
> > * Default hash table size. Maybe it should be commented out by default? 
> 
> I'm undecided on this. At least the ebuild should give a hint if it needs to
> be set, and how to size it properly.

OK. Let's use 128 MiB as the default, or we should also clarify this in the size chart. Can you suggest better text for the hashsize variable to make it clear to the user that it should be changed?

> 
> > * Start-stop-daemon does not have an environment variable for CPU scheduler,
> > but only the command line argument '--scheduler'. Do we set it to idle via
> > start_stop_daemon_args, or leave it as an option to the user?
> 
> It should be "batch" or "idle" where "batch" would adhere to nice level with
> a slight priority penalty but bigger time slice - thus giving better CPU
> cache hit rates for calculating hashes. "idle" only allocates time slices if
> the process would not preempt an interactive process and a CPU thread is
> idle. "batch" with "nice" is probably what most users want, but we should
> offer "idle" optionally.

I'm leaning towards "idle" because it will affect the user's desktop experience the least while still being a safe option. Maybe clarifying the options in the sample config would be enough?

> 
> > * Logging directory. Should we have a conf.d variable for it? We'd need to
> > add a checkpath logdir in init.d too. 
> 
> I have no opinion on that because I migrated most logging to
> systemd-journald. But having one log file per instance in `/var/log` is
> probably fine.
> 
> > * Loglevel. Is 5 our consensus?
> 
> Level 5 mostly logs performance problems with specific files so users can
> act on them. Usually, these are files which generate a lot of duplicate hashes.
> This situation usually occurs when you use a lot of snapshots (1000+) or
> have files with a lot of identical headers (e.g. big game data files often
> contain thousands of file headers for packed assets).

Then I think we stay with 5. I think using a bees subdirectory in /var/log would be nice, especially if we have several bees instances running.
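If a per-instance log directory under /var/log/bees is used, a start_pre() fragment could create it with OpenRC's checkpath. The path, mode, and ownership here are assumptions for illustration:

```shell
# Hypothetical fragment for /etc/init.d/bees: ensure the shared log
# directory exists before starting, with one log file per instance.
start_pre() {
	checkpath --directory --mode 0750 --owner root:root /var/log/bees
	# Instance log file would then be /var/log/bees/${RC_SVCNAME}.log
}
```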

> 
> > * Do we use --no-timestamps --strip-paths by default? Or do we expect the
> > user to set them in 
> > bees_args? 
> 
> If logging to files directly, you should NOT use `--no-timestamps`. This
> option is meant for loggers that add their own timestamps.
> `systemd-journald` does that (that's why I introduced that option to bees).
> `syslog` may do it, too. But if openrc just redirects output to a file, you
> want to add timestamps.
> 
> How would we handle logrotate then? bees has no way of reopening log files
> because it just writes to stdout/stderr. Does start-stop-daemon properly
> handle that?

Logrotate has a "copytruncate" option to deal with this. I'll make a sample logrotate.d script later. 
> 
> > * --thread-count=6 --scan-mode=3 --verbose=5 --loadavg-target=5: These will
> > be dependant on user's system. I think we should not set them as default,
> > but leave upstream defaults. 
> 
> We should have them in an example config. `thread-count` has proper
> defaults. `loadavg-target` doesn't and "5" might be a good default we could
> ship at least as a recommendation. `scan-mode` with "3" is probably also a
> good recommendation if you also use snapper.

Agreed 
> 
> According to bees docs, setting `scan-mode` to a different value after the
> first pass completed (which can take hours to days, there's no indicator of
> that event in bees other than looking at the status file generation numbers
> and compare them to your subvolume state before starting bees) could be
> useful:
> 
> bees defaults to mode 1 which has high throughput but is slow responding to
> new data. If you're actively using the file system, you should use mode 3
> (which responds fast to new data, then falls back to mode 0 which is good
> for rotating snapshots). So running initially with mode 1, then switch to
> mode 3 after a few days, may be a good advice.

This may be difficult to achieve using the init script. Maybe we should just use mode 3 by default and provide good examples in the conf.d?

> 
> > Note on IO scheduling priorities. Afaik they currently only make sense with
> > the CFQ and BFQ IO schedulers. Deadline ignores ionice settings.
> 
> Yes, that's correct.
> 
> I'm running btrfs metadata on two different NVMe drives, then run btrfs data
> chunks via bcache in mdraid1 on the remaining part of the NVMe drives. bees
> hashdata is served from an SSD. The NVMe drives are running via kyber
> scheduler, all others are running bfq.
> 
> For your curiosity, here's my patch set for running dedicated btrfs metadata
> and data partitions:
> https://github.com/kakra/linux/pull/26
> 
> It may give you a better understanding of what I'm trying to explain above.
> Having dedicated metadata partitions improves btrfs responsiveness a lot.
> But it's only useful if you're running mixed SSD/HDD setups.

Thanks a lot for your detailed suggestions!
Comment 19 Forza 2023-10-08 13:10:28 UTC
Created attachment 872324 [details]
bees conf.d file
Comment 20 Forza 2023-10-08 13:10:53 UTC
Created attachment 872325 [details]
bees init.d script
Comment 21 Forza 2023-10-08 16:37:02 UTC
Created attachment 872350 [details]
logrotate.d config file for bees

Not sure what is standard for Gentoo, so I made a minimal logrotate.d file.
Comment 22 Forza 2023-10-08 16:41:56 UTC
Created attachment 872351 [details]
bees conf.d file

Reworked conf.d with size suggestion etc.
Comment 23 Forza 2023-10-08 16:43:16 UTC
Created attachment 872352 [details]
bees init.d script

Fixed some mistakes and added progress output on stopping.

Hopefully this is the last change!
Comment 24 Kai Krakow 2023-10-10 22:49:45 UTC
This looks quite complete if you don't have any other ideas.

How do we want to proceed? I could ask Zygo to upstream this in bees but it will probably take a few weeks until merged.

We could also first put it into the ebuild, but then we'd probably be adding files to portage which would be removed after upstream merges the changes.

In any case: Do you want to submit PRs yourself, or should I do it and add you as the author of the commits?
Comment 25 Forza 2023-10-11 14:01:13 UTC
(In reply to Kai Krakow from comment #24)
> This look quite complete if you don't have any other ideas.
> 
Not at the moment. Are you OK with the description in conf.d of how to run multiple instances of bees?

Other than that, it's mostly whether we go with batch or idle as the default. I am OK with either.

> How do we want to proceed? I could ask Zygo to upstream this in bees but it
> will probably take a few weeks until merged.

Yes, why not? Other distros could use these scripts too.

> We could also first put it into the ebuild but then we'd probably add files
> to portage which would be removed after upstream merged the changes.

Whatever is easiest for you as maintainer. If it may take time to get a new release of bees, it is probably good to include the three files in the ebuild for now.
> 
> In any case: Do you want to submit PRs yourself, or should I do it and add
> you as the author of the commits?

Please do the PR. I've done one or two for Gentoo before but it usually ends up wrong :D.
Comment 26 Kai Krakow 2023-10-14 10:57:06 UTC
(In reply to Forza from comment #18)

[...switching scan modes...]
> This may be difficult to achieve using the init script. Maybe we should just
> mode 3 by default and provide good examples in the conf.d?

This should not be some programmed behavior but rather a recommendation to the user. Starting with 3 is a good default, I think.

I'll probably start a PR with this in bees tomorrow, then wait for the merge to master, and backport the changes to the tagged versions in Gentoo.