mkisofs from app-cdr/cdrtools-3.01_alpha06 creates broken images if there is a file larger than a certain limit (4 GB?) present.
The large file cannot be extracted from the .iso image by any archive manager I tried. If I burn that image to a DVD, trying to read that file results in an I/O error.
3.0 doesn't have this problem.
Tested with k3b-2.0.2-r1.
Steps to Reproduce:
1. Burn a >4GB file onto a DVD using k3b
2. Try to read it back
Portage 2.2.0_alpha80 (default/linux/amd64/10.0/desktop, gcc-4.5.3, glibc-2.14.1-r1, 2.6.39-pf3 x86_64)
System uname: Linux-2.6.39-pf3-x86_64-Intel-R-_Core-TM-2_Duo_CPU_E7400_@_2.80GHz-with-gentoo-2.1
Timestamp of tree: Tue, 13 Dec 2011 10:00:01 +0000
distcc 3.1 x86_64-pc-linux-gnu [disabled]
dev-lang/python: 2.7.2-r3, 3.2.2
sys-devel/autoconf: 2.13, 2.68
sys-devel/automake: 1.4_p6-r1, 1.9.6-r3, 1.10.3, 1.11.1-r1
sys-devel/gcc: 4.4.6-r1, 4.5.3-r1
sys-kernel/linux-headers: 3.1 (virtual/os-headers)
Repositories: gentoo multimedia bitcoin sunrise dupa
Installed sets: @system
CFLAGS="-O2 -march=native -pipe -ggdb"
CONFIG_PROTECT="/etc /usr/share/config /usr/share/gnupg/qualified.txt /usr/share/themes/oxygen-gtk/gtk-2.0"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/gentoo-release /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo /etc/texmf/language.dat.d /etc/texmf/language.def.d /etc/texmf/updmap.d /etc/texmf/web2c"
CXXFLAGS="-O2 -march=native -pipe -ggdb"
FEATURES="assume-digests binpkg-logs distlocks ebuild-locks fixlafiles installsources news parallel-fetch preserve-libs protect-owned sandbox sfperms splitdebug strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
FFLAGS="-O2 -march=native -pipe -ggdb"
GENTOO_MIRRORS="http://mirror.netcologne.de/gentoo/ http://ftp.spline.inf.fu-berlin.de/mirrors/gentoo/ http://ftp-stud.hs-esslingen.de/pub/Mirrors/gentoo/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
PORTDIR_OVERLAY="/usr/local/portage/layman/multimedia /usr/local/portage/layman/bitcoin /usr/local/portage/layman/sunrise /usr/local/portage/moje"
USE="X Xaw3d a52 aac aalib acl acpi alsa amd64 amr amrnb amrwb apng async audiofile automount bash-completion berkdb bfq bineditor bluray branding bzip2 cairo cdda cddb cdparanoia cdr chdir cli consolekit cracklib crypt css cuda cups curl cxx dbus dirac disk-partition divx djvu dri dts dvd dvdr editor emboss emovix enca encode exchange exif faac faad fam fat ffmpeg fftw firefox firefox3 flac fontconfig fortran gd gdbm gdu geoip gif glitz gmedia gphoto2 gpm gtk hddtemp iconv id3 id3tag imagemagick inotify iostats ipod jpeg kde kde4 kdehiddenvisibility kipi kompare kpathsea kqemu ladspa lame laptop lastfm latex lcms libass libcaca libnotify libsamplerate lm_sensors lzma lzo mad matroska midi mjpeg mmap mmx mmxext mng modules mp3 mp3tunes mp4 mpeg mplayer mtp mudflap multilib musicbrainz ncurses nls nptl nptlonly nsplugin ntfs nvidia ogg okteta openal opencore-amr opengl openmp optimized-qmake pam pango pch pcre pdf plasma png policykit portage ppds pppd private-headers pulseaudio qt3 qt3support qt4 qthelp raster readline realmedia roe scanner schroedinger sdl secure-delete sensord session shaders slang sndfile solver soundtouch sourceview sparse spell sse sse2 sse3 sse4 sse4a ssl ssse3 startup-notification suid svg swat symlink sysfs syslog tcpd theora threads tiff truetype udev unicode upnp usb vaapi vamp vcd vdpau vorbis webkit wicd wifi wmf wmp wxwidgets wxwindows x264 xcb xcomposite xine xml xorg xscreensaver xv xvid xvmc zlib" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol" APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host 
authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="kexi words flow plan stage tables krita karbon braindump" CAMERAS="ptp2" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx" INPUT_DEVICES="evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LINGUAS="pl" PHP_TARGETS="php5-3" QEMU_SOFTMMU_TARGETS="x86_64 i386" QEMU_USER_TARGETS="x86_64 i386" RUBY_TARGETS="ruby18" USERLAND="GNU" VIDEO_CARDS="nvidia" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset: CPPFLAGS, CTARGET, INSTALL_MASK, LC_ALL, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS
"Please stop putting alpha software in ~amd64 tree without a good reason."
Maybe you shouldn't be running ~amd64 in the first place? Just a piece of advice.
~arch (~x86, ~ppc-macos)
The package version and the ebuild are believed to work and do not have any known serious bugs, but more testing is required before the package version is considered suitable for arch.
What made you believe that this unofficial alpha release works and does not have serious bugs?
Also, from the same devmanual page:
The package.mask file can be used to 'hard mask' individual or groups of ebuilds. This should be used for testing ebuilds or beta releases of software, and may also be used if a package has serious compatibility problems.
I guess it applies to alpha releases even more than it does for beta.
I did reproduce this bug on my x86 system. I have been using cdrtools for a few years now but never needed to create ISO images with files bigger than 4GB.
There are only alpha and stable releases for cdrtools; there have never been beta releases, pre-releases, release candidates, patch releases or the like, just alpha releases. These are just names; please don't take them too seriously. This is especially true for cdrtools, where the author refused to create new stable versions for some reason and only made alpha releases from 2004 up to 2010.
You say there is not enough testing of new releases. You are welcome to test every aspect of a program for every new release; you will soon find out that this is impossible. That is what the testing tree in Gentoo is for. We thank you for your report and will try to fix this bug.
Also, I do not consider this a serious bug: not much damage is done by a corrupt ISO file as long as you do not delete the original files immediately. Just use the stable version of cdrtools and you will be fine.
Also please note that attitude does not get this bug resolved any earlier, so please calm down a bit.
Actually, I could not reproduce it. I was just not able to view the image with an archive manager, but this is also the case for archives with files smaller than 4GB. The image containing a file larger than 4GB burned fine with cdrtools and with xfburn, which uses the libburnia libraries. The only problem I see is that 7za, which claims to be able to unpack ISO files, has problems with the image; maybe you use the same archive manager. For the I/O error, try to reproduce the issue without k3b and use mkisofs and cdrtools directly.
> There are only alpha and stable releases for cdrtools and there have never been
> beta releases, pre releases, release candidates, patch releases or the like
> just alpha releases. This are just names, please don't take the name to
No, these aren't just names; they are agreed-upon identifiers that express the author's level of confidence in the release. Beta means "It's basically done; some minor bugs need to be ironed out before release, please grab it and test it," while alpha means "I'm not even thinking about releasing this yet. You can use it if you want, but at your own risk, as it may still contain serious bugs."
> This is especially true for cdrtools where the author refused to
> create new stable versions for some reasons and only did alpha releases from
> 2004 on up to 2010.
The reason was probably that it wasn't stable enough to warrant a stable release. IMHO the correct solution is adding these alpha releases to the tree hard-masked. If one of them gains a reputation for being stable and relatively bug-free (for example by being used in another big distro, or even by spending some time in the tree without serious open bugs), only then unmask it and keyword it ~arch. If there is no such release, please at least give it solid testing before putting it in the ~arch tree. That obviously wasn't the case here, since this bug could have been caught even with the most basic testing.
> You say there is not enough testing for new releases you are welcome to test
> every aspect of a program for every new release, you soon will find out that
> this is impossible. This is what the testing tree in Gentoo is for. We thank
> you for your report and will try to fix this bug.
As I said earlier (by quoting the official Gentoo devmanual), the testing tree is for testing stable releases of packages that are believed to be free of serious bugs before giving them the "Gentoo Seal of Approval" (the stable tree). It is not for testing alpha software; that's what hard masking is for. If you start putting cdrtools alpha releases hard-masked into the tree, I will gladly step forward and test the hell out of them (occasionally). But right now I just masked all >3.0 releases, because I simply don't like suddenly realising I just wasted 5 DVD-Rs (partly because K3B's data verification is broken... but that's a different story).
> Also I do not consider this a serious bug, there is not much damage done if
> there is a corrupt iso file if you do not delete the original files
> immediately. Just use the stable version of cdrtools and be good.
Now this is just ridiculous. Are you saying that the main feature of cdrtools is burning files smaller than 4GB, and that burning files >4GB is just a small add-on feature? A DVD disc can hold 4.4 GiB of data, and it is perfectly reasonable to expect that many people will want to burn 4.4 GiB of data onto a DVD disc.
> Also please note that attitude does make this bug resolved earlier so please
> calm down a bit.
I'm sorry, but it's just irritating that half the time I want to burn something on my Gentoo system, something in the burning stack is broken, be it cdrtools or k3b. And it's even more irritating at times like this, when the issue could have been avoided by simply following the rules (hell, at one point there was an alpha release of k3b in the *stable* tree. Boy, what a ride that was...).
So, to sum things up, I'm just asking for one thing: please stick to the rules laid out in Gentoo's official manual. They are there for a reason.
(In reply to comment #5)
> Actually I could not reproduce it. I just was not able to view the image with
> an archive manager but this is also the case for archives with files smaller
> than 4GB. The image containing a file larger than 4GB burned fine with cdrtools
> and with xfburn which uses the libburnia libraries. The only problem I see is
> that 7za which claims to be able to unpack iso files has problems with the
> image, maybe you use the same archive manager.
Ok, I'll do some more testing and report back.
> For the I/O error try to
> reproduce the issue without k3b and use mkisofs and cdrtools directly.
What mkisofs command should I use? I've never used it directly and it seems quite complicated. K3b's command was:
mkisofs calculate size command:
/usr/bin/mkisofs -gui -graft-points -print-size -quiet -volid K3b data project -volset -appid K3B THE CD KREATOR (C) 1998-2010 SEBASTIAN TRUEG AND MICHAL MALEK -publisher -preparer -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /tmp/kde-piotrek/k3bV18565.tmp -rational-rock -hide-list /tmp/kde-piotrek/k3be18565.tmp -joliet -joliet-long -hide-joliet-list /tmp/kde-piotrek/k3bk18565.tmp -no-cache-inodes -udf -full-iso9660-filenames -iso-level 3 -path-list /tmp/kde-piotrek/k3bX18565.tmp
/usr/bin/mkisofs -gui -graft-points -volid K3b data project -volset -appid K3B THE CD KREATOR (C) 1998-2010 SEBASTIAN TRUEG AND MICHAL MALEK -publisher -preparer -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /tmp/kde-piotrek/k3br18565.tmp -rational-rock -hide-list /tmp/kde-piotrek/k3bi18565.tmp -joliet -joliet-long -hide-joliet-list /tmp/kde-piotrek/k3bD18565.tmp -no-cache-inodes -udf -full-iso9660-filenames -iso-level 3 -path-list /tmp/kde-piotrek/k3bG18565.tmp
Ok, first observations, still using K3B to create .isos:
7zip (both the Linux (app-arch/p7zip-9.20.1) and Windows (9.20 official release) versions) definitely fails to unpack the >4GiB .iso while having no problem with the <4GiB .iso. I tested this by right-clicking the image -> 7-Zip -> Test archive.
The weird thing is that Ark now successfully opens the >4GiB ISO and unpacks the large file. I distinctly remember not being able to open it the last time I tried. Same result with GNOME's archive manager (I think they're both based on libarchive). On the other hand, it now can't open other archives I had lying around that were not created with K3B/mkisofs. Weird.
As I don't have any DVD-Rs currently I can't test burning these images, but I tried mounting them with cdemu and the result was the same as with burning them - reading the >4GiB file resulted in I/O errors:
[186955.271873] attempt to access beyond end of device
[186955.271876] sr1: rw=0, want=9174816, limit=9174652
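For what it's worth, the want/limit values in such kernel messages are counts of 512-byte blocks, so the size of the mismatch between the image's metadata and its actual contents can be computed directly (a back-of-the-envelope sketch; the 512-byte unit is the kernel's convention for these messages):

```shell
# "want" and "limit" are 512-byte block counts: the filesystem asked
# for a block past the end of the device, short by this many bytes:
echo $(( (9174816 - 9174652) * 512 ))   # prints 83968
```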
These are different errors than those I got from the actual drive. Looking through my /var/log/messages, I *think* these are the errors I got then:
Dec 13 01:04:04 localhost kernel: [28142.857266] Buffer I/O error on device sr0, logical block 2294200
Dec 13 01:04:04 localhost kernel: [28142.857273] Buffer I/O error on device sr0, logical block 2294201
Dec 13 02:03:32 localhost kernel: [31710.982633] end_request: I/O error, dev sr0, sector 1040
Dec 13 10:47:24 localhost kernel: [32487.149720] end_request: I/O error, dev sr0, sector 16
Dec 13 10:47:24 localhost kernel: [32487.149725] Buffer I/O error on device sr0, logical block 2
Dec 13 11:52:22 localhost kernel: [36384.646828] Buffer I/O error on device sr0, logical block 2248352
Ok, now I'm starting to lose it. I just created another .iso to test https://bugs.gentoo.org/show_bug.cgi?id=343857 , that is, an image with a 4.4GiB file and a ~100KiB text file. This time it worked flawlessly: both 7zip and cdemu had no problem with either of those files.
So, I did some more testing, and it seems that if I burn a >4GiB file plus a small file, everything works fine. However, if I burn the same large file alone, it's unreadable.
This is strange, because I distinctly remember that when I stumbled upon this bug a month ago, the small files were readable but the large file was not in similar cases. I'm 100% sure about this, because I remember I was burning some Steam backups, which consist of one big ~4.4 GiB file and a couple of smaller ones.
Created attachment 299471 [details]
Compressed image (blows out to 4.1GiB)
I got an idea. I dd'd a 4.1 GiB file from /dev/zero and made a broken image out of it. As expected, it bz2'd neatly to 5.6 KiB, so here it is. Maybe analysing this image will shed some light on the problem.
(In reply to comment #11)
> Created attachment 299471 [details]
> Compressed image (blows out to 4.1GiB)
> I got an idea. I dd'd a 4.1 GiB file from /dev/zero and made a broken image out
> of it. As expected, it bz2'd neatly to 5.6 KiB, so here it is. Maybe analysing
> this image will shed some light on the problem.
Okay, I burned your image without problems. The DVD also mounted without problems, but when trying to copy the files back to disk I got an I/O error after 295788544 bytes had been copied. What command did you use to create the ISO? -iso-level 3 is required at minimum for files greater than 4GB.
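For context, the 4GB threshold comes from ISO 9660 recording each file's size in a 32-bit field; -iso-level 3 works around it by allowing a file to span multiple extents. A quick check of the boundary:

```shell
# A 32-bit size field caps a single ISO 9660 extent just below 4 GiB;
# -iso-level 3 lets mkisofs split larger files across multiple extents.
echo $(( 2 ** 32 ))       # 4294967296 bytes = 4 GiB, the hard boundary
echo $(( 2 ** 32 - 1 ))   # largest size representable in one extent
```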
Now I tested different ISOs created with dd and burned them onto a rewritable DVD. The sizes were 3999M, 4001M, 4095M, 4097M, one of about 4201M (the same size as the one you created), and a 4401M one from an already created ISO.
The commands I used were:
dd bs=1 count=0 seek=4097M if=/dev/zero of=file4097M
mkisofs -iso-level 3 -o file4097M.iso file4097M
cdrecord -dev=0,1,0 -v file4097M.iso
All of them burned without errors, and I was able to copy the burned files back to disk without I/O errors.
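The dd invocation above creates sparse files (count=0 with seek= only sets the length), so regenerating the whole batch of boundary-size test inputs costs almost no disk space. A sketch using coreutils only, with the image-creation and burning steps left out:

```shell
#!/bin/sh
# Create sparse test files straddling the 4 GiB boundary; no data
# blocks are actually written, only the file length is set.
for size in 3999M 4001M 4095M 4097M; do
    dd bs=1 count=0 seek="$size" if=/dev/zero of="test$size" 2>/dev/null
done
ls -ls test*M   # first column (blocks actually used) stays near zero
```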
K3B's mkisofs command is in one of my earlier comments. It contains "-iso-level 3".
I'll try your simpler mkisofs command later today when I'm on my home computer.
So what is the current state?
Does that mean the problem cannot be verified?
Ok, I tried your approach and couldn't reproduce it that way. An image containing the same file gives I/O errors when created with K3B and seems to work fine when created using your mkisofs command. I still tested it only using cdemu; I'll get some DVD-Rs tomorrow and try burning those images.
I noticed 7zip bails out on both images, but at this point I'm more convinced it's a bug in 7zip.
Anyway, I modified the summary and will post the full K3B output in a minute.
Created attachment 299587 [details]
Adding the KDE team as this looks like an issue with k3b.
Regarding the 7-Zip issue, there is already a SourceForge bug.
You should probably file an upstream bug at bugs.kde.org and link back to it here (but be warned, k3b is not really being maintained anymore, and I know that's a pity)
I found out how to reproduce it: mkisofs -iso-level 3 -udf -sort emptyfile
Step by step:
dd bs=1 count=0 seek=4100M if=/dev/zero of=badfile
mkisofs -iso-level 3 -udf -sort emptyfile -o badimage.iso badfile
It works fine if I omit the -udf or -sort option or if the file is smaller, say 3900M.
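One rough way to spot a broken image without burning it is to check whether the image's own ISO 9660 metadata agrees with its file size. This is only a sketch: the corruption in this bug sits in the UDF structures, so the check below catches only gross truncation, and the tiny synthetic image is a stand-in for a real one:

```shell
#!/bin/sh
# The Primary Volume Descriptor lives at sector 16 (byte 32768) and
# stores the volume size, in 2048-byte blocks, as a little-endian
# 32-bit value at byte 80 of the descriptor. If blocks*2048 differs
# from the real file size, metadata and image disagree -- the same
# class of mismatch the kernel reports as "access beyond end of device".
iso=demo.iso
dd bs=1 count=0 seek=40960 if=/dev/zero of="$iso" 2>/dev/null        # 20-block stand-in
printf '\024\000\000\000' | dd of="$iso" bs=1 seek=$((16*2048+80)) conv=notrunc 2>/dev/null
blocks=$(od -An -tu4 -j $((16*2048+80)) -N 4 "$iso" | tr -d ' ')     # assumes LE host
echo "declared: $((blocks * 2048)) bytes, actual: $(stat -c %s "$iso") bytes"
```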
Here's the weird thing: I found this similar bug on KDE bugzilla: https://bugs.kde.org/show_bug.cgi?id=257241.
It seems like it's exactly the same thing, but the reporter there claimed it happened in mkisofs 3.0, and Joerg said he fixed it in 3.01a01. In my experience it's kind of the opposite: it works fine in 3.0 and gets screwed up in 3.01a07.
I read https://bugs.kde.org/show_bug.cgi?id=257241 again and noticed I was mistaken. There, files smaller than 4G were corrupted when mixed with files larger than that; in this case it's the other way around. It looks more like bug #343857, which I reported earlier and closed just a minute ago.
They still seem similar though, especially since both are triggered by using the -sort option with an empty file. Perhaps even the fix for bug #343857 caused this one?
For the record, I put "badfile 1" into emptyfile and it still resulted in a broken image, so it's not just a matter of an empty sort file.
Still happens in 3.01_alpha08.
OK, it's not a KDE-related problem; removing us from CC.
I confirm this; this bug is actually back in this particular version. I've reported it upstream.
Please hard mask this version.
I have exchanged a couple of emails with Joerg over this bug. His firm stance was that mkisofs produces perfectly fine images and I'm experiencing a Linux kernel bug (and apparently an identical Windows 7 bug). Good luck trying to change his mind.
This problem doesn't persist anymore. Everyone, try recompiling and check.
I even tried from the command line --
mkisofs -sort odr -no-cache-inodes -udf -full-iso9660-filenames -iso-level 3 -o img.iso 'file > 4g.bin'
(In reply to comment #27)
> This problem doesn't persist anymore. Everyone, try recompiling and check.
> I even tried from the command line --
> mkisofs -sort odr -no-cache-inodes -udf -full-iso9660-filenames -iso-level 3
> -o img.iso 'file > 4g.bin'
I recompiled it and for me it's still there. Did you try the command line I suggested in comment 20? I tried both my command line and yours and both create broken images.
(In reply to comment #28)
> (In reply to comment #27)
> > This problem doesn't persist anymore. Everyone, try recompiling and check.
> > I even tried from the command line --
> > mkisofs -sort odr -no-cache-inodes -udf -full-iso9660-filenames -iso-level 3
> > -o img.iso 'file > 4g.bin'
> I recompiled it and for me it's still there. Did you try the command line I
> suggested in comment 20? I tried both my command line and yours and both
> create broken images.
I took the time and was able to reproduce this bug. It occurred with both command lines. Personally, I think this must be a problem with mkisofs, especially if you can reproduce it on both Windows 7 and Linux. I will leave this bug open; eventually it will be fixed.
I didn't get any mail in response.
[78520.717285] attempt to access beyond end of device
[78520.717287] loop1: rw=0, want=8398484, limit=8398480
But I've been burning dual-layer discs extensively (up to 7.9 GB), verifying each file with sha1sums, and also using UDF, all with k3b.
There's not a single corruption.
equery check app-cdr/cdrtools
* Checking app-cdr/cdrtools-3.01_alpha08 ...
306 out of 306 files passed
k3b runs the following --
/usr/bin/mkisofs -gui -graft-points -volid badfile -volset -appid K3B THE CD KREATOR (C) 1998-2010 SEBASTIAN TRUEG AND MICHAL MALEK -publisher -preparer -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /tmp/kde-de/k3bH10211.tmp -no-cache-inodes -udf -iso-level 3 -path-list /tmp/kde-de/k3bb10211.tmp
(In reply to comment #30)
> But I've been burning dual layer disks extensively (till 7.9 GB), I've been
> verifying each file with sha1sums, and I'm also using UDF all with k3b.
Have you been writing files that are by themselves larger than 4GB? From what I've seen, this bug appears only when you create images containing such files; if you, say, burn two 3GB files, the image will come out fine.
Reproducible with 3.01_alpha09
Maybe we should ask Joerg which OS he recommends. I'll ask him now.
I've found a related bug in cdrtools-3.
If you burn files 4GB or larger, the large file itself passes integrity checks, but the other (smaller) files do not.
This has to do with the -sort option again:
mkisofs -sort blnk -udf -iso-level 3
(In reply to comment #34)
> I've found a related bug in cdrtools-3.
Yeah, I've mentioned it in comment 20 and 21.
(In reply to comment #35)
> (In reply to comment #34)
> > I've found a related bug in cdrtools-3.
> Yeah, I've mentioned it in comment 20 and 21.
I did some experiments with alpha01, and the bug is fixed in it. There are no corruptions at all: the large file is fine, so are the small files, and there are no "[186955.271873] attempt to access beyond end of device" errors.
But alpha06 (at least) breaks it.
Still not fixed in 3.01_alpha10.
For this bug to be fixed, Jörg will have to realize it first.
In the meantime I suggest putting alpha01 in the tree, and maybe 3.0 should be masked.
But alpha01 has a different problem which I'm not willing to diagnose: under some combinations it results in data corruption and the same old "attempt to access beyond end of device".
As an alternative, for people looking for a quick solution for regular backups or other tasks, you can use squashfs, or tar (from stdin), and burn the result using cdrecord.
squashfs works perfectly with all desktop environments (automounting, showing up in the file manager's filesystem list, etc.).
cdrecord speed=1 dev=10,0,0 Dilbert-1989.tar.gz
Then you can tar -xf /dev/sr0
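The tar workaround above can be sketched as a full round trip; the cdrecord line is commented out since it needs a real drive, and the device address 10,0,0 is just the one from the example above:

```shell
#!/bin/sh
# Pack a directory, optionally burn the archive as raw data, then
# extract and verify. Self-contained: uses a scratch directory.
mkdir -p demo restore
echo "payload" > demo/file.txt
tar -czf backup.tar.gz demo
# cdrecord speed=1 dev=10,0,0 backup.tar.gz   # burn the raw archive
# tar -xzf /dev/sr0 -C restore                # read it straight back
tar -xzf backup.tar.gz -C restore             # local stand-in for the disc
cmp demo/file.txt restore/demo/file.txt && echo OK
```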
If you've given up hope of using K3B and are at the command line, a better alternative would be to simply not use the -sort option.
(In reply to comment #41)
> If you've given up hope of using K3B and are at the command line, a better
> alternative would be to simply not use the -sort option.
You have to admit that the iso* filesystem family is broken by design; who would design a filesystem that accepts only a handful of characters in filenames and imposes strange restrictions on them, making it feel like a filesystem designed for Windows?
On top of that, they try to fix the horribly broken filesystem (instead of reimplementing something else) by adding 'extensions' and 'levels', which further complicate the issue.
So I think squashfs is a much better option; it has built-in xz compression and officially supports sorting, along with almost all Unix attributes.
The main reason I commented: the problem appears to be fixed with a11. Please test (it may still be broken).
How exactly does it "appear to be fixed"? It's still broken for me in the same way in alpha11.
Join the discussion with Joerg then:
The problem with this report is that it claims that none of the archivers
can extract the file, and this does not help, as it just leads to bugs in
the related other software.
Thanks to Piotr Mitas, I now know that this is a UDF-specific problem.
There is a preliminary test version at:
that fixes two bugs in the UDF metadata. Please test and report.
Fixed in app-cdr/cdrtools-3.01_alpha12.