It looks like it's related to the fixed bug #473866, but it still happens. I've tested the 3.15 and 3.16 series, too, and it's still not fixed. A kernel with the same config built from gentoo-sources boots perfectly fine. It seems similar to the problem discussed in the thread pointed to by the URL. Reproducible: Always
Created attachment 385722: boot log, part 1
Created attachment 385724: boot log, part 2
Sorry for the screenshots, but I have no way to attach to the console via a serial port.
Bouncing it off upstream.
1. Can you enable frame pointers to get a better backtrace?
2. Isn't it possible to add a virtual serial port to the guest VM, so that Linux could use a serial console and log there?
3. Can you upload your hv_vmbus.ko?
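For reference, a sketch of what points 1 and 2 could look like in practice. This is a config fragment, not an exact recipe: the VM name and pipe path below are made-up examples, and the host-side PowerShell step requires access to the Hyper-V host (which, as it turns out later in this thread, isn't available here).

```
# Guest kernel config (point 1): frame pointers for better backtraces
CONFIG_FRAME_POINTER=y

# Guest kernel config (point 2): a serial console to log to
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
# ...and append to the guest kernel command line:
#   console=ttyS0,115200 console=tty0

# Host side (Hyper-V PowerShell, must be run on the host):
# map the guest's COM1 to a named pipe. "myvm" and the pipe
# name are hypothetical.
#   Set-VMComPort -VMName myvm -Number 1 -Path \\.\pipe\myvm-com1
```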
Created attachment 385764: hv_vmbus.ko
Created attachment 385766: boot log, part 1
Created attachment 385768: boot log, part 2
Created attachment 385770: boot log, part 3
Boot log part 1 hangs for a while before printing a call trace. Part 2 shows the first call trace, and after some time the kernel prints another call trace (part 3). The problem with the VM is that I don't have access to the hypervisor to connect to the serial port. I've asked the admins about that, but I have no answer yet.
(In reply to Amadeusz Żołnowski from comment #10)
> The problem with the VM is that I don't have access to the hypervisor to
> connect to the serial port. I've asked the admins about that, but I have
> no answer yet.

It's not possible. I don't know whether Hyper-V is just that limited (very possible) or the admins don't have enough will to find out how to connect.
what happens if you disable KERNEXEC in the guest config?
(In reply to PaX Team from comment #12)
> what happens if you disable KERNEXEC in the guest config?

PAX_KERNEXEC? It is and was disabled. I haven't enabled the grsecurity options yet. I wanted to have a booting kernel first. (-:
(In reply to Amadeusz Żołnowski from comment #13)
> PAX_KERNEXEC? It is and was disabled. I haven't enabled the grsecurity
> options yet.

then it's a different issue than #473866, and right now i don't have an idea; the guest kernel is stuck in the vmcall insn, and it's Hyper-V that isn't returning from it...
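For context on what "stuck in the vmcall insn" means here: on Hyper-V, the guest issues hypercalls by calling into a hypercall page provided by the hypervisor, which on Intel hardware contains a vmcall instruction; control returns to the guest only when Hyper-V completes the call. Below is a rough, non-runnable kernel-context sketch of the x64 convention from the Hyper-V TLFS, simplified from what Linux's hv_do_hypercall does (the names and exact constraints here are illustrative, not the kernel's actual code):

```c
/* Illustrative only -- kernel context, not runnable as a user program.
 * Hyper-V x64 hypercall convention (per the TLFS):
 *   RCX = hypercall control code, RDX = input page GPA,
 *   R8  = output page GPA, RAX = status on return.
 * The call lands on the hypercall page, which holds a vmcall (Intel)
 * or vmmcall (AMD) instruction.
 */
static void *hv_hypercall_page;   /* mapped from the hypervisor at init */

static u64 do_hv_hypercall(u64 control, u64 input_gpa, u64 output_gpa)
{
    u64 status;

    __asm__ __volatile__("mov %3, %%r8\n\t"
                         "call *%4"  /* -> hypercall page -> vmcall */
                         : "=a" (status), "+c" (control), "+d" (input_gpa)
                         : "r" (output_gpa), "m" (hv_hypercall_page)
                         : "cc", "memory", "r8", "r9", "r10", "r11");
    return status;
}
```

If the hypervisor never completes the hypercall, the guest sits on that vmcall with no way to make progress on its own, which matches the hang described in this bug.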
(In reply to PaX Team from comment #14)
> then it's a different issue than #473866, and right now i don't have an
> idea; the guest kernel is stuck in the vmcall insn, and it's Hyper-V that
> isn't returning from it...

Yup, but the same config with gentoo-sources (which is mostly vanilla, isn't it?) produces a fully functional kernel... Maybe you could point me to some small fragments where I could experiment with a patch? Maybe it's something to do with atomic operations?
sorry, i don't know where you could experiment here, as the ball's already in Hyper-V's court, so to speak; the kernel is simply waiting for the vmcall to return. what would help is knowing what Hyper-V is doing, but i guess that's not trivial to debug. perhaps there are some logs on the host? also, what Hyper-V version are you using? i've had success reports before for at least Windows 2012 R2.
(In reply to PaX Team from comment #16)
> perhaps there are some logs on the host?

My VPS provider is neither willing nor able to cooperate, so no chance of that, unfortunately. So… I guess we have to close this as RESO NEEDINFO or something like that. But thanks anyway. :-)

> also, what Hyper-V version are you using? i've had success reports before
> for at least Windows 2012 R2.

It's 2012 R2, too.
(In reply to Amadeusz Żołnowski from comment #17)
> My VPS provider is neither willing nor able to cooperate, so no chance of
> that, unfortunately.

I'm sorry about this too. I just don't have access to this environment to help out. We've been frustrated with virtualization on and off because of these sorts of issues. We have concentrated on qemu/kvm and VirtualBox, because at least there we have a chance of testing.
(In reply to Anthony Basile from comment #18)
> We have concentrated on qemu/kvm and VirtualBox, because at least there we
> have a chance of testing.

No problem, that's understandable. :-)