It used to be that when a livecd was booted with a cable plugged into the ethernet port corresponding to eth0, the network would work. Now net.eth0 does not exist. This may be an effort to help deprecate ifconfig, but the network no longer works out of the box. On one server I was able to symlink net.lo to net.eth0 and it worked; I think with the newest livecd even that doesn't work.

Reproducible: Sometimes

Steps to Reproduce:
1. Boot with the current amd64 install cd
2. Network will not work by default
3. Sometimes it is not even possible to symlink to net.lo
4. No packets are transmitted through eth0
5. Even when "network" is started there is no connectivity

Actual Results: Network did not work.

Expected Results: eth0 or eth1, whichever is plugged in, should have connected and used dhcp for the install.
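For reference, the symlink workaround mentioned above relies on the OpenRC convention that per-interface services are symlinks to the net.lo script. A minimal sketch, assuming root on the livecd and that the NIC really came up as eth0 (the interface name is an assumption, so check /sys/class/net first):

```shell
# Check which interface names the kernel actually assigned:
ls /sys/class/net

# OpenRC dispatches net.<iface> services by name, so a symlink to the
# net.lo script is enough to create the service (assumes eth0 exists):
ln -s /etc/init.d/net.lo /etc/init.d/net.eth0
/etc/init.d/net.eth0 start
```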
On both the livecd for the server where I could repair the problem and the newer one where I couldn't, I get the warning: net.eth0 has started, but is inactive
What network card do you have in the box and what is the exact version of the install-cd?
I am using the current install cd. I found a similar problem when I used an old one on this server; it's a PowerEdge 1850. The network card is an Intel e1000, I believe. http://en.gentoo-wiki.com/wiki/Dell_PowerEdge_1850 LIVECD Version: install-amd64-minimal-20110811.iso
Also a strange message: "cannot start autoconfig as gpm-pre would not start". (And on this server it's acting worse than the last one; the last one I could get working after making a symlink.)
When the cable was in NIC 2, eth0 worked, while net-setup detected no hardware settings for eth1; this may only be an issue with dual NICs.
(In reply to comment #7) > When cable was in NIC 2 it then had eth0 work, and eth1 on net-setup detected > no hardware settings, this may be only an issue with dual nics. Does your box have a dual-NIC card or two identical cards? If so, you need to pass the NIC options to the kernel, or it will likely default to enabling only one of the cards/ports.
@kernel: Is there anything we can do here or should we just document that users will have to pass the appropriate kernel parameters to configure dual-nic cards?
@kernel: ping
*ping*
(In reply to comment #9) > @kernel: > > Is there anything we can do here or should we just document that users will > have to pass the appropriate kernel parameters to configure dual-nic cards? ping ?
This seems to work fine here with multiple NICs. I think this was fixed a while back, now that we are using the "dhcpcd" init script to bring up all available interfaces via DHCP.
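For anyone verifying this on current media, a quick check along the lines of the comment above (a sketch; service and tool names as found on the Gentoo minimal cd, run as root):

```shell
# The dhcpcd service attempts DHCP on every detected interface:
/etc/init.d/dhcpcd start

# Confirm which interfaces actually obtained an address:
ip addr show
```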
Kernel pong. Assuming this may still be a problem for some people, it depends on how common those kinds of cards are: if there are just a few, documenting it somewhere should suffice. If there is a reasonable number, setting it by default would make sense, provided it doesn't come at a cost to non-multiple-NIC users and would simply do nothing for them. Otherwise it may be interesting to introduce a kernel option to support multiple NICs that would enable the various options. In the end, multiple options are possible here, and it depends on how many people we're targeting, how the kernel parameters function, and so on; I can't tell in advance what works out best. Though, at a minimum, documenting it would be a good start...
Created attachment 349670 [details] screenshot of success See attached screenshot, this is working exactly as desired, right? Whether you have 1 nic or 10, it brings up the ones it can via dhcp, and you can use net-setup or hand config otherwise.
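The hand-configuration path mentioned in the comment above, as a sketch (the interface name eth1 and the addresses are assumptions for illustration):

```shell
# Interactive, menu-driven configuration of one interface on the livecd:
net-setup eth1

# Or configure it directly with iproute2 (example addresses, assumed):
ip addr add 192.168.1.10/24 dev eth1
ip route add default via 192.168.1.1
```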
(In reply to Ben Kohler from comment #15) > Whether you have 1 nic or 10, it brings up the ones it can via dhcp, and you can use net-setup or hand config otherwise. Are you aware that this topic is about _one_ card providing multiple NICs? It might be possible that this still varies from card to card. For instance, we need to know whether it works for the mentioned Intel e1000...
Sorry, I thought this was about the init system & livecd setup. It seems this is just about instructions on how to get predictable fine-grained control of certain multi-interface NICs.
It seems I've missed Tom's replies from the kernel team - apologies for that. Given the questions raised, I'm going to close this bug as NEEDINFO. If anyone is still having issues with multi-nic cards, feel free to reopen the bug and provide the requested information.