Oracle VM can be a little tricky to drive, so I'm using this post to document some gotchas and what I did to resolve them.
When starting up VMs from OVM Manager, you may encounter this error:
Error: Device 1 (vif) could not be connected. Hotplug scripts not working
Note that the device number could be 0, 2, etc., depending on how many vNICs you have configured.
What's happening is that the vNICs are not being initialized by the Hotplug scripts within the specified time limit. It could also be a side effect of a Xen memory leak, which is a known bug. If the VM can be started without any vNICs, you'll know you have this issue. Here are three things to try if you find yourself in this situation.
Solution #1: Add the vNICs one at a time.
In this example, RACNODE2_VM has an ID of 0004fb00000600000703eab3e0c76af5, and we know from creating the VM storage repositories earlier that VM_Filesystems_Repo has an ID of 0004fb000003000059416081b6e25e36. If we log onto the Oracle VM Server itself, we can locate this VM's configuration file, called vm.cfg:
[root@ovmsvr]# cd /OVS/Repositories/0004fb000003000059416081b6e25e36/VirtualMachines/0004fb00000600000703eab3e0c76af5
[root@ovmsvr]# pwd
/OVS/Repositories/0004fb000003000059416081b6e25e36/VirtualMachines/0004fb00000600000703eab3e0c76af5
[root@ovmsvr]# ls -l
total 0
-rw------- 1 root root 981 Dec 17 22:03 vm.cfg
The vm.cfg file contains the VM's configuration data. The line beginning vif = specifies the vNIC MAC addresses and their bridges. For example:
vif = ['mac=00:21:f6:d2:45:a0,bridge=c0a80000', 'mac=00:21:f6:e6:f6:71,bridge=101393ebe0', 'mac=00:21:f6:2f:de:c5,bridge=10a42e83d3']
Make a copy of this line and comment it out, so you have a record of what it looked like originally. Then edit the live line down to just the first mac and bridge pair and start the VM. If that works, stop the VM, edit the line to include the first two mac and bridge pairs, and start it again. If that works too, try starting the VM with the line containing all three pairs (i.e. as it looked originally). This has worked for me, but not always.
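If you'd rather script that first step than edit by hand, it can be sketched with sed. Everything below runs against a throwaway vm.cfg built from the example line above; only point it at a real file once you're happy with what it does:

```shell
# Build a scratch vm.cfg containing the example vif line from this post.
cfg=vm.cfg   # in real life: /OVS/Repositories/<repo ID>/VirtualMachines/<VM ID>/vm.cfg
cat > "$cfg" <<'EOF'
vif = ['mac=00:21:f6:d2:45:a0,bridge=c0a80000', 'mac=00:21:f6:e6:f6:71,bridge=101393ebe0', 'mac=00:21:f6:2f:de:c5,bridge=10a42e83d3']
EOF

# Comment out the original vif line (keeping a record of it) and add a
# replacement containing only the first mac/bridge pair. GNU sed's "&" is
# the matched text; "\n" in the replacement starts a new line.
sed -i "s|^vif = .*|# &\nvif = ['mac=00:21:f6:d2:45:a0,bridge=c0a80000']|" "$cfg"
cat "$cfg"
```

For the later steps, re-run with the first two, then all three, mac and bridge pairs in the replacement.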
Solution #2: Restart the ovs-agent service.
The problem may be caused by a memory leak, which is a known bug. This is documented on MOS as Doc ID 2077214.1. What that note effectively tells you to do is restart the ovs-agent service:
[root@ovmsvr ~]# service ovs-agent restart
Stopping Oracle VM Agent: [ OK ]
Starting Oracle VM Agent: [ OK ]
This is a quick and easy workaround and has worked for me.
Solution #3: Edit and re-start.
The problem could be the value of DEVICE_CREATE_TIMEOUT, which defaults to 100. According to MOS Doc ID 1089604.1, this timeout can be increased, giving the Hotplug scripts more time to complete their tasks. This is done by editing /etc/xen/xend-config.sxp.
Find the line containing the device creation timeout setting and increase its value.
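For the record, here's roughly what the before and after look like. Note that the parameter name below is my recollection of the stock Xen spelling in xend-config.sxp, and 1000 is just an illustrative value; check the MOS note for the exact line and the value it recommends:

```
# /etc/xen/xend-config.sxp
# Before (the default):
(device-create-timeout 100)
# After (a larger value, for example):
(device-create-timeout 1000)
```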
I have had some success with this solution, but on occasion VMs have refused to start after making the change. I got around that by using Solution #2.
Note that for this change to take effect you need to restart the OVM Server, which takes ALL your VMs down!
If you've not logged into OVM Manager for a while, you may find it's taken an extended vacation. This can take the form of a really long wait after you've entered your username and password, followed by an “unexpected” error courtesy of the Java stack.
I know, right? How unusual for Java to have had a problem. I mean it’s so rock solid and stable!
Now, you're probably trying to log in because you have something to do, and you don't have a week to trawl through countless log files finding out why Java has had a hissy fit. So the quickest and simplest way to get logged into OVM Manager after seeing this error is to stop and restart it:
[root@ovmmgr ~]# service ovmm stop
Stopping Oracle VM Manager [ OK ]
[root@ovmmgr ~]# service ovmm start
Starting Oracle VM Manager [ OK ]
Once OVM Manager is back up and running, try logging in again.
Every now and then stopping a VM puts the VM into a ‘Stopping’ state in OVM Manager and that’s where it stays. Aborting the stop or restart and trying again has no effect other than to wind you up. Brilliant! Fortunately there’s a back door method to kick the VM to death, then resurrect it safely. For this you will need the ID for the VM. This can be obtained via OVM Manager.
Click the arrowhead to the left of the VM name. This opens up the Configuration tab. Make a note of the long VM ID string.
Next, log into the OVM server as root and locate the directory where the VM configuration file is located (vm.cfg). The path will be something like this:
/OVS/Repositories/<ID directory name>/VirtualMachines/<VM ID>/vm.cfg
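If you don't fancy walking that tree by hand, find can dig the file out for you. The snippet below mocks up the repository layout locally just to demonstrate the command (the IDs are the examples from this post); on a real OVM server you'd search /OVS/Repositories directly:

```shell
# Mock up the repository layout described above (demo only).
vmid=0004fb0000060000129f6b1374e4c8f4
repo=./OVS/Repositories/0004fb000003000059416081b6e25e36   # stand-in for /OVS/Repositories/<repo ID>
mkdir -p "$repo/VirtualMachines/$vmid"
touch "$repo/VirtualMachines/$vmid/vm.cfg"

# The actual search: locate the vm.cfg belonging to this VM ID.
find ./OVS/Repositories -path "*/$vmid/vm.cfg"
```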
Next, run two xm commands: one to destroy the VM (this doesn't actually destroy the VM itself; it destroys the processes running it) and one to create the VM again from its configuration file. Here's an example (where the string ending in 8f4 is the VM's ID):
[root@ovmsvr ~ ]# xm destroy 0004fb0000060000129f6b1374e4c8f4
[root@ovmsvr ~ ]# xm create -c /OVS/Repositories/0004fb000003000059416081b6e25e36/VirtualMachines/0004fb0000060000129f6b1374e4c8f4/vm.cfg
Whenever I've tried this, the command line hung until I stopped and started the VM again via OVM Manager, at which point I got my cursor back. That's likely because the -c flag attaches your terminal to the VM's console; Ctrl+] should detach it. YMMV.