I'm just getting started with LXD/LXC, and while following some of the online tutorials for it I've found that the lxc launch command hangs. Judging by all of the results in Google when I search for "lxc hanging", this is not an uncommon problem.

For instance, I ran lxc launch ubuntu:lts/amd64 TestContainer and there was no result from that, even after waiting 12 hours.
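
If you run into the same thing, a couple of commands might at least show where it's getting stuck. These are just my guesses at useful diagnostics, not something from the tutorials:

lxc --debug launch ubuntu:lts/amd64 TestContainer

journalctl -u lxd

The --debug flag makes the client print each step it's taking (presumably it stalls while fetching the image), and journalctl shows the log for the .deb version's lxd service.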

Instead of using the Snap version of LXD or LXC I am using the .deb versions that come with Ubuntu 18.04 because I like to keep things the way that they are.

But after having these problems with the lxc command hanging, I figured that maybe I should try the snap version instead. 

Currently the .deb versions of the lxd and lxc packages installed are both 3.0.3.
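
If you want to double-check what you have installed, both commands will print their version:

lxc --version

lxd --version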

While I don't really want to switch to using the new Snap packaging system, I have learned from various sources that the snap version of lxd and lxc is more recent. Maybe a more recent version wouldn't have this problem of hanging when trying to download container images.

I got rid of the deb versions of lxd and lxc by running:

apt remove --purge lxd lxd-client

And then I installed the Snap version of LXD with:

snap install lxd

Everything went smoothly.
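
If you want to confirm what the snap gave you, this prints the installed version and channel:

snap list lxd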

Then I ran:

lxd init

and was greeted with the message:

-bash: /usr/bin/lxd: No such file or directory

Which meant my shell was still looking for lxd at its old .deb location, so I logged out and logged in again. There's a command you can run which will refresh this without having to log out and log back in again, but it takes longer to google what that command is than it does to log out and log in again.
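
For what it's worth, I'm fairly sure the command in question is hash -r, which tells bash to forget its cached locations of commands. The snap puts the binary under /snap/bin, but bash still remembered the old /usr/bin/lxd path from the .deb, hence the error above.

hash -r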

Next I ran (again):

lxd init

And I was met with many prompts. I accepted the defaults. Here are all of the questions and the default answers:

Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=31GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
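
If you answer yes to that last question, lxd prints the whole configuration as a YAML preseed, which you can later feed back in with lxd init --preseed to skip the prompts entirely:

cat preseed.yaml | lxd init --preseed

I'm going from memory here, so treat this as a rough sketch of the shape of the preseed rather than my exact output:

networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: zfs
  config:
    size: 31GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic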

And about the ZFS usage, I already had zfs installed. You should install the zfsutils-linux package on Ubuntu if you're going to use LXD with zfs.
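
On Ubuntu that's just:

sudo apt install zfsutils-linux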

Oh, by the way, the above lxd init operation failed with this message:

Error: Failed to create network 'lxdbr0': Failed to run: dnsmasq --strict-order --bind-interfaces --pid-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.pid --except-interface=lo --no-ping --interface=lxdbr0 --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.144.2.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.144.2.2,10.144.2.254,1h --listen-address=fd42:d013:40ba:944e::1 --enable-ra --dhcp-range ::,constructor:lxdbr0,ra-stateless,ra-names -s lxd -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd: Failed to run: dnsmasq --strict-order --bind-interfaces --pid-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.pid --except-interface=lo --no-ping --interface=lxdbr0 --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.144.2.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.144.2.2,10.144.2.254,1h --listen-address=fd42:d013:40ba:944e::1 --enable-ra --dhcp-range ::,constructor:lxdbr0,ra-stateless,ra-names -s lxd -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd: dnsmasq: failed to create listening socket for 10.144.2.1: Address already in use

And the wise people I found via Google said this is likely due to a conflict with Bind running on this system. And yes, Bind is running on this system. The whole Virtualmin stack is running, so there are a lot of ports in use.
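
If you want to see for yourself what's holding port 53, something like this will list it (ss ships with Ubuntu; netstat works too if you still have it):

sudo ss -lntup | grep ':53'

With Bind running, that shows named bound to port 53, which is why dnsmasq couldn't grab 10.144.2.1:53 for the new bridge.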

I decided I would stop bind and then run lxd init again. That was fine, and lxd init ran without errors. Though eventually I'm going to want to use bind as a nameserver.
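
For reference, on Ubuntu 18.04 the BIND service is called bind9, so that step was roughly:

sudo systemctl stop bind9

lxd init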

Then I tried the lxc command to see if it would hang:

lxc launch ubuntu:16.04 test

I did that and it ran as expected. 
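
To confirm it really came up, lxc list shows the container, its state, and the address it got from lxdbr0, and lxc exec drops you into a shell inside it:

lxc list

lxc exec test -- bash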

So the Snap version of lxd worked, though there are some confounding variables here. I don't recall whether bind was already running on this system back when I was using the .deb version.

Anyway, I'm going to look more into this. I like the idea of using containers, more so than using Docker. I just couldn't see the point of Docker for my use case. I want to isolate stuff and have, for instance, Solr running in its own environment. And I want to be able to back up these isolated environments in case something goes wrong, so I can set them up again really quickly. Or set them up somewhere else.
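
That backup part looks pretty straightforward with LXD, at least on paper. Hedging a bit since I haven't actually done this yet, and "solr" and "otherhost" here are just placeholder names:

lxc snapshot solr before-upgrade

lxc export solr solr-backup.tar.gz

lxc copy solr otherhost:solr

The first takes a point-in-time snapshot, the second writes the container out to a tarball you can stash somewhere, and the third copies it to another LXD host you've added as a remote.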

So if you've had this problem with the lxc launch command hanging, you might want to switch to using the Snap version of LXD/LXC or whatever it's called.

I suppose with my bind conflict there are some workarounds. One person suggests telling bind not to listen on the IP associated with the bridge used by lxd. That would make perfect sense.
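
Something like this inside the options block of /etc/bind/named.conf.options should do it, if I'm reading the BIND docs right. The 192.0.2.10 here is a placeholder for the server's real public IP; the point is just to leave out the 10.144.2.1 that lxdbr0 uses:

options {
    listen-on { 127.0.0.1; 192.0.2.10; };
    listen-on-v6 { ::1; };
};

That way named only binds to the addresses you list, and dnsmasq gets the bridge address to itself.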

Anyway, I think I'm going to get this figured out because I want some kind of container system in use. LXD is pretty darn simple and lightweight, and it's pretty well integrated into the Ubuntu world and way of doing things. I have to keep Virtualmin too, along with all the services it uses. I could put Virtualmin inside a container, but then I'd have to make sure every port Virtualmin uses is accommodated in the container configuration. Containers seem appropriate for services that use one or two ports, not many. That's just the way I feel about it.