OpenNebula image for Scaleway

Update 2017-02-16: The problems with the recipes are now fixed and the image does all the configuration automatically.

Scaleway is a provider of bare metal servers that work pretty much like VMs. They boot from PXE and have the OS disk exported over NBD, so spinning up new servers with different disk images is pretty fast. You can choose between x86 and ARM processors, different numbers of cores and amounts of RAM. More LSSD volumes (fancy name for the NBD volumes) can also be added to the machine if needed.

The processors themselves are not too fast (Atom) but given the number of cores, the RAM and the virtualization support, they are a good candidate for OpenNebula labs. I’ve been working on an image that has OpenNebula already installed and configured so that starting new labs is faster. There are still things that don’t work but at least the machine boots.

Scaleway uses Docker to create the images; the files are then dumped to a real filesystem that is used as the image. There were things that I didn’t really get from reading the documentation, so I hope this clears things up.

You can follow the documentation to create a new image with Docker. Make sure you read the requirements on that page before creating the server, especially:

  • Use “Docker” from the “ImageHub” images
  • Add a second LSSD of 50 GB. This cannot be added while the server is running

I’ve chosen a “C2S” server but a VM might be enough.

You can find a list of images and their sources at the following URL.

I’ve browsed through them to understand the process. Unfortunately, all the application images are based on Ubuntu and I wanted my image to be CentOS. As the base I’ve picked the CentOS source. The important files here are the Dockerfile, the Makefile and the script inside overlay-image-tools. Change the Makefile so the information is correct and strip the Dockerfile down to the bare minimum:

FROM scaleway/centos:amd64-latest

# Environment
ENV SCW_BASE_IMAGE scaleway/centos:latest

# Adding and calling builder-enter
COPY ./overlay-image-tools/usr/local/sbin/scw-builder-enter /usr/local/sbin/
RUN set -e; case "${ARCH}" in \
    armv7l|armhf|arm) \
        touch /tmp/lsb-release; \
        chmod +x /tmp/lsb-release; \
        PATH="$PATH:/tmp" /bin/sh -e /usr/local/sbin/scw-builder-enter; \
        rm -f /tmp/lsb-release; \
        ;; \
    x86_64|amd64) \
        yum install -y redhat-lsb-core; \
        /bin/sh -e /usr/local/sbin/scw-builder-enter; \
        yum clean all; \
        ;; \
    esac

RUN yum update -y

# Our installation instructions

# Clean rootfs from image-builder
RUN /usr/local/sbin/scw-builder-leave

The FROM clause is changed to use the Docker image created by Scaleway. There is an update command as the base image is a bit old.

For this image, instead of cramming all the installation instructions into the Dockerfile, I’ve decided to create a script that does all the work. This is easier to debug and can be used without Docker. The script is copied to the image and executed. The workflow to develop the script was as follows:

  • make build: this creates the docker image with the previous instructions, basically yum update -y
  • make shell: opens a shell in a docker container with the previously built image
  • write and debug
  • copy the final script back to the host machine
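I haven’t reproduced Scaleway’s actual Makefile here, but in my setup the targets boil down to roughly the following sketch (the image name, mount point and exact recipes are assumptions, not Scaleway’s originals):

```make
# Hypothetical sketch of the make targets (names and paths assumed)
NAME    = opennebula-centos
ROOTDIR = /mnt/image          # mount point of the second LSSD volume

build:                        # build the Docker image from the Dockerfile
	docker build -t $(NAME) .

shell: build                  # interactive container for debugging the scripts
	docker run -it --rm $(NAME) /bin/bash

install: build                # dump the image rootfs onto the attached volume
	docker run --name $(NAME)-tmp $(NAME) /bin/true
	docker export $(NAME)-tmp | tar -xf - -C $(ROOTDIR)
	docker rm $(NAME)-tmp
```

The `install` target is the interesting one: `docker export` flattens the container filesystem to a tar stream, which is exactly the “dump the files to a real filesystem” step described above.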

I didn’t want to hardcode OpenNebula passwords or host SSH keys, so another script is added that is executed on the first boot and does the following:

  • generates oneadmin credentials
  • starts OpenNebula for the first time so it bootstraps the database with the new password
  • changes datastore drivers to qcow2
  • creates a new network using virbr0 bridge
  • populates known_hosts with localhost ssh fingerprint
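As an illustration, the credential-generation step could look something like this minimal sketch (the `one_auth` path defaults to /tmp here so the sketch runs anywhere; the real file lives under oneadmin’s home at /var/lib/one/.one/one_auth):

```shell
#!/bin/sh
# Minimal sketch of first-boot credential generation (paths are
# assumptions; the real script does more work afterwards).
set -e

# Normally /var/lib/one/.one/one_auth; overridable so the sketch runs anywhere
ONE_AUTH="${ONE_AUTH:-/tmp/one_auth}"

# Random password instead of one hardcoded into the image
PASSWORD=$(openssl rand -hex 16)

echo "oneadmin:$PASSWORD" > "$ONE_AUTH"
chmod 600 "$ONE_AUTH"
echo "credentials written to $ONE_AUTH"

# The real script then (not runnable in this sketch):
#   - starts OpenNebula once so it bootstraps the DB with this password
#   - switches the default datastores to the qcow2 drivers
#   - creates a network on the virbr0 bridge and populates known_hosts
```

OpenNebula reads the one_auth file when oned starts for the first time and bootstraps the database with that password, which is why the file has to exist before the service is started.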

This is done by adding the script to rc.local, but that turned out to be a bad idea as not everything works correctly.

So the instructions added to the Dockerfile are:

COPY ./ /
COPY ./ /
RUN bash -x /

After all the scripts are working, you can issue make install to build the Docker image and write it to the second volume attached to the server.

The next step is creating a snapshot of this volume, but it cannot be done while the server is running. The problem is that there is no poweroff state; you have to “archive” the server. It took around 15 minutes to copy the images to the archival storage. I think this process could be streamlined by allowing snapshots of running servers.

Another thing the UI does not tell you is that archiving does not do a proper shutdown: it disconnects power from the machine. Issue halt first to make sure that all data is on the disks. I’ve also copied the files to my laptop before archiving.

When the server is finally archived you can go to its volumes and create a new snapshot. Set a proper name, as this will be the name that appears on the volume of new servers created from the image (it can be renamed later). Now, in the snapshots list, select the one you’ve just created and click “Image from Snapshot”. Make sure that you pick the correct architecture.

And that’s it, you can create new servers from that image.

Things that didn’t go as planned:

  • the docker container didn’t have systemd running, so some of the testing was done by starting the daemons manually
  • rc.local was executed before the SSH server was up, so the known_hosts file was not populated
  • I forgot to add localhost as a hypervisor
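The known_hosts problem above could be worked around by retrying instead of assuming sshd is already up. A hedged sketch (the file path and retry count are assumptions, not what the image currently does):

```shell
#!/bin/sh
# Sketch: retry ssh-keyscan until sshd answers. The real target file
# would be /var/lib/one/.ssh/known_hosts; /tmp is used so this can run anywhere.
KNOWN_HOSTS="${KNOWN_HOSTS:-/tmp/known_hosts}"

: > "$KNOWN_HOSTS"
for i in 1 2 3 4 5; do
    # -T 2: give up on each individual attempt after two seconds
    if ssh-keyscan -T 2 localhost >> "$KNOWN_HOSTS" 2>/dev/null \
       && [ -s "$KNOWN_HOSTS" ]; then
        break
    fi
    sleep 1
done
echo "known_hosts has $(wc -l < "$KNOWN_HOSTS") entries"
```

A systemd unit ordered After=sshd.service would be the cleaner fix, but a retry loop keeps the script usable from rc.local too.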

I still don’t have a developer account, so the image cannot be shared, but you can create one by downloading the files:

Also, instead of creating a new image, you can start a CentOS 7 server and execute the scripts manually.