Cloud pool for City Cloud: a short tutorial

City Cloud, Sweden’s largest IaaS provider, recently released an OpenStack-based infrastructure, making it the first multi-location OpenStack-powered IaaS offering in Europe. We recently blogged about how openstack4j added support for Keystone v3 authentication, and how to use it to log in to City Cloud and make use of all their OpenStack goodness.

Yesterday, we blogged about releasing version 3 of our cloud pool API. Today, we blog about how to configure and use our OpenStack cloud pool with City Cloud.

Clone and go!

First of all, ensure that you have git, a Java compiler (Java 1.7 or later), and Maven (3.x) installed. Head over to our GitHub page and clone the scale.commons and scale.cloudpool repositories:

git clone https://github.com/elastisys/scale.commons.git
git clone https://github.com/elastisys/scale.cloudpool.git

Now, compile using Maven:

cd scale.commons
mvn clean install
cd ../scale.cloudpool
mvn clean install

Maven will do its thing, and the end result will be a JAR file for each cloud pool implementation. We will soon use the one in the openstack/target directory, but first, let’s write a configuration file!


As described in a previous blog post, we need to figure out our City Cloud OpenStack credentials, since they are different from the ones used to log in to the control panel. As we wrote in that post:

Of course, the first thing you need to do is head over to City Cloud and sign up. Once your new account is ready, log in to your control panel. Once there, expand the API section, select the Native OpenStack API item, ensure that the API is active, and create a new user. Assign it a password and keep a note of it. Also note your user’s ID.

Next, head over to the Settings section in the navigation sidebar. There you will find “Manage projects”. Click on it and note your project ID. City Cloud uses project-scoped authentication, so we will need all three: the OpenStack API user ID and password, plus the ID of the project the resources will belong to.

Keep that information handy; we will need it in our configuration file. But first, a note about SSH keys.

Important note about SSH keys

While City Cloud offers a way to define SSH key pairs in its Control Panel under Servers -> Keypairs, these key pairs do not actually become available to OpenStack instances started via the API. One therefore has to define the key pair using the OpenStack API instead. We have brought this to City Cloud’s attention, and perhaps the process will be simplified in the future. As it stands, however, following the official CLI tools documentation for adding a keypair is your best bet.
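As a rough sketch of what that can look like with the nova CLI (the key file path and the key pair name are placeholders of our choosing, and we assume your OS_* environment variables are already set up for the City Cloud endpoint):

```shell
# Generate a local RSA key pair, if you do not already have one
# (the path is just an example -- adjust to taste):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/citycloud_key -N ""

# Register the public key with OpenStack under the name "citycloud-key";
# this name is what goes into the "keyPair" field of the configuration:
nova keypair-add --pub-key ~/.ssh/citycloud_key.pub citycloud-key

# Verify that the key pair is now known to OpenStack:
nova keypair-list
```

The name you register here, not the name shown in the Control Panel, is the one the cloud pool configuration should refer to.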

Once all information and SSH keys are in place, we can write our configuration file. The following template is ripped right from the README file on GitHub, but adjusted for City Cloud’s authentication in the driverConfig section:

{
  "cloudPool": {
    "name": "MyScalingPool",
    "driverConfig": {
      "auth": {
        "keystoneUrl": "",
        "v3Credentials": {
          "scope": {
            "projectId": "YOUR PROJECT ID HERE"
          },
          "userId": "YOUR OPENSTACK API USER ID HERE",
          "password": "YOUR OPENSTACK API PASSWORD HERE"
        }
      },
      "region": "YOUR CITY CLOUD REGION HERE",
      "assignFloatingIp": true
    }
  },
  "scaleOutConfig": {
    "size": "1C-1GB",
    "image": "Ubuntu 14.04 - LTS - Trusty Tahr",
    "keyPair": "YOUR SSH KEY HERE",
    "securityGroups": ["YOUR SECURITY GROUPS HERE"],
    "bootScript": [
      "sudo apt-get update",
      "sudo apt-get install -y --force-yes apache2"
    ]
  },
  "scaleInConfig": {
    "victimSelectionPolicy": "NEWEST_INSTANCE",
    "instanceHourMargin": 0
  },
  "alerts": {
    "subject": "[elastisys:scale] alert for cloud pool 'CityCloudDemoPool'",
    "recipients": ["", ""],
    "sender": "noreply@",
    "severityFilter": "INFO|NOTICE|WARN|ERROR|FATAL",
    "mailServer": {
      "smtpHost": "",
      "smtpPort": 25,
      "authentication": null,
      "useSsl": false
    }
  },
  "poolUpdatePeriod": 10
}

Once the placeholders are filled out with real values, this configuration will start instances of stock Ubuntu 14.04 images in your region of choice (Kna1 for Karlskrona, Sto2 for Stockholm, and Lon1 for London) and install the Apache web server upon provisioning. If you would rather, e.g., ensure that Puppet is installed and have it configure your new node, that substitution is easy to make in the bootScript section.
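Before starting the pool, it can be worth sanity-checking that the file is valid JSON and that no placeholder slipped through. A minimal sketch (the required top-level key names follow the template above; the checks are ours, not part of the cloud pool itself):

```python
import json

# Top-level sections that the template above contains:
REQUIRED_TOP_LEVEL = ("cloudPool", "scaleOutConfig", "scaleInConfig",
                      "alerts", "poolUpdatePeriod")

def config_problems(text):
    """Return a list of obvious problems with a cloud pool configuration."""
    try:
        config = json.loads(text)
    except ValueError as e:
        return ["not valid JSON: %s" % e]
    problems = ["missing section: %s" % key
                for key in REQUIRED_TOP_LEVEL if key not in config]
    if "HERE" in text:
        problems.append("placeholder values still present")
    return problems

# Typical use: print(config_problems(open("config.json").read()) or "looks sane")
```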

Starting up

If the correctly filled out configuration file is saved as config.json in the openstack directory under scale.cloudpool that we cloned earlier, we can start the cloud pool as follows:

java -jar target/cloudpool.openstack-*-SNAPSHOT.jar --config config.json

This will run your freshly compiled OpenStack cloud pool with your configuration set as the initial configuration, and its REST server listening on port 8443. Run the above command with --help to see the options you can pass to, e.g., change the HTTPS port, configure certificates, and so on.

Using the cloud pool

Now that everything is up and running, let’s have some fun with it!

We will use the REST API via curl, so let’s define some options up front so we don’t have to repeat ourselves over and over:

# client certificate options, if certificates are configured
export CLOUD_POOL_AUTH="--key-type pem --key credentials/client_private.pem --cert-type pem --cert credentials/client_certificate.pem"

# general options for curl
export CLOUD_POOL_OPTS="-v --insecure ${CLOUD_POOL_AUTH}"

Let’s use the REST API to start 2 instances of our application:

curl ${CLOUD_POOL_OPTS} -X POST https://localhost:8443/pool/size --header "Content-Type: application/json" -d '{ "desiredSize": 2 }'

The call returns immediately, and the cloud pool will now try to provision 2 instances as we have requested. We can query the cloud pool’s size repeatedly using watch:

watch curl ${CLOUD_POOL_OPTS} -X GET https://localhost:8443/pool/size

Once we see that we have 2 instances actually running, not just desired, we can look at their public IP addresses by inspecting the output of:

curl ${CLOUD_POOL_OPTS} -X GET https://localhost:8443/pool
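If you prefer not to eyeball the JSON by hand, a small helper can pull out the addresses. Note that the field names below (machines, machineState, publicIps) reflect our reading of the pool response; adjust them if the schema in the API documentation differs:

```python
import json

def running_ips(pool_json):
    """Extract public IPs of machines reported as RUNNING from a /pool response."""
    pool = json.loads(pool_json)
    return [ip
            for machine in pool.get("machines", [])
            if machine.get("machineState") == "RUNNING"
            for ip in machine.get("publicIps", [])]

# Example with a trimmed-down, made-up response:
sample = '''{"machines": [
  {"id": "i-1", "machineState": "RUNNING", "publicIps": ["109.1.2.3"]},
  {"id": "i-2", "machineState": "PENDING", "publicIps": []}
]}'''
print(running_ips(sample))  # -> ['109.1.2.3']
```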

Log in to these instances using the SSH keys defined earlier, visit them with a web browser, register them with your load balancer, etc. They are yours.

Let’s assume that one of the VMs is misbehaving. Dig out its ID from the JSON document returned earlier, and use it to terminate the machine, ordering a replacement:

curl ${CLOUD_POOL_OPTS} -X POST https://localhost:8443/pool/MACHINE_ID_GOES_HERE/terminate --header "Content-Type: application/json" -d '{ "decrementDesiredSize": false }'

The machine should be terminated, and a replacement provisioned to take its place.

When we feel like we have used our VMs enough, let’s shut them down by decreasing our desired size to 0:

curl ${CLOUD_POOL_OPTS} -X POST https://localhost:8443/pool/size --header "Content-Type: application/json" -d '{ "desiredSize": 0 }'

Re-run the repeated query for the pool size and watch the VMs disappear shortly, or view it from the City Cloud control panel. The cloud pool also releases any allocated floating IPs.

Please see the entire cloud pool API documentation for other methods to invoke, including setting service state and membership status, as described in our previous blog post.


In this blog post, we have shown how to obtain, build, configure, run, and use the open source elastisys cloud pool for City Cloud. We hope that it has been informative, and if you have any questions, ask us in the comments below or feel free to contact us.
