Deploying an Instance with novaclient in Rackspace

There are a number of limitations in the current GUI when deploying a new instance in Rackspace (such as no option to attach a keypair to the deployed instance) - therefore I suggest you use the API.

One of the easiest methods I have found is to use the nova client.

Installing python-novaclient on Windows
Installing python-novaclient on Linux and Mac OS


Export the correct variables

Environment Variables
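(The original post showed a screenshot here.) As a sketch, the variables the nova client expects for Rackspace look something like the following - all values are placeholders, substitute your own account details, and `OS_AUTH_SYSTEM=rackspace` assumes the Rackspace auth plugin for novaclient is installed:

```shell
# Placeholder values - substitute your own Rackspace account details.
export OS_AUTH_SYSTEM=rackspace
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_USERNAME=myusername
export OS_PASSWORD=my_api_key       # your API key
export OS_TENANT_NAME=123456        # your account number
export OS_REGION_NAME=DFW
```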

List the available flavors (sizes)

Available Flavors
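(The flavor listing was originally a screenshot.) Running `nova flavor-list` prints a table along these lines - the exact flavors and columns depend on your account and region, so the rows below are illustrative only:

```shell
# nova flavor-list
+----+----------------+-----------+------+-------+
| ID | Name           | Memory_MB | Disk | VCPUs |
+----+----------------+-----------+------+-------+
| 2  | 512MB Standard | 512       | 20   | 1     |
| 3  | 1GB Standard   | 1024      | 40   | 1     |
| 4  | 2GB Standard   | 2048      | 80   | 2     |
+----+----------------+-----------+------+-------+
```

Flavor 3 is the one used in the boot example further down (1 vCPU, 1 GB RAM, 40 GB disk).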

List the available images (templates)

Available Images
(Output was truncated)
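(The truncated image listing comes from `nova image-list`.) The RHEL 6.4 image used in the boot command below appears in that output - something like this, though image IDs and names are account-specific:

```shell
# nova image-list
+--------------------------------------+------------------------------+--------+
| ID                                   | Name                         | Status |
+--------------------------------------+------------------------------+--------+
| 16e6c0ae-f881-4180-95b0-3450fe3f8e96 | Red Hat Enterprise Linux 6.4 | ACTIVE |
| ...                                  | ...                          | ...    |
+--------------------------------------+------------------------------+--------+
```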

Deploy a new instance

To deploy a new instance with 1 vCPU, 1 GB RAM and a 40 GB disk from the RHEL 6.4 template, run the following command (it must all be on one line!):

# nova boot --image 16e6c0ae-f881-4180-95b0-3450fe3f8e96 --key-name mykey --flavor 3 --no-service-net --no-public --nic net-id=0XXXX6b2-af20-4d31-8a1a-41abfa3b52ce mytest1

To break it down:

nova boot --image 16e6c0ae-f881-4180-95b0-3450fe3f8e96 - deploy an instance from the RHEL 6.4 template
--key-name mykey - use my set of keys
--flavor 3 - instance size
--no-service-net - do not connect to the Rackspace service network
--no-public - do not connect to the public Internet
--nic net-id=0XXXX6b2-af20-4d31-8a1a-41abfa3b52ce - connect to Internal_network
mytest1 - instance name
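Once the boot command returns, you can track the build with the commands below (a sketch - the instance name is taken from the example above):

```shell
# The instance status will move from BUILD to ACTIVE
nova list

# Full details for the new instance, including the IP on Internal_network
nova show mytest1
```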
More Rackspace articles are planned.


Is DevOps The Answer to Everything?

I was just watching a discussion on DevOps, Automation and Continuous Integration, and heard the following:

"Chef is a tool for doing infrastructure automation - config management, application deployment - all of that stuff" (Adam Jacob - CPO, Opscode)

Two things that I would like to discuss regarding that quote.

What is Infrastructure?

In my current role at Cisco - we have been discussing at great length what a platform actually is.

Have a look at the following diagram.


Where does the platform fit? What would you call the infrastructure? What is platform?

If you are a typical virtualization guy, you would most probably focus on the bottom half of the diagram. You would say that is the basis of any infrastructure - without the compute, storage, network and perhaps some of the other parts (the hypervisor or OS could actually float up or down between the two halves), you cannot build anything on top.

If you were to ask a typical developer - they would say it is all in the top half, because without the Java framework and the application foundations, the end product (what you are selling to the customer) would not work. You cannot develop applications unless you have the infrastructure underneath.

It all depends on your perspective.

At the moment, developers in the top half of the diagram are making great use of the abstraction of the underlying layers. They go to AWS, OpenStack or VMware and access an API.

API encapsulation

Developers do not care about the underlying layer - they will place a call to an API to provision a VM, and the whole underlying framework is abstracted away.
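As a hedged sketch of what such a call looks like against an OpenStack-style compute API (the endpoint, token and IDs below are placeholders, not real values):

```shell
# Boot a server with a single authenticated POST - everything underneath
# (scheduling, storage, networking) is abstracted behind the API.
curl -s -X POST "https://compute.example.com/v2/$TENANT_ID/servers" \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"server": {"name": "devbox1",
                     "imageRef": "<image-uuid>",
                     "flavorRef": "3"}}'
```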

That is why there is such massive use of Cloud for this kind of work. For a developer - just give them a VM and plop it down somewhere - what they really need is an IP and network connectivity to do their work.

But the IT guys are very worried about providing everything and piecing it all together to bring up that API and expose it to the end user. That is not so simple. Automating the deployment of any of the squares above is not a simple task.

SDDC and SDN are great buzzwords that are trying to define those things today.

Which brings me to the second topic.

Is DevOps the answer to everything?

Again - this will depend on your perspective.

From a developer's eyes - hell yes! I can deploy a full product as code - multiple times a day; it is reproducible, repeatable and exactly right up my street. I can make an operational change, roll it out to hundreds of instances, and if I find that I really messed things up - roll back that change with a small change to the code.

In principle I agree, except for one thing. Tools like Puppet or Chef can do this with what I would define as new-generation technologies: NoSQL databases, lightweight web applications, messaging frameworks. This is not valid for legacy, deeply-embedded enterprise applications - take MSSQL and Oracle for example.

I do not know of an easy-to-implement solution that will perform a database schema change and then easily roll it back with a quick change to a line of code - it is much more complicated than that. I am not saying it is not possible - it is - it just takes a lot more planning and thinking. But then again, how many of you are actually running Oracle or MSSQL in the cloud? Not many, I think.

Over to the IT guy…

Is DevOps the holy grail? Not by a long shot. Today there are very limited options available to deploy the bottom half of the stack. There are some modules that can build out an OpenStack deployment. Razor provides some of the functionality to deploy a vSphere infrastructure - but again, it is not a polished solution and still has a lot of room for improvement.

Do you know of a Puppet module that will deploy a UCS/HP chassis? Is there a Chef recipe that can provision a NetApp/EMC storage array? No, there is not - at least not today.

So if you were to ask me - DevOps is definitely not the answer to everything - at least not until it can handle the whole stack from top to bottom.

As Robbie Minshall said on the same discussion (39:00),

"What we are trying to do is tie the phases together…  You cannot have DevOps in one space and then have it hit an organizational barrier in another - where they do it completely differently or not at all"

For me, today this is not a theological problem; it is a technological one. There are very few (if any) tools today that can properly automate a full stack that spans both halves of the diagram above.

The vendors are moving into that direction - but are not quite there - yet.

(The original discussion is embedded below)

DevOps and Continuous delivery leveraging Cloud

Please feel free to add your thoughts or comments below.

Thanks to @dawoo for the vEXPERT Swag

I would like to thank Darren Woollard for taking the time to create and send over my new vEXPERT and iVirtualise sticker (even if it is spelled wrong :) )



If you are a vEXPERT - go on over and add your quote to his blog here - http://vexpert.me/sticker


Snowflakes in July?

Unless you live in a very specific part of the world, the chance that you will see snow (I mean the real fluffy stuff - not the artificial muck that people make) is really, really slim. But many of us deal with snowflakes each and every single day - and we do not even know it.

So where are all these snowflakes? (in July??)

They don't see each other. They only see what they want to see. They don't know they're dead snowflakes. How often do you see them? All the time. They're everywhere. (I love this quote)

So before I start, I suggest that you read this excellent post by @Martinfowler - SnowflakeServer.

… Great!! You are back.

Our datacenter is full of snowflakes. These snowflakes are our own doing. At least this was the case a few years ago, when builds were manual processes. Enterprises realized quite soon that this was not a healthy situation and looked for a way to standardize builds.

First we started with Norton Ghost images. Then came kickstart scripts for the Linux Servers, and RIS (and thereafter Windows Deployment Services) for the Windows boxes.

There was fiddling with drivers each time a new hardware model came out, a new NIC was introduced, a new HBA was added, etc. It was a challenge nonetheless, but a pretty automated process.

But for most (I can definitely speak for myself), once the OS was delivered, the application was usually not part of the build. Patches were installed up to a certain point - but then again, there was one build process for IBM hardware, one for HP and another for Vendor X/Y/Z.

So yes, we have a multitude of servers which are - yes, you guessed it - snowflakes. It is difficult to say that your servers are the same: they may be built the same, but they will never stay the same. I am not saying that they all should be - but it slowly but surely becomes a nightmare.

But hey… Wait !!! Virtualization solved this for me didn't it? I mean that was the whole idea of having templates wasn't it? A golden image that was the central point of deployment, had to be updated only in one location and from there on in - all VM's would be deployed exactly the same. Well yes - in a way this is true. This changed the snowflakes to some kind of a rubber stamp - exact copies, standardization and hardly any variation.

A couple of months back I was at DevOpsCon in Israel, and there I was enlightened to the fact that people used Puppet / Chef / Tool X/Y/Z for automated builds - a large part of which were for Continuous Integration or Continuous Delivery. And there something struck me.

Using an automated build system gave you the exact same OS each and every time.

But.. (and it is a big BUT) - this was only valid for your own environment.

Picture the following scenario. You need to deploy a system that is comprised of:

  1. Bare metal OS (heaven forbid)
  2. VMware VM's - on your environment
  3. vCloud VM's - either on your private cloud - or a Public one
  4. AWS VM's.

This is not a crazy scenario - complicated, yes, but quite viable.

Now let's examine how you would get the same OS down on to each of the above:

  1. Automated build - using Puppet / Chef /….
  2. Deploy from template - which could possibly have been built with one of the above tools - but I doubt it.
  3. Could be the same template that you have used in-house - but it is still deployed from a template.
  4. Deployed from an AWS image - or perhaps a template that was imported - again, I find that not really to be the case.

Here we have at least 3 - and perhaps 4 - different ways of getting the same OS down to the different locations and platforms.

Now we want to customize the OS's - let's say the hostname and IP address:

  1. Customize with Orchestration tools (Puppet/Chef)
  2. Use VMware Guest customization
  3. Use vCloud Guest Customization
  4. Customize with your Orchestration tools.

If you are lucky, you can use two methods. If not, perhaps 4. If you plan very well - and use the same method for all 4 - then you are in really good shape - but I assume that most of us are nowhere near that stage today.

Snowflakes… for OS deployment, Snowflakes for configuration, Snowflakes.. Snowflakes.

Enough of what we do not have, and now it's time to talk about how I would like things to be in the future (and I think you should too).

All your Operating systems should be deployed exactly the same way, configured exactly the same way and managed (yep)…. exactly the same way.

  • does not matter if they are physical or virtual
  • does not matter if they are VM's on your vSphere environment or on a vCloud environment
  • does not matter if they are on a VMware public or private cloud, AWS or an Openstack based cloud.

I do not want to have to manage 4 different kinds of deployments, one for each environment. I would like to have one build process that should be able to produce the exact same operating system (and yes, I know there will be differences depending on the hardware or underlying virtual hardware) but the process will be the same. Having four different kinds of snowflake families is better than having hundreds of snowflakes - but still not ideal.

The same Puppet module. The same Chef Recipe. The same automated build..

Are we there yet? No. Will we ever get there? I do not know - perhaps never.

One thing that immediately comes to mind: since Puppet is now partnered with VMware and is developing a number of projects specifically catered to VMware's needs (or so I hear), one thing that VMware could do is allow for the deployment and customization of VMware VM's with Puppet instead of using the guest customization API's. All of course integrated into vSphere.

Just a thought.

Please feel free to share your thoughts and comments below.