Posts

Monitoring with statsd and CloudWatch

For organizations on AWS looking for a monitoring solution, CloudWatch is an attractive choice. EC2 instances and services come with built-in CloudWatch monitoring, and alerts can be routed to email or text messages via SNS. I recently had the opportunity to set up a new monitoring system for a client, backed by CloudWatch. It provided an interesting challenge, since the project called for monitoring both application and system metrics.

My goal was to route all monitoring information through the same medium and store it in the same backend. While it is possible to provide application monitoring via statsd and system monitoring through something like collectd, I felt it would be a cleaner solution to send all data over statsd and store it all in CloudWatch.
statsd

Developers really like working with statsd. It provides an easy, well-supported way to write metrics out to a pluggable backend. For example, the Python statsd module from pip allows you to log metrics like this:

impor…
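
The excerpt above is truncated, so here is a rough sketch of what typical usage with the statsd pip package looks like; the host, port, prefix, and metric names are placeholders for illustration, not the author's actual code:

import statsd

# Send metrics to a local statsd daemon (host/port are illustrative defaults).
client = statsd.StatsClient('localhost', 8125, prefix='myapp')

client.incr('logins')            # counter
client.timing('db.query', 320)   # timer, in milliseconds
client.gauge('queue.depth', 12)  # gauge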

Flask-extension management

When your Python Flask application reaches a certain size, or starts to use a number of different Flask extensions, you're bound to start running into issues with how the application is initialized.

To review, the factory pattern states that you should use a function to create your Flask application object, like so:

from flask import Flask

def create_app():
    app = Flask(__name__)
    # do stuff
    return app

But adding in extensions can get problematic:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_mail import Mail

def create_app():
    app = Flask(__name__)
    db = SQLAlchemy()   # extension instances created inside the factory
    mail = Mail()
    return app

Coming from a small Flask app that set up the app directly, at first I thought: this is great! I can re-create my app object for each set of tests. I can test initializing it in different ways if I need to. But as soon as I added in the extensions, all my tests started failing.

It turns out that naively adding in extensions this way is bad. We don't want application-specific state to be kept in the extension objects; we want the extension objects to be usable by multiple apps. An…
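
Since the excerpt is cut off, here is a minimal sketch of the approach the Flask docs recommend: create the extension objects at module level and bind them to a specific app inside the factory with init_app(). This assumes Flask-SQLAlchemy and Flask-Mail, matching the earlier example, and is not the author's exact code:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_mail import Mail

# Extension objects are created once, unbound to any particular app.
db = SQLAlchemy()
mail = Mail()

def create_app():
    app = Flask(__name__)
    # Bind the extensions to this app instance inside the factory.
    db.init_app(app)
    mail.init_app(app)
    return app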

Creating an Intermediate Flask Application

Before I dive in, here's a little background for those who aren't familiar with what Flask is.
Flask is a microframework for developing Web applications. It doesn't have all the bells and whistles of Django or Ruby on Rails; it just includes the basics. It provides an API for request routing, sessions, templating, etc., but doesn't include a way to generate/validate forms, MVC scaffolding, or an ORM out of the box.

That isn't to say that you can't get these features under Flask. In fact, Flask has a number of extensions that can provide many of the features you get with a more heavyweight solution. But they are optional.

None of this information is new. It's culled from disparate sources I found on the Web when learning Flask. I recommend the following, all of which I learned a great deal from:
Official Flask Tutorial
The Flask Mega Tutorial
Explore Flask
Testing Flask Applications

Disclaimer: I am not an expert! These posts are from the point of view of someone…

A hack to configure workstations with chef

One of the things that I used to hear a lot from people was, "How can we use chef to configure developer workstations?" or other kinds of unmanaged hosts.
Background

Most of the solutions I have come across don't fit my use case for one of the following reasons:
They require adding the workstation as a node to the chef server. This adds unnecessary complexity.
They use chef-solo instead of chef-zero, which is less flexible and harder to test.
They don't leverage Berkshelf for cookbook management, which makes it harder to customize things.

Since I'm starting to get back into chef development after a long break, I thought it would be a fun problem to tackle. In particular, I wanted to see if I could use chef-zero in a vacuum, without requiring the user to do much setup.
My Solution

I use chef-zero and Berkshelf to execute chef-client locally. This can be done in a couple of commands on a workstation or "dumb" VM. Assuming a Debian or Debian-derivative host:
apt-ge…

RightScale Open Sources Chimp!

I'm very pleased to announce that my employer RightScale has open-sourced the chimp program I've been working on!

The chimp is a command-line program that allows users of the RightScale platform to execute scripts on their cloud servers. It also includes advanced error handling, concurrent execution, and a simple orchestration framework for large jobs. Check out my presentation from RightScale Compute 2013 for a demo.

On the RightScale operations team, we use chimp to fully automate our releases. We use Ruby rake files to group chimp commands together into jobs so that we can execute batches of commands at once. A release is then made up of these jobs.

The chimp is written in Ruby and distributed as a Ruby gem. The source is on GitHub.

A Grand Adventure: compiling transmission on my home router

My home router is running a Linux firmware that uses Optware for package management. For various reasons I needed to get Transmission 2.42 running on it. The only pre-built packages I could find were for versions 2.41 and 2.61, so I decided to embark on the grand adventure of compiling C programs on my router.

The ipkg can be downloaded here: transmission_2.42-1_mipsel.ipk

For anyone not familiar with Optware, it's essentially the Debian packaging system set up to install all packages into the /opt directory. This is useful for embedded systems (such as routers) where the root filesystem is backed by NVRAM. The /opt directory can be placed on a flash drive or hard drive to avoid writing to NVRAM too often.

Setting Out For GCC

Understanding how C/C++ development is done under UNIX systems is a key piece of knowledge for systems administrators. The finer points of C programming are not as important in practice, but it behooves nerdlings embarking on the great UNIX Odyssey to know how thing…

execnofd.c: run a program on Linux after closing all fds

Hacked this up to solve a little problem at work. All it does is close all file descriptors above 2 and exec the program specified on the command line. You can use it like this:

execnofd /usr/bin/foo
Here's the gist: