Friday, January 29, 2010

Server provisioning and other technical infrastructure

One of the other major tasks I've been focusing on the last few weeks is building some infrastructure. As we continue to expand our team, I've realized we need to do some work on our own technology landscape to enable that growth.

Some of the major pieces I see for the technical infrastructure:
  • Automating the process for building a new server with our major apps installed -- a process called server provisioning.
  • Using the same framework for all of our applications.
  • Building a library of common code that we will use across all of our apps.
Server automation

Through a series of events I won't go into, I've ended up doing most of the server administration for the web servers that the project application tool and the pulse run on. As a result, I've learned firsthand some of what it's like to administer a server that runs ruby on rails, such as managing installed packages and libraries, deploying changes to the server, and applying operating system security updates.

When we had students at Waterloo project, I helped many of them get set up for development. As a result, I learned even more about how to set up a computer for ruby on rails development.

I realized that documenting the setup procedure would save us time and result in cleaner and more consistent development environments. We used a wiki to document the process.

As I kept copying and pasting commands, I figured there had to be an easier way, and it turns out there are a number of libraries out there that will run the commands you specify to set up an environment. After doing some research, I settled on the moonshine provisioning library, which is developed and used by the commercial hosting company railsmachine. Moonshine is built on another tool called puppet, which is an industry standard in the provisioning area.

I've spent quite a bit of time now building a "manifest" that, when run on a compatible base install of debian or ubuntu linux, will set up everything needed to actually serve the project application tool and the pulse. I even got to submit a few patches to the open source moonshine library along the way.
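For a flavour of what this looks like, here's a minimal sketch of a moonshine manifest. The recipe names below come from moonshine's standard Rails stack; our actual manifest is more involved than this.

```ruby
# A moonshine manifest is a Ruby class that declares, rather than scripts,
# what should be installed on the server. Running it against a base
# debian/ubuntu install converges the machine to this description.
class ApplicationManifest < Moonshine::Manifest::Rails
  # Standard recipes shipped with moonshine: apache + passenger + mysql.
  recipe :apache_server
  recipe :passenger_gem, :passenger_apache_module, :passenger_site
  recipe :mysql_server, :mysql_gem, :mysql_database, :mysql_user

  # Rails housekeeping: install gem dependencies, create shared
  # directories, and run migrations on deploy.
  recipe :rails_gems, :rails_directories, :rails_migrations
end
```

Because the manifest is declarative, running it a second time is safe: recipes only change what's out of date, which is what makes rebuilding a server (or a development environment) so repeatable.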

The end result is that a new developer can set up their environment in about 45 minutes, after running a single command to start the process. I can also set up a new server in roughly the same time.

Having this server provisioning will save us time in a number of ways. There's the obvious time saved in setting up new systems. Without going into the technical details of why, having this server provisioning set up properly will also make it much easier to track down and fix bugs.

I've also built the server provisioning process with the international community in mind. I'm attempting to make it as easy as possible for other countries to adapt our scripts to their own environments.

More on the rest of the items later.

Friday, January 15, 2010

Pulse 1.1 gets released

Over the last few weeks I've been working hard on a number of tasks. The major ones: an update to the pulse, support raising, and building technical infrastructure. I've also spent time renewing my vision (along with all Campus for Christ staff) and managing interns.

I'll try to update this blog with details on the major tasks in the next few days. For now, I'll talk about the new pulse update, which we've been working on for a while. The two main features are (1) emailing within the pulse and (2) a restructuring of the campus teams that makes the pulse easier to browse for our staff.

Our staff can now use the pulse to email people in their Bible study group, or any subset of the directory returned from a search. We installed a library that runs tasks in the background so that the web page would still load as the emails get sent out. We used this feature to send 1500 emails (took about 15 minutes) to anyone who inputted their schedule last term, asking them to update their schedule again. This email feature was probably our #1 most-requested feature and it should save our staff a lot of time.
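The idea behind the background library is simple: the web request only queues the emails up and returns right away, while a separate worker actually sends them. This is an illustrative pure-Ruby sketch of that pattern, not the actual library we installed (the class and method names here are made up):

```ruby
require 'thread'

# Minimal sketch of background email delivery: the web request enqueues
# jobs and returns immediately; a worker outside the request cycle drains
# the queue and does the slow SMTP work.
class EmailQueue
  def initialize
    @jobs = Queue.new
  end

  # Called from the web request -- cheap, so the page still loads quickly.
  def enqueue(recipient, subject)
    @jobs << { recipient: recipient, subject: subject }
  end

  # Run by a background worker. Returns a log of what was "sent".
  def drain
    sent = []
    until @jobs.empty?
      job = @jobs.pop
      # Stand-in for actual mail delivery, which is the slow part.
      sent << "#{job[:subject]} -> #{job[:recipient]}"
    end
    sent
  end
end

queue = EmailQueue.new
queue.enqueue('staff@example.org', 'Please update your schedule')
queue.enqueue('student@example.org', 'Please update your schedule')
puts queue.drain.length  # prints 2
```

With 1500 emails at roughly one every half second, you can see why doing this inside the web request was never an option.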

The second change is that we've grouped campuses by staff teams. For example, Toronto Metro has Ryerson, U of T, and York. One of the first things I noticed after this change is that the directory page lists about 50-400 people (depending on the size of the c4c group at that campus) -- small enough for a single page. It really gives a sense of your specific campus and keeps it from being overwhelming. Stats also have much more meaning to staff now that they're for their specific campus.
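The restructuring itself is straightforward to picture as a grouping. Here's a toy sketch using Ruby's group_by; the "Toronto Metro" campuses are from this post, while the second team and the data structure are hypothetical:

```ruby
# Campuses rolled up under staff teams, as in the new pulse directory.
campuses = [
  { name: 'Ryerson',  team: 'Toronto Metro' },
  { name: 'U of T',   team: 'Toronto Metro' },
  { name: 'York',     team: 'Toronto Metro' },
  { name: 'Waterloo', team: 'Southwestern Ontario' },  # hypothetical team name
]

# group_by turns the flat list into { team name => list of campuses },
# which is exactly the shape a per-team directory page needs.
by_team = campuses.group_by { |c| c[:team] }

puts by_team['Toronto Metro'].map { |c| c[:name] }.join(', ')
# prints Ryerson, U of T, York
```

Once the directory is keyed by team, each page only ever loads one team's worth of people, which is why the lists stay small enough for a single page.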

Here's a video explaining ministry teams.

Here's the full changelog.

There are a number of smaller changes as well, like a "timetable last updated" timestamp. That's a pretty simple thing to program, but it added a lot of value for staff and students.

Overall, we deliberately kept this pulse update small enough not to overwhelm anyone with changes, while still packing in a number of really useful additions that came from requests and were driven by front-line needs. We really hope, and are confident, that it's going to help our staff be more effective in building a movement.