Ye Olde Monolith-e

This entry is part of the “Deployment Quest” series.

Let’s get started by walking through deploying a single monolithic application to a single server. No fancy deployment tools, no containers, etc. For this exercise, we’ll stick to traditional basics.

We’re also going to make a few basic assumptions. First, you already have a server sufficiently powered to run this application and all of the necessary dependencies. Second, there is a network connection available, allowing us to download relevant software and allowing users to access the application once it’s up and running. We will cover a few network-related topics, but we’ll be skipping over subjects such as subnetting, routing, switching, etc.

Step one is to prepare the server itself. To start, we’re going to need an operating system. I’m a Linux guy, so we’ll be using Linux throughout this whole series. It’s possible to deploy applications on Windows, and there are a lot of people who do. I don’t trust Windows enough to run services on it, so I’ll happily stick to Linux.

There are a lot of different Linux distributions and we’re going to need to choose one. Taking a quick look at a top ten list, you’ll see some pretty well-known names such as Ubuntu, Red Hat, CentOS, SUSE, etc. I’m partial to Red Hat and CentOS, so let’s use CentOS as our base.

Awesome. So we have an OS, which we’ll just install and get rolling. But wait, how are we going to configure the OS? Are we using the default drive layout? Are we going to customize it? What about the software packages that are installed? Do we just install everything so we have it in case we ever need it?

The answer to these questions depends, somewhat, on what you’re trying to accomplish. By default, most distributions seem to just dump all the space into the root drive, with a small carve-out for swap. This provides a quick way to get going, but can lead to problems down the road. For instance, if a process spins out of control and writes a lot of data to the drive, the drive can fill up, degrading services or, worse, crashing the system. It also makes it harder to rebuild a server, if needed, as the entire drive needs to be reformatted rather than specific mount points.

My recommendation here would be to split up the drive into reasonable chunks. Specifically, I tend to create mount points for /home and /var/log at a minimum. Depending on the role of the server, it may be wise to create mount points for /tmp and /var/tmp as well to ensure temporary files don’t cause issues. You’ll also likely need a mount point for the application you’re deploying. I tend to put software in /opt and, for web-based applications, /var/www. Ultimately, though, drive layouts tend to be personal choices.
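To make that a bit more concrete, here’s a rough sketch of what such a layout might look like in /etc/fstab. The volume names, filesystem choices, and mount options are purely illustrative assumptions; size each filesystem according to your own disk and workload:

# Illustrative /etc/fstab layout for a small web application server
/dev/mapper/vg0-root      /          xfs   defaults                      0 0
/dev/mapper/vg0-home      /home      xfs   defaults,nodev                0 0
/dev/mapper/vg0-varlog    /var/log   xfs   defaults,nodev                0 0
/dev/mapper/vg0-tmp       /tmp       xfs   defaults,nodev,nosuid,noexec  0 0
/dev/mapper/vg0-opt       /opt       xfs   defaults                      0 0
/dev/mapper/vg0-www       /var/www   xfs   defaults,nodev                0 0
/dev/mapper/vg0-swap      swap       swap  defaults                      0 0

Using LVM volumes like this also gives you some flexibility to grow a mount point later if you guessed wrong on the sizes.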

Next up, packages. Most installers provide a minimal install and that would be my recommendation. Adding new packages is relatively easy, while removing packages can often be a time-consuming process. Sure, you can simply remove a single package, but ensuring that all unused dependencies are uninstalled as well is often a fool’s errand. The purpose here is to ensure that you have what you need to run the application without adding a lot of extra packages that take up space, at best, and provide attackers with tools they can use, at worst.
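On CentOS, for example, the package trimming boils down to a handful of yum commands. The package names below are just examples for a hypothetical PHP web application; substitute whatever your own application actually requires:

# See what the minimal install left you with
yum list installed

# Add only what the application needs
yum install httpd php php-mysqlnd

# Drop anything you know you won't use
yum remove cups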

Take the time to go through all of the applications that run on startup. Are you sure you need to have cups running? What about portmap? Disable anything you’re not using, and go the extra mile to remove those packages from the system. You’ll also want to make a decision on security features such as SELinux. Yes, it’s complicated and can cause headaches, but the benefits are significant. I highly recommend running SELinux, or at least trying to deploy your application with it enabled first before deciding to remove it.
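On a systemd-based CentOS release, reviewing and trimming startup services looks roughly like this. The services shown are only examples; check your own list before disabling anything:

# List everything enabled at boot
systemctl list-unit-files --state=enabled

# Disable and stop a service you don't need, then remove its package
systemctl disable --now cups
yum remove cups

# Verify SELinux is actually enforcing
getenforce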

Finally, you need to configure the network connection on the server. The majority of this is left as an exercise for the reader, but I will highlight a few things. Security is important and you’ll want to protect your server and the assets on it. To that end, I highly recommend looking into some sort of firewall. CentOS ships with iptables, which can handle that task for you, but you can also use a network firewall. Additionally, look into properly segmenting your network. This makes more sense for multi-server deployments, however, and doesn’t necessarily apply to this specific example.
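As a minimal sketch, an iptables policy for a single web server might allow established traffic, loopback, SSH, and HTTP/HTTPS, and drop everything else. Treat this as a starting point rather than a complete ruleset, and adjust the ports to whatever your application actually uses:

# Allow return traffic, loopback, SSH, and web traffic; drop the rest
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -P INPUT DROP

# Persist the rules across reboots (requires the iptables-services package)
service iptables save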

Spend the extra time to test that the network connection works. Doing this now before you get your application installed and running can save some headaches down the road. Can you ping from the server to the local network? How about to the Internet? Can you connect to the server from the local network? How about the Internet? If you cannot, then take the time to troubleshoot now. Check your IP, subnet, and firewall settings. Remember, ping uses ICMP while HTTP and SSH use TCP. It’s possible to allow one and not the other.
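A quick sanity check might look something like the following. The gateway address and hostname are assumptions; substitute your own:

# Verify addressing and routing
ip addr show
ip route show

# Local network first, then the Internet
ping -c 3 192.168.1.1
ping -c 3 8.8.8.8

# DNS resolution plus an actual TCP connection, since ICMP alone proves little
ping -c 3 example.com
curl -I http://example.com

# Confirm which ports the server is actually listening on
ss -tlnp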

Now that we have a server with a working operating system and network, we should be ready to deploy our application. While different applications tend to be unique in how they’re deployed, there are a number of common tasks you should be looking at.

From an operational standpoint, ensure that the mount points your application is installed on have sufficient space for both the application and any temporary and permanent data that will be written. Some applications write log files, and you’ll want to ensure those are written to a location where they can be managed appropriately. You’ll also want to make sure log rotation is configured so the logs don’t grow endlessly or become too large to manage.
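If the application writes its own log files rather than using syslog, a small logrotate snippet handles this nicely. The path and retention below are assumptions for a hypothetical application installed under /opt/myapp:

# /etc/logrotate.d/myapp -- rotate a hypothetical application's logs daily
/opt/myapp/logs/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}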

On the security end of things, there are a number of items to look out for. Check the ownership of the files you’ve deployed and ensure they’re owned by a user with only the privileges necessary to run the application. You’ll also want to check the SELinux labels to ensure the files carry the correct contexts. Finally, check the user your application is running as. Again, you want this to be a user with the least privileges necessary to run the application.
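In practice, that boils down to a few commands along these lines, using a hypothetical myapp service account and an application deployed under /var/www/myapp:

# Create an unprivileged service account with no login shell
useradd -r -s /sbin/nologin myapp

# Restrict file ownership and permissions to what the application needs
chown -R root:myapp /var/www/myapp
chmod -R o-rwx /var/www/myapp

# Review the SELinux labels and reset them to the policy defaults if needed
ls -Z /var/www/myapp
restorecon -Rv /var/www/myapp

# Confirm which user the application process is actually running as
ps -o user,comm -C myapp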

The goal is to ensure that if an attacker is able to get access to the server, they end up with a user account that has insufficient privileges to do anything malicious. SELinux assists here in that the user will be prevented from accessing anything outside the scope of the contexts they’ve been assigned.

And now, with all of this in place, test the application and debug accordingly. Congrats, you have a running application that you can build on in the future.

So, what have we accomplished here? And what are the pros/cons of deploying something like this?

We’ve deployed a simple application on a single server with some security in place to prevent attackers from gaining a foothold on the system. There’s a limit to how secure we can make this, though, since it’s a single server.

On the positive side, this is a very simple setup. A single server to manage, only one ingress and egress point, and we’ve minimized the packages installed on the system. On the other hand, if an attacker can gain a foothold, they’ll have access to everything. A single server is also a single point of failure, so if something goes wrong, your application will be down until it’s fixed.

A setup like this is good for development and can be a good starting point for hobbyist admins. There are more secure and resilient ways to deploy applications that we’ll cover in a future Deployment Quest entry.

Rising from the ashes

*cough* *cough*

Awfully dusty in here. Almost as if this place were abandoned. Of course, that was never the case, was it? Just a hiatus of sorts. A reprieve from the noise and the harshness of reality.

But it’s time, now. Time to whip this place back into shape. Time to put the pieces back together. Time to build something new and interesting.

I know it’s been a while, but it’s time to get back in the habit. I’ve learned a lot these past years and I want to start sharing it. Soon.

Boldly Gone

I have been and always shall be your friend.

It’s a sad day. We’ve lost a dear friend today, someone we grew up with, someone so iconic that he inspired generations. At the age of 83, Leonard Nimoy passed away. He will be missed.

It’s amazing to realize how much someone you’ve never met can mean to you. People larger than life, people who will live on in memory forever. For hours I’ve been continually moved by the outpouring of grief and love online for Leonard. He meant so much to so many, and his memory will live on forever.

Of all the souls I have encountered in my travels, his was the most… human.

Programming Note

In 2012 I posted a little over a dozen entries to this blog. I like to think that each entry was well thought out and time well spent. But only a dozen? That’s about one entry a month… I’d really like to do more.

So, new year, time to make some changes. I spent a lot of time judging whether each post was “worth the effort” and “long enough to matter.” I need to get past that. My goal is to start posting a number of smaller entries. I definitely want the quality to be there, but I want to avoid agonizing over each and every entry.

So here’s to a new year and more content!

Contemplating the Future

In 2005 I obtained a job at a regional ILEC as a Data Operations Technician. As part of this job, I took over development of one of the tools we used to diagnose customer DSL connections. Problem was, this tool was written in PHP, a programming language I was, as yet, unfamiliar with.

At the same time, I was also looking for a web-based tool I could use to keep track of various tasks. While there were a few open-source tools I could use, none had the features I was looking for. So I decided to write one myself, and to write it in PHP so I could learn the language better. In the end, I’m glad I did as PHP has become indispensable for writing web-based tools.

The tool I wrote was a web-based todo manager called phpTodo. Since the alpha release in 2005, I have released 7 more versions. Work on phpTodo has ebbed and flowed with time, often interrupted by work and life in general. In fact, the last formal release was made almost 5 years ago, bringing the current version up to 0.8.1. In 2009, I found out that phpTodo was being packaged and released with Fedora as well.

After releasing 0.8.1, I decided to switch from using categories to using tags, similar to how the blogging system I use, Serendipity, uses them. This required rewriting a good deal of the back end of the system, as well as making extensive changes to the front end. I also started using the Prototype and Scriptaculous JavaScript frameworks, and then later switched to jQuery. In all, a great deal of code has been rewritten.

I’m quite happy with the general feel of the new version I’ve been working on. While there is a good deal more code to be written, I’m confident there will be a code release soon enough.

I’ve been thinking a lot about the future of phpTodo and where I want to take it. When I originally started, I wrote the system such that I could see my todo list items via an RSS feed. At the time, I had a Blackberry phone and this worked brilliantly. Of course, this was purely a one-way feed with no way to update any todo items on the go. Since that time, I started working on a mobile view for the system, but stopped quickly after I realized how horrible working with WAP was. Fortunately, technology has progressed quickly since that time and WAP is no longer necessary. So, I’m considering working on a mobile version again.

A mobile version brings new challenges, however. It should be trivial to develop a mobile view that can be used while online, but my hope was to have an offline version as well that can be synchronized with the online version. One possibility is to develop an app that can be loaded onto a phone. That, of course, severely limits the platforms it can be run on. Another possibility is an HTML5 version, though that brings challenges of its own.

Another thought was to build a web service into phpTodo. The basic premise is an XML generator that, given a set of parameters, can supply an XML feed for external systems to use as input, paired with an XML parser that can receive data from external systems in order to update phpTodo data. I believe this can serve as the interface for the mobile view.

A web service can also be used to power another idea I had. I stumbled across the website of Brett Terpstra a while back and found a treasure trove of interesting ideas and useful code snippets. Among these is an obsession with recording notes to keep track of projects, interesting ideas, and helpful code snippets. Brett uses a number of custom scripts and software packages, most of which are exclusive to his platform of choice, OS X. To be honest, I find this incredibly intriguing, and potentially useful. So, I’ve been thinking about developing a command-line tool I can use to interact with phpTodo. A web service could make this a great deal easier.

I have no plans to stop working on the project and, in fact, I’m eager to keep moving forward. Since I rely on phpTodo for my own daily work, every improvement I make to the system benefits me directly. So overall, the future of phpTodo is bright.

Mega Fail

So this happened:

Popular file-sharing website Megaupload shut down
Megaupload shut down by feds, seven charged, four arrested
Megaupload assembles worldwide criminal defense
Department of Justice shutdown of rogue site MegaUpload shows SOPA is unnecessary
And then.. this happened:

Megaupload Anonymous hacker retaliation, nobody wins

And, of course, the day before all of this happened was the SOPA/PIPA protest.

Wow.. The government, right? SOPA/PIPA isn’t even on the books, people are up in arms over it, and then they go and seize one of the largest file sharing websites on the planet! We should all band together and immediately protest this illegal seizure!

But wait.. hang on.. Since when does jumping to conclusions help? Let’s take a look and see what exactly is going on here.. According to the indictment, this case went before a grand jury before any takedown was performed. Additionally, this wasn’t an all-of-a-sudden thing. Megaupload had been contacted in the past about copyright violations and failed to deal with them as per established law.

There are a lot of people who are against this action. In fact, the hacktivist group Anonymous decided to voice their displeasure by performing DDoS attacks against high-profile sites such as the US DoJ, MPAA, and RIAA. This doesn’t help things and may actually hurt the SOPA/PIPA protest in the long run.

Now, I’m not going to say that the takedown was right and just; there’s simply not enough information as of yet, and it may turn out that the government was dead wrong with this action. But at the moment, I have to disagree with those who point at this as an example of an illegal takedown. As a friend of mine put it, if the corner market is selling illegal bootleg videos, when they finally get raided, the store gets closed. Yes, there were legal uses of the services on the site, but the corner store sold milk too.

There are still many, many copyright and piracy issues to deal with. And it’s going to take a long time to deal with them. We need to be vigilant, and protesting when necessary does work. But jumping to conclusions like this, and then attacking sites such as the DoJ are not going to help the cause. There’s a time and a place for that, and I don’t believe we’re there yet.

Who turned the lights out?

You may have noticed that a number of websites across the Internet today have modified their look a bit. In many cases, the normal content of that site is unreachable. Why would they do such a thing, you may ask? Well, there are two proposed laws, SOPA and PIPA, that threaten what we, today, enjoy as the Internet. The short version of these laws is that, basically, if you’re found to have any material on your website that infringes copyright, you face having your website shut down, without due process, all of your advertising pulled, being stricken from search engines, and possible jail time. Pretty draconian. There are a number of places that can explain, in more detail, what the full text of the legislation says. If you’re interested, check out americancensorship.org or eff.org.

Or, you can check out this video, from ted.com, that explains the legislation and why it’s so bad.


If you’re coming here after the 18th of January, here are some images of the protests.

[Screenshots of the protest pages from Google, Wikipedia, and Wired.com]

Blacklisted!

Back in October of 2011, a bill was introduced in the House of Representatives called HR.3261, or the “Stop Online Piracy Act (SOPA).” Go take a look, I’ll wait. It’s a relatively straightforward bill, especially compared to others I’ve looked at. Hell, it’s only 15 pages long! And it’s going to kill the Internet.

Ok, ok.. It won’t *KILL* the Internet, but it does have the potential to ruin what we consider to be the Internet. Personally, I believe that if this passes, the Internet could become nothing more than a collection of business websites, at least in the US.

So how does this thing work? Well, it’s actually pretty straightforward. If your website is suspected of infringing on copyrighted material, your website is taken down, any advertising you have on your site is cut, and you are removed from search engines. But so what, you deserve it! You were breaking copyright law!

Not so fast. This applies to *any* content on your website. So if someone comments on a blog entry, or you innocently link to a website that infringes copyright, or other situations out of your control, you’re responsible. Basically, you have to police every single comment, link, etc. that appears on your website.

It’s even worse for service providers since they have to do the blocking. Every infringing site is blocked via DNS. And since the US doesn’t have control of all of DNS, and some infringing sites are not located in the US, this means we move into the realm of having DNS blacklist files. The ISP becomes the responsible party if they fail to block these sites, which in turn means more overhead for the ISP. Think you pay a lot for Internet access now?

So what can you do? Well, for one, you can contact your representative and tell them how insane this whole idea is. And you can protest SOPA itself by putting up a protest overlay on your site. There’s a GitHub project with all of the source code you need to add an overlay to your website. Or, if you have a Serendipity web blog, you can download the Stop SOPA plugin I’ve written.

Get out there and protest!

In Memoriam – Steve Jobs – 1955-2011

Somewhere in the early 1980s, my father took me to a bookstore in Manhattan. I don’t remember why, exactly, we were there, but it was a defining moment in my life. On display was a new wonder, a Macintosh computer.

Being young, I wasn’t aware of social protocol. I was supposed to be awed by this machine, afraid to touch it. Instead, as my father says, I pushed my way over, grabbed the mouse, and went to town. While all of the adults around me looked on in horror, I quickly figured out the interface and was able to make the machine do what I wanted.

It would be over 20 years before I really became a Mac user, but that first experience helped define my love of computers and technology.

Thank you, Steve.