Thou Shalt Segment

This entry is part of the “Deployment Quest” series.

In the previous article, we discussed a very simple monolithic deployment: one server running all of the services necessary to make our application work. We discussed details such as drive layout, package installation, and some basic security controls.

In this article, we’ll expand that design a bit by deploying individual services and explain, along the way, why we do this. This won’t solve the single point of failure issues that we discussed previously, but these changes will move us further down the path of a reliable and resilient deployment.

Let’s do a quick recap first. Our single-server deployment includes a simple website application and a database. We can break this up into two, possibly three, distinct applications to be deployed. The database is straightforward. Then there’s the website itself, which we can split into a proxy server for security and the primary web server where the application itself will live.

Why a proxy server, you may ask? Well, our ultimate goals are security, reliability, and resiliency. Adding an additional service may sound counter-intuitive, but with a proxy server in front of everything we gain some additional security, as well as a means to load balance when we eventually scale out the application for resiliency and reliability purposes.

The security comes from adding an additional layer between the client and the protected data. If the proxy server is compromised, the attacker still has to move through additional layers to get to the data. Additionally, proxy servers rarely hold any sensitive data; i.e., there are no usernames or passwords stored on the proxy server.
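To make this concrete, here’s a minimal sketch of what that proxy layer might look like using nginx as a reverse proxy. The hostname, addresses, ports, and certificate paths are illustrative assumptions, not a prescription:

```
# Hypothetical reverse proxy configuration (nginx). The proxy terminates
# client TLS connections and forwards requests to the web server on the
# internal network; no application code or user credentials live here.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        # 10.0.2.10 is the assumed address of the web server in the web network
        proxy_pass http://10.0.2.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Note that the only secrets living on this box are the TLS keys; the application and its database credentials stay on the internal servers.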

In addition to breaking this deployment into three services, we also want to isolate those services into purpose-built networks. The proxy server should live in what is commonly called a DMZ network, the web server can go in a network designated for web servers, and the database belongs in a database network.

[Figure: simple network diagram showing the segmented networks, a DMZ, a web network, and a database network]

Keeping these services separate allows you to add additional layers of protection, such as firewall rules that limit access to each asset. For instance, the proxy server typically needs ports 80 and 443 open to allow HTTP and HTTPS traffic. The web server requires whatever port the web application is running on to be open, and the database server only needs the database port open. You can limit the source of the traffic as well: proxy servers are typically open to the world, but the web server only needs to be open to the proxy server, and the database server only needs to be open to the web server.
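As a rough sketch, assuming the proxy sits at 10.0.1.10, the web application listens on port 8080 at 10.0.2.10, and the database is MySQL on its default port 3306 (all of these are illustrative), the per-server rules might look something like this with iptables:

```
# On the proxy server: accept HTTP/HTTPS from anywhere.
iptables -A INPUT -p tcp --dport 80  -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# On the web server: accept application traffic only from the proxy.
iptables -A INPUT -p tcp -s 10.0.1.10 --dport 8080 -j ACCEPT

# On the database server: accept MySQL connections only from the web server.
iptables -A INPUT -p tcp -s 10.0.2.10 --dport 3306 -j ACCEPT

# Everywhere: allow established sessions, then drop everything else by default.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP
```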

With this new deployment strategy, we’ve increased the number of servers and added the need for a lot of new configuration, which increases complexity a bit. However, this provides us with a number of benefits. For starters, we have more control over the security of each system, allowing us to reduce the attack surface of each individual server. We’ve also gained the ability to place firewalls between the servers, limiting traffic to specific ports and, in certain instances, to specific source hosts.

Separating the services onto their own servers also opens the door to horizontal scaling. For instance, you can run multiple proxy and web servers, providing additional resiliency in case one or more servers fail. Scaling the proxy servers requires some additional network wizardry or a load-balancing device of some sort, but the capability is there. We’ll discuss horizontal scaling in a future post.

The downside of a deployment like this is the complexity and the additional overhead required. Instead of a single server to maintain, you now have three. You’ve also added firewalls to the mix, which also need maintenance. And there’s additional latency due to the network overhead of communication between the servers. This can be reduced through a number of techniques, such as caching, but it’s generally not an issue for typical web applications.
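As one example of offsetting that latency, the proxy itself can cache responses so repeat requests never have to cross the internal networks at all. Here’s a hedged nginx fragment; the cache zone name, sizes, and upstream address are assumptions:

```
# Hypothetical response cache at the proxy (fragment of the http {} context).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=100m;

server {
    listen 80;

    location / {
        proxy_cache appcache;
        proxy_cache_valid 200 301 60s;   # keep successful responses for 60 seconds
        proxy_pass http://10.0.2.10:8080;
    }
}
```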

The example we’ve used thus far is quite simplistic, and this is not necessarily a good strategy for a small web application deployment, but it provides an easy-to-understand example as we expand our deployment options. In future posts we’ll look at horizontal scaling and load balancing, and we’ll start digging into newer technologies such as containerization.

Rising from the ashes

*cough* *cough*

Awfully dusty in here. Almost as if this place were abandoned. Of course, that was never the case, was it? Just a hiatus of sorts. A reprieve from the noise and the harshness of reality.

But it’s time, now. Time to whip this place back into shape. Time to put the pieces back together. Time to build something new and interesting.

I know it’s been a while, but it’s time to get back in the habit. I’ve learned a lot these past years and I want to start sharing it. Soon.

Boldly Gone

I have been and always shall be your friend.

It’s a sad day. We’ve lost a dear friend today, someone we grew up with, someone so iconic that he inspired generations. At the age of 83, Leonard Nimoy passed away. He will be missed.

It’s amazing to realize how much someone you’ve never met can mean to you. People larger than life, people who will live on in memory forever. I’ve been continually moved for hours by the outpouring of grief and love online for Leonard. He has meant so much to so many, and his memory will live on forever.

Of all the souls I have encountered in my travels, his was the most… human.

Programming Note

In 2012 I posted a little over a dozen entries to this blog. I like to think that each entry was well thought out and time well spent. But only a dozen? That’s about one entry a month… I’d really like to do more.

So, new year, time to make some changes.. I spent a lot of time judging whether each post was “worth the effort” and “long enough to matter.” I need to get past that. My goal is to start posting a number of smaller entries. I definitely want the quality to be there, but I want to avoid agonizing over each and every entry.

So here’s to a new year and more content!

Contemplating the Future

In 2005 I obtained a job at a regional ILEC as a Data Operations Technician. As part of this job, I took over development of one of the tools we used to diagnose customer DSL connections. Problem was, this tool was written in PHP, a programming language I was, as yet, unfamiliar with.

At the same time, I was also looking for a web-based tool I could use to keep track of various tasks. While there were a few open-source tools I could use, none had the features I was looking for. So I decided to write one myself, and to write it in PHP so I could learn the language better. In the end, I’m glad I did as PHP has become indispensable for writing web-based tools.

The tool I wrote was a web-based todo manager called phpTodo. Since the alpha release in 2005, I have released 7 more versions. Work on phpTodo has ebbed and flowed with time, often interrupted by work and life in general. In fact, the last formal release was made almost 5 years ago, bringing the current version up to 0.8.1. In 2009, I found out that phpTodo was being packaged and released with Fedora as well.

After releasing 0.8.1, I decided to switch from using categories to using tags, similar to how the blogging system I use, Serendipity, uses them. This required rewriting a good deal of the back end of the system, as well as making extensive changes to the front end. I also started using the Prototype and Scriptaculous JavaScript frameworks, and later switched to jQuery. In all, a great deal of code has been rewritten.

I’m quite happy with the general feel of the new version I’ve been working on. While there is a good deal more code to be written, I’m confident there will be a code release soon enough.

I’ve been thinking a lot about the future of phpTodo and where I want to take it. When I originally started, I wrote the system such that I could see my todo list items via an RSS feed. At the time, I had a Blackberry phone and this worked brilliantly. Of course, this was purely a one-way feed with no way to update any todo items on the go. Since that time, I started working on a mobile view for the system, but stopped quickly after I realized how horrible working with WAP was. Fortunately, technology has progressed quickly since that time and WAP is no longer necessary. So, I’m considering working on a mobile version again.

A mobile version brings new challenges, however. It should be trivial to develop a mobile view that can be used while online, but my hope was to have an offline version as well that can be synchronized with the online version. One possibility is to develop an app that can be loaded onto a phone. That, of course, severely limits the platforms it can be run on. Another possibility is an HTML5 version, though that brings challenges of its own.

Another thought was to build a web service into phpTodo. The basic premise is an XML generator that, given a set of parameters, can supply an XML feed for external systems to use as input, plus an XML parser that can receive data from external systems in order to update phpTodo data. I believe this can serve as the interface for the mobile view.
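As a rough sketch of the generator half, something like the following could emit such a feed. To be clear, the table layout, column names, and the ?tag parameter here are hypothetical illustrations, not the actual phpTodo schema:

```
<?php
// feed.php -- hypothetical XML feed endpoint for todo items.
$db = new PDO('sqlite:phptodo.db');

// Optional ?tag= parameter filters the feed, mirroring the tag-based design.
$tag = isset($_GET['tag']) ? $_GET['tag'] : null;

$sql = 'SELECT id, title, due_date FROM todo_items';
$params = array();
if ($tag !== null) {
    $sql .= ' WHERE id IN (SELECT item_id FROM item_tags
                JOIN tags ON tags.id = item_tags.tag_id
                WHERE tags.name = ?)';
    $params[] = $tag;
}

$stmt = $db->prepare($sql);
$stmt->execute($params);

$xml = new SimpleXMLElement('<todos/>');
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $item = $xml->addChild('item');
    $item->addChild('id', (string) $row['id']);
    $item->addChild('title', htmlspecialchars($row['title']));
    $item->addChild('due', (string) $row['due_date']);
}

header('Content-Type: application/xml');
echo $xml->asXML();
```

The parser half would be the mirror image: accept a POSTed XML document, validate it, and update the matching rows.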

A web service can also be used to power another idea I had. I stumbled across the website of Brett Terpstra a while back and found a treasure trove of interesting ideas and useful code snippets. Among these is an obsession with recording notes to keep track of projects, interesting ideas, and helpful code snippets. Brett uses a number of custom scripts and software packages, most of which are exclusive to his platform of choice, OS X. To be honest, I find this incredibly intriguing, and potentially useful. So, I’ve been thinking about developing a command-line tool I can use to interact with phpTodo. A web service could make this a great deal easier.

I have no plans to stop working on the project; in fact, I’m eager to keep moving forward. As I continue to rely on phpTodo for my daily work, I benefit directly from every improvement I make to the system. So overall, the future of phpTodo is bright.

Mega Fail

So this happened:

Popular file-sharing website Megaupload shut down
Megaupload shut down by feds, seven charged, four arrested
Megaupload assembles worldwide criminal defense
Department of Justice shutdown of rogue site MegaUpload shows SOPA is unnecessary
And then.. This happened:

Megaupload Anonymous hacker retaliation, nobody wins

And, of course, the day before all of this happened was the SOPA/PIPA protest.

Wow.. The government, right? SOPA/PIPA isn’t even on the books, people are up in arms over it, and then they go and seize one of the largest file sharing websites on the planet! We should all band together and immediately protest this illegal seizure!

But wait.. hang on.. Since when does jumping to conclusions help? Let’s take a look and see what exactly is going on here.. According to the indictment, this case went before a grand jury before any takedown was performed. Additionally, this wasn’t an all-of-a-sudden thing. Megaupload had been contacted in the past about copyright violations and failed to deal with them as per established law.

There are a lot of people who are against this action. In fact, the hacktivist group Anonymous decided to express its displeasure by performing DDoS attacks against high-profile sites such as the US DoJ, MPAA, and RIAA. This doesn’t help things and may actually hurt the SOPA/PIPA protest in the long run.

Now, I’m not going to say that the takedown was right and just; there’s simply not enough information as of yet, and it may turn out that the government was dead wrong with this action. But at the moment, I have to disagree with those who point at this as an example of an illegal takedown. As a friend of mine put it: if the corner market is selling illegal bootleg videos, when it finally gets raided, the store gets closed. Yes, there were legal uses of the services on the site, but the corner store sold milk too.

There are still many, many copyright and piracy issues to deal with. And it’s going to take a long time to deal with them. We need to be vigilant, and protesting when necessary does work. But jumping to conclusions like this, and then attacking sites such as the DoJ are not going to help the cause. There’s a time and a place for that, and I don’t believe we’re there yet.

Who turned the lights out?

You may have noticed that a number of websites across the Internet today have modified their look a bit. In many cases, the normal content of the site is unreachable. Why would they do such a thing, you may ask? Well, there are two proposed laws, SOPA and PIPA, that threaten what we, today, enjoy as the Internet. The short version is that if you’re found to have any material on your website that infringes copyright, you face having your website shut down without due process, all of your advertising pulled, your site stricken from search engines, and possible jail time. Pretty draconian. There are a number of places that explain, in more detail, what the full text of the legislation says. If you’re interested, check out americancensorship.org or eff.org.

Or, you can check out this video, from ted.com, that explains the legislation and why it’s so bad.


If you’re coming here after the 18th of January, here are some images of the protests.

[Image: Google’s protest page]

[Image: Wikipedia’s protest page]

[Image: Wired.com’s protest page]

Blacklisted!

Back in October of 2011, a bill was introduced in the House of Representatives called H.R. 3261, or the “Stop Online Piracy Act (SOPA).” Go take a look, I’ll wait. It’s a relatively straightforward bill, especially compared to others I’ve looked at. Hell, it’s only 15 pages long! And it’s going to kill the Internet.

Ok, ok.. It won’t *KILL* the Internet, but it has the potential to ruin what we consider to be the Internet. Personally, I believe that if this passes, it could turn the Internet into nothing more than a collection of business websites, at least in the US.

So how does this thing work? Well, it’s actually pretty straightforward. If your website is suspected of infringing on copyrighted material, it is taken down, any advertising you have on it is cut, and you are removed from search engines. But so what, you deserve it! You were breaking copyright law!

Not so fast. This applies to *any* content on your website. So if someone comments on a blog entry, or you innocently link to a website that infringes copyright, or some other situation outside of your control occurs, you’re responsible. Basically, you have to police every single comment, link, etc. that appears on your website.

It’s even worse for service providers, since they have to do the blocking. Every infringing site is blocked via DNS. And since the US doesn’t control all of DNS, and some infringing sites are not located in the US, this means we move into the realm of DNS blacklist files. The ISP becomes the responsible party if it fails to block these sites, which in turn means more overhead for the ISP. Think you pay a lot for Internet access now?
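For illustration, here’s roughly what that kind of ISP-side blocking could look like with BIND; the domain and file names are made up. The resolver claims authority for the blocked zone and answers every query with a dead end:

```
// named.conf fragment (hypothetical): override the blocked domain locally.
zone "infringing-example.com" {
    type master;
    file "/etc/bind/db.blackhole";
};
```

```
; db.blackhole: send every name in a blocked zone to a dead-end address.
$TTL 86400
@   IN  SOA  ns.isp.example. hostmaster.isp.example. ( 1 3600 900 604800 86400 )
@   IN  NS   ns.isp.example.
@   IN  A    127.0.0.1
*   IN  A    127.0.0.1
```

Multiply that by every blacklisted domain, plus the churn of keeping the list current, and the overhead for the ISP becomes obvious.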

So what can you do? Well, for one, you can contact your representative and tell them how insane this whole idea is. And you can protest SOPA itself by putting up a protest overlay on your site. There’s a GitHub project with all of the source code you need to add an overlay to your website. Or, if you have a Serendipity weblog, you can download the Stop SOPA plugin I’ve written.

Get out there and protest!

In Memoriam – Steve Jobs – 1955-2011

Somewhere in the early 1980s, my father took me to a bookstore in Manhattan. I don’t remember why, exactly, we were there, but it was a defining moment in my life. On display was a new wonder, a Macintosh computer.

Being young, I wasn’t aware of social protocol. I was supposed to be awed by this machine, afraid to touch it. Instead, as my father says, I pushed my way over, grabbed the mouse, and went to town. While all of the adults around me looked on in horror, I quickly figured out the interface and was able to make the machine do what I wanted.

It would be over 20 years before I really became a Mac user, but that first experience helped define my love of computers and technology.

Thank you, Steve.