Education, then and now…

When I was going through grade school, we learned how to subtract by borrowing. We learned how to add by counting. It was ok to use your fingers to count when we started, and I still use them occasionally today.

Flash forward a few years, and it seems like schools are teaching a whole new language. Is this the math I learned? Drawing counters? What makes this easier? Sure, I get that most people eventually learn to round to ten before adding, and it definitely makes things easier and faster, but trying to teach that right from the get-go?

If I, someone who knows math and excels at it, have trouble understanding these lessons, then how do the kids handle it? These seem like such convoluted methods of getting to the same answer! Take, for instance, this math paper from a Boing Boing employee’s daughter. Sure, I understand what they’re trying to do here, and it’s a trick we all learn. But starting out with this? Why?

My own children bring home math papers that use the oddest methods for adding and subtracting. My son brought home a math paper about subtraction and had trouble completing it. I tried to explain borrowing to him, and he looked at me like I had two heads. Apparently borrowing is no longer taught; instead, they use regrouping. They draw pictures to get the answer rather than using arithmetic.

Yeah, yeah. In the end, they do subtract, but why the need to draw pictures? Do we not have enough fingers? Can we not use them to figure out every subtraction problem? Regrouping is basically borrowing, but the concept seems so much more convoluted and difficult to explain.

To make matters worse, I tried to explain why he needed to borrow/regroup. “You see, the top number is smaller than the bottom number, so if you subtract them, you’ll get a negative number.” He floored me by asking what a negative number was. Seriously? I tried explaining the number line and while he understood, he explained that the teacher never taught such things.

I fear for our youth’s future. These are simple concepts. What happens when they get to the difficult stuff? Will they start relying on calculators and computers to multiply and divide because it’s “difficult” and they’d have to draw really big pictures?

What is our education system coming to?

 

Live and Learn

As most of us know, making changes in a live environment is something to be done with extreme care. In fact, best practice dictates that such changes be made during a maintenance window wherever possible. Of course, there are situations where changes are made outside of those maintenance windows, though these are generally emergencies in which services have failed or severe problems are being experienced.

Being human, though, there are also times when we make rash decisions, believing that our modifications are so insignificant that they can’t possibly cause problems. It’s times like this that the big red caution light should go off in our heads. Of course, we often ignore that light and move forward with our plan, certain that nothing will go wrong and that we’ll come out the other side as a hero. We’ll be the person who tweaked the service and made it run even smoother.

I was that person today. However, as I have a very good friend named Murphy, my insignificant change caused a wonderful chain reaction of events ending in a major service interruption. Go me.

In the end, we resolved the problem and life continues on. Thankfully I work with people who are willing to forgive the occasional mistake, provided it’s a learning experience and doesn’t happen again.

The point of all of this is to show that even the smallest, most insignificant-seeming change can result in catastrophic failure. Every change should be evaluated, tested, and scheduled for deployment whenever possible. And when you make that inevitable mistake, learn from it. We’re all human and will, at one time or another, view ourselves as invincible. We’re not. We make mistakes; we screw things up.

So, learn from my mistake, go make your own, learn and move on. As they say, Live and Learn.

 

Moral Outrage

I just caught this story over on Slashdot. The details are still quite fuzzy, but the general premise is that the new Modern Warfare 2 game has scenes of terrorism. This is a bit more than just a cut scene, though, as it appears the player is actively participating in the action. Keep in mind that this game is, as yet, unreleased. There is no real context here, so we don’t know why the player is participating in these acts. [Update] As I’m writing this, there have been some updates. Apparently Activision UK has authenticated the footage, claiming it’s an optional scenario.

Video footage is being taken down very rapidly by Activision, the publisher, so good luck finding live footage. At the time of writing, this site does have some rather fuzzy video.

The outrage is obviously centered on the terrorism portrayed in the video. From what I’ve seen, it looks like the player is participating in a terrorist act: killing civilians in an airport. The video shows mass carnage with no remorse: shooting people in the back, shooting civilians as they flee or attempt to help others, and tossing grenades into crowds. Here is a still shot from the video:

Let’s keep in mind, however, that we’re talking about a mature-rated game. This is NOT a game for children. But people are “outraged” anyway. And I have to ask, why? Let’s think about this for a bit. First, there have been claims throughout the years that games desensitize you to violence, that games are “murder simulators,” and other such insanity. Yet, after this footage was leaked, we hear outrage from gamers about the improper nature of it and how it makes some people feel uncomfortable. They wonder if Infinity Ward, the creators, are going too far.

I don’t see a problem here, though. This is a game designed to be as realistic as possible. And guess what: people die in real life. There are real terrorist acts, carried out by real terrorists. So viewing the action through the eyes of a terrorist is a perfectly valid way to tell a story. They do it in books, and they do it in movies; this just ups the interactivity a bit more. And if it makes the player feel uncomfortable in the process, then it’s doing a damn good job, isn’t it? How about we make the player think a bit? Let’s show them how things really are. Not everything is a fantasy game; we can use games as a teaching tool, too. Learning how terrorists see things, what choices they make, and how quickly they sometimes make them is important to understanding them.

I applaud Infinity Ward for their choices. From what I’ve read, it sounds like they’re handling it in a very responsible manner. I may pick this game up just to “speak with my wallet,” as it were, and to bring more innovation and daring to the industry. We need something new, something exciting, and yes, something that upsets our oh-so-delicate sensibilities. Bravo.

Pink!

Yup. Pink. I wrote about this last year. Unfortunately, it looks like the pinkforoctober.com site has vanished, but fear not! There’s National Breast Cancer Awareness Month!

The long and short of it is this: breast cancer is a problem. Some women die from breast cancer, and even survivors can suffer for the rest of their lives. Support breast cancer research. In fact, support cancer research in general. If we all work together, we can enjoy longer lives.

 

A tribute to Cosmos

I just ran across this today, thanks to Slashdot. It’s a tribute to Carl Sagan of Cosmos fame, put together by auto-tuning Sagan’s own dialogue. Auto-tuning is a method for changing the pitch of an existing sound. It is often used to “fix” mistakes musicians make when recording. More recently, it has been used to create entirely new works of art by shifting the pitch of recordings, sometimes to the point of distortion, and “tuning” them to follow a given melody. In the end, you end up with something like the following video:

You can download this video, or an MP3 of it, from the artist’s site. I also recommend checking out The Gregory Brothers and all of their auto-tuning goodness.

Centralized Firewall Logging

I currently maintain a number of Linux-based servers. The number keeps growing, and management becomes a bit unwieldy after a while. One move I’ve made is to start using Spacewalk for general package and configuration management, which has turned out to be a huge benefit; I highly recommend it for anyone in the same position as I am. Sure, Spacewalk has its bugs, but it’s getting better with every release. Kudos to Red Hat for bringing such a great platform to the public.

Another problematic area of management is the firewall. I firmly believe in defense in depth, and I have several layers of protection on these various servers. One of those layers is the iptables configuration on each server. Technically, iptables is the program used to configure the filtering ruleset for a Linux kernel with the ip_tables packet filter installed; for the purposes of this article, iptables encompasses both the filter itself and the tools used to configure it.

Managing the ruleset on a bunch of disparate servers can be a daunting task. There is often a set of “standard” rules you want to deploy across all of the servers, as well as specialized rules based on the role each server plays. The standard rules typically cover management subnets, common services, and special filtering to drop malformed packets, source-routed packets, and more. There didn’t seem to be an easy way to deploy such a ruleset, so I ended up rolling my own script to handle the configuration. I’ll leave that for a future blog entry, though.
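In the meantime, to give a flavor of what such a “standard” ruleset looks like, here is a minimal sketch. This is not the script mentioned above, just an illustration; the management subnet and the choice of SSH as the lone example service are placeholders.

#!/bin/bash
# Baseline iptables sketch; 10.0.0.0/24 and port 22 are placeholder
# values for illustration, not from any real deployment.
IPT=/sbin/iptables
MGMT_NET=10.0.0.0/24

$IPT -F INPUT
$IPT -P INPUT DROP

# Allow loopback and established/related traffic.
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop malformed packets (NULL and XMAS scans).
$IPT -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
$IPT -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

# Only the management subnet may reach SSH.
$IPT -A INPUT -p tcp -s $MGMT_NET --dport 22 -j ACCEPT

# Source-routed packets are best refused via sysctl rather than iptables:
# sysctl -w net.ipv4.conf.all.accept_source_route=0

# Role-specific rules get appended here on a per-server basis.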

In addition to centralized configuration, I wanted a way to monitor the firewall from a central location. There are several reasons for this. One of the major reasons is convenience. Having to wade through tons of logwatch reports, or manually access each server to check the local firewall rules is difficult and quickly reaches a point of unmanageability. What I needed was a way to centrally monitor the logs, adding and removing filters as necessary. Unfortunately, there doesn’t seem to be much out there. I stumbled across the iptablelog project, but it appears to be abandoned.

Good did come of this project, however, as it led me to look into ulogd. The ulog daemon is a userspace logger for iptables. The firewall can be configured to send security violations, accounting, and flow information to the ULOG target, and data sent to that target is picked up by ulogd and delivered wherever ulogd is configured to send it. Typically, this is a text file or an SQL database.
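As a minimal example of what this looks like in practice (the port, netlink group number, and prefix are arbitrary choices of mine, not values from any real config), a pair of rules like the following hands packets to ulogd before dropping them:

# Copy matching packets to netlink group 1 with a searchable prefix;
# ulogd listens on that group and logs them, then the next rule drops.
iptables -A INPUT -p tcp --dport 23 -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "DROP telnet: "
iptables -A INPUT -p tcp --dport 23 -j DROP

The prefix is handy later on, since it ends up in the log entry and makes it easy to tell which rule fired.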

Getting started with ulogd was a bit of a problem for me, though. Since I’m using a centralized management system, I need to ensure that any new software I install uses the proper package format, so my first step was to find an RPM version of ulogd. I could roll my own, of course, but why re-invent the wheel? Fortunately, Fedora has shipped with ulogd since about FC6. Unfortunately for me, however, I was unable to get the SRPM for the version that ships with Fedora 11 to install; I kept getting a cpio error. No problem, though; I just backed up a bit and downloaded a previous release. It appears that not much has changed, as ulogd 1.24 has been out for some time.

Recompiling the ulogd SRPM on my CentOS 5.3 system failed, however, complaining about linker problems, and there were additional errors when the configure script ran. So before I could get ulogd installed and running, I had to get it to compile. It took me a while to figure out, as I’m not a linker expert, but I came up with the following patch, which I added to the RPM spec file.

--- ./configure	2006-01-25 06:15:22.000000000 -0500
+++ ./configure	2009-09-10 22:37:24.000000000 -0400
@@ -1728,11 +1728,11 @@
 EOF

   MYSQLINCLUDES=`$d/mysql_config --include`
-  MYSQLLIBS=`$d/mysql_config --libs`
+  MYSQLLIBS=`$d/mysql_config --libs | sed s/-rdynamic//`

   DATABASE_DIR="${DATABASE_DIR} mysql"

-  MYSQL_LIB="${DATABASE_LIB} ${MYSQLLIBS} "
+  MYSQL_LIB="${DATABASE_LIB} -L/usr/lib ${MYSQLLIBS}"
   # no change to DATABASE_LIB_DIR, since --libs already includes -L

   DATABASE_DRIVERS="${DATABASE_DRIVERS} ../mysql/mysql_driver.o "
@@ -1747,7 +1747,8 @@
   echo $ac_n "checking for mysql_real_escape_string support""... $ac_c" 1>&6
   echo "configure:1749: checking for mysql_real_escape_string support" >&5

-  MYSQL_FUNCTION_TEST=`strings ${MYSQLLIBS}/libmysqlclient.so | grep mysql_real_escape_string`
+  LIBMYSQLCLIENT=`locate libmysqlclient.so | grep libmysqlclient.so$`
+  MYSQL_FUNCTION_TEST=`strings $LIBMYSQLCLIENT | grep mysql_real_escape_string`

   if test "x$MYSQL_FUNCTION_TEST" = x
   then

In short, this patch modifies the linker flags to add /usr/lib as a library directory and removes the -rdynamic flag, which mysql_config seems to emit errantly. It also changes how the script detects whether the installed version of MySQL provides the mysql_real_escape_string function. Both of these changes resolved my compile problems.
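For anyone following along, wiring a patch like this into a spec file uses the standard RPM mechanisms; the patch number and file name below are hypothetical, not the actual names from the SRPM:

# In the ulogd spec file (patch number and file name are hypothetical):
Patch10: ulogd-1.24-configure-mysql.patch

%prep
%setup -q
%patch10 -p0

# Then rebuild the binary package from the patched sources:
# rpmbuild -ba ulogd.spec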

After getting the software to compile, I was able to install it and get it running. Happily enough, the SRPM I started with included patches to add an init script as well as a logrotate script. This makes life a bit easier when getting things running. So now I had a running userspace logger as well as a standardized firewall. Some simple changes to the firewall script added ULOG support. You can download the SRPM here.

At this point, I have data being sent to both the local logs and a central MySQL database. Unfortunately, I don’t have any decent tools for manipulating the data in the database yet. I’m using iptablelog as a starting point and will expand from there. To make matters more difficult, ulogd version 2 seems to make extensive changes to the database structure, which I’ll need to keep in mind when building my tools.
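Even without proper tools, a simple query goes a long way. This sketch assumes the stock ulogd 1.x MySQL schema, where the ulog table stores ip_saddr as an unsigned integer and the rule prefix in oob_prefix; the database name and user here are hypothetical.

# Top ten source addresses by hit count, grouped by rule prefix.
# Assumes ulogd 1.x's default MySQL schema; "ulog"/"ulogdb" are
# placeholder credentials.
mysql -u ulog -p ulogdb <<'EOF'
SELECT INET_NTOA(ip_saddr) AS source, oob_prefix AS rule, COUNT(*) AS hits
FROM ulog
GROUP BY ip_saddr, oob_prefix
ORDER BY hits DESC
LIMIT 10;
EOF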

I will, however, be releasing them to the public when I have something worth looking at. Having iptablelog as a starting point should make things easier, but it’s still going to take some time. And, of course, time is something I have precious little of to begin with. Hopefully, though, I’ll have something worth releasing before the end of this year. Here’s hoping!

 

Inexpensive Two Factor Authentication

Two Factor authentication is a means by which a user’s identity can be confirmed in a more secure manner. Typically, the user supplies a username and password, the first factor, and then an additional piece of information, the second factor. In theory, providing this additional information proves the user is who they say they are. Two different types of factors should be used to maximize security.

There are three general types of factors that are used. They are as follows (quoting from Wikipedia):

  • Human factors are inherently bound to the individual, for example biometrics (“Something you are”).
  • Personal factors are otherwise mentally or physically allocated to the individual as for example learned code numbers. (“Something you know”)
  • Technical factors are bound to physical means as for example a pass, an ID card or a token. (“Something you have”)

While Two Factor authentication can be secure, the security is often compromised through the use of less secure second factors. For instance, many institutions use a series of questions as a second factor. While this is somewhat more secure than a username and password alone, these questions are often generic enough that the answers can be obtained through social engineering. This is really an example of using the same type of factor twice, in this case, Personal factors. Personal factors are, however, inexpensive to the institution requiring the security, and often free.

On the other hand, use of either Human or Technical factors is often cost prohibitive. Biometrics, for instance, requires some sort of interface to read the biometric data and convert it to something the computer can understand. Technical factors are typically physical electronic devices with a cost per device. As a result, institutions are unwilling to put forth the cost necessary to protect their data.

Banks, in particular, are unwilling to provide this enhanced security due to their large customer bases and the prohibitive cost of providing physical hardware. But banks might be willing to provide a more cost-effective second factor, if one existed. Australian inventor Matt Walker may have just such a solution.

Passwindow is a new authentication method consisting of a transparent window with seemingly random markings on it. The key is to combine these markings with similar markings displayed by the application requiring authentication. The markings are similar to the segments on an LED clock, and combining the two sources reveals a series of numbers, effectively creating a one-time password. Passwindow is thus a Technical factor, “something you have,” making it an excellent second factor. The following video demonstrates how Passwindow works.
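To make the overlay idea concrete in text as well, here is a toy sketch. This is purely illustrative and in no way Passwindow’s actual algorithm: each code position is treated as a seven-segment digit, the card carries a fixed subset of segments, and the server only ever picks digits whose segments include the card’s, displaying just the remainder.

#!/bin/bash
# Toy illustration of pattern overlay; NOT Passwindow's real algorithm.
# Bits 0-6 of each value represent segments a-g of a seven-segment digit.
SEG=(0x3F 0x06 0x5B 0x4F 0x66 0x6D 0x7D 0x07 0x7F 0x6F)

CARD_MASK=0x08   # hypothetical card pattern: the bottom segment only

# Only digits whose segments include the card's pattern are usable,
# so the visual union of card and screen forms a clean digit.
allowed=()
for d in 0 1 2 3 4 5 6 7 8 9; do
    (( (SEG[d] & CARD_MASK) == CARD_MASK )) && allowed+=($d)
done

digit=${allowed[RANDOM % ${#allowed[@]}]}
screen=$(( SEG[digit] & ~CARD_MASK ))

printf 'server picked digit: %d\n' $digit
printf 'screen segments:     0x%02X\n' $screen
printf 'card + screen union: 0x%02X\n' $(( screen | CARD_MASK ))

In this toy, the card’s segments alone are just fragments and the on-screen segments are incomplete; only the physical overlay of the two reveals the digit, which is the whole trick.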

What makes Passwindow so incredible is how inexpensive it is to implement. The bulk of the cost is in providing users with their portion of the pattern. This can easily be added to new bank cards as they are sent out, or provided on a second card until the customer requires a new one. There is sufficient space on existing bank cards to integrate a clear window with the pattern on it.

Passwindow seems to be secure for a number of reasons. It’s a physical device, something that cannot be socially engineered. In order for it to be compromised, an attacker needs to have a copy of the segment pattern on your specific card. While users generally have a difficult time keeping passwords safe, they are exceedingly good at keeping physical objects secure.

If an attacker observes the user entering the generated number, the user remains secure because the number is a one-time password. While it is theoretically possible for the same number to come up again, it is highly unlikely, and a well-written generator will produce patterns random enough that they cannot be predicted. Additional security can be added by having the user rotate the card into various positions, or by adding additional lines to the card.

If Passwindow gains traction, I can see it being integrated into most bank cards, finally providing a more secure means of authentication. It also brings an inexpensive second factor to the table, giving other institutions the ability to offer enhanced security. This is some pretty cool technology; I’m eager to see it in person.

 

Strange Anatomy …

This is some of the coolest art I’ve seen in a while, both imaginative and realistic. Just plain awesome. Some of it is for sale, too.

Ever wonder what was inside those balloon animals? How about the innards of a Lego minifig? Or even the Gingerbread Man? Now you’ll think twice before taking a bite out of one of those…

Jason Freeny, the artist, is an interface designer for a New York-based company. He previously worked for MTV, creating sets, props, artwork, and more, and he spent a brief stint as a toy designer. He also has a blog where he posts his latest artwork. Definitely some cool stuff. Be sure to check out the store on his site, too.