Network Enhanced Telepathy

I’ve recently been reading Wired for War by P.W. Singer and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me as not only something that sounds incredibly interesting, but also something that we’ll probably see hit the mainstream in the next 5-10 years.

According to Wikipedia, telepathy is “the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction.” In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn’t quite as “mystical” since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to “read” thoughts from the human mind. These methods are by no means perfect, but they are a start. As we’ve seen with technology across the board from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a deafening pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in-between, allowing communication between minds, is obvious. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today. Everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone’s brain. Today, security breaches can result in stolen data, denial of service, and, in some rare instances, destruction of property. The same issues could carry over to this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone’s mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current “thoughts” being transmitted? While access to current thoughts might not be as bad as exact data, it’s still possible this could be used to steal important data such as passwords, secret information, etc. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker was able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that’s a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I’ve seen social engineering talks wherein the presenter describes a technique to interrupt a person, mid-thought, and effectively create a buffer overflow of sorts, allowing the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person’s mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming, directly into the user.

How about Denial of Service attacks or physical destruction? Could an attacker cause physical damage to their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like Locked-In Syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature or heart rate, or even induce sensations in their target? These are truly scary scenarios and they warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow humans to take that next evolutionary step. But as with all technology, we should be looking at it with a critical eye. As technology and biology become more and more intertwined, it is essential that we tread carefully and be sure to address potential problems long before they become a reality.

Suspended Visible Masses of Small Frozen Water Crystals

The Cloud, hailed as a panacea for all your IT-related problems. Need storage? Put it in the Cloud. Email? Cloud. Voice? Wireless? Logging? Security? The Cloud is your answer. The Cloud can do it all.

But what does that mean? How is it that all of these problems can be solved by merely signing up for various cloud services? What is the cloud, anyway?

Unfortunately, defining what the cloud actually is remains problematic. It means many things to many people. The cloud can be something “simple” like extra storage space or email. Google, Dropbox, and others offer a service that allows you to store files on their servers, making them available to you from “anywhere” in the world. Anywhere, of course, if the local government and laws allow you to access the services there. These services are often free for a small amount of space.

Google, Microsoft, Yahoo, and many, many others offer email services, many of them “free” for personal use. In this instance, though, free can be tricky. Google, for instance, has algorithms that “read” your email and display advertisements based on the results. So while you may not exchange money for this service, you do exchange a level of privacy.

Cloud can also be pure computing power. Virtual machines running a variety of operating systems, available for the end-user to access and run whatever software they need. Companies like Amazon have turned this into big business, offering a full range of back-end services for cloud-based servers. Databases, storage, raw computing power, it’s all there. In fact, they have developed APIs allowing additional services to be spun up on-demand, augmenting existing services.

As time goes on, more and more services are being added to the cloud model. The temptation to drop self-hosted services and move to the cloud is constantly increasing. The incentives are definitely there. Cloud services are affordable, and there’s no need for additional staff for support. All the benefits with very little of the expense. End-users have access to services they may not have had access to previously, and companies can save money and time by moving services they use to the cloud.

But as with any service, self-hosted or not, there are questions you should be asking. The answers, however, are sometimes a bit hard to get. But even without direct answers, there are some inferences you can make based on what the service is and what data is being transferred.

Data being accessible virtually anywhere, at any time, is one of the major draws of cloud services. But there are downsides. What happens when the service is inaccessible? For a self-hosted service, you have control and can spend the necessary time to bring the service back up. In some cases, you may have the ability to access some or all of the data, even without the service being fully restored. When you surrender your data to the cloud, you are at the mercy of the service provider. Not all providers are created equal and you cannot expect uniform performance and availability across all providers. This means that in the event of an outage, you are essentially helpless. Keeping local backups is definitely an option, but oftentimes you’re using the cloud so that you don’t need those local backups.

Speaking of backups, is the cloud service you’re using responsible for backups? Will they guarantee that your data will remain safe? What happens if you accidentally delete a needed file or email? These are important issues that come up quite often for a typical office. What about the other side of the question? If the service is keeping backups, are those backups secure? Is there a way to delete data, permanently, from the service? Accidents happen, so if you’ve uploaded a file containing sensitive information, or sent/received an email with sensitive information, what recourse do you have? Dropbox keeps snapshots of all uploaded data for 30 days, but there doesn’t seem to be an official way to permanently delete a file. There are a number of articles out there claiming that this is possible, just follow the steps they provide, but can you be completely certain that the data is gone?

What about data security? Well, let’s think about the data you’re sending. For an email service, this is a fairly simple answer. Every email goes through that service. In fact, your email is stored on the remote server, and even deleted messages may hang around for a while. So if you’re using email for anything sensitive, the security of that information is mostly out of your control. There’s always the option of using some sort of encryption, but web-based services rarely support that. So data security is definitely an issue, and not necessarily an issue you have any control over. And remember, even the “big guys” make mistakes. Fishnet Security has an excellent list of questions you can ask cloud providers about their security stance.

Liability is an issue as well, though you may not initially realize it. Where, exactly, is your data stored? Do you know? Can you find out? This can be an important issue depending on what your industry is, or what you’re storing. If your data is being stored outside of your home country, it may be subject to the laws and regulations of the country it’s stored in.

There are a lot of aspects to deal with when thinking about cloud services. Before jumping into the fray, do your homework and make sure you’re comfortable with giving up control to a third party. Once you give up control, it may not be that easy to rein it back in.

Looking into the SociaVirtualistic Future

Let’s get this out of the way. One of the primary reasons I’m writing this is in response to a request by John Carmack for coherent commentary about the recent acquisition of Oculus VR by Facebook. My hope is that he does, in fact, read this and maybe drop a comment in response. <fanboy>Hi John!</fanboy> I’ve been a huge Carmack fan since the early ID days, so please excuse the fanboyism.

And I *just* saw the news that Michael Abrash has joined Oculus as well, which is also incredibly exciting. Abrash is an Assembly GOD. <Insert more fanboyism here />

Ok, on to the topic at hand. The Oculus Rift is a VR headset that got its public start with a Kickstarter campaign in September of 2012. It blew away its meager goal of $250,000 and raked in almost $2.5 million. For a mere $275 and some patience, contributors would receive an unassembled prototype of the Oculus Rift. Toss in another $25 and you received an assembled version.

But what is the Oculus Rift? According to the Kickstarter campaign :

Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.

In short, the Rift is the culmination of every VR lover’s dreams. Put a pair of these puppies on and magic appears before your eyes.

For myself, Rift was interesting, but probably not something I could ever use. Unfortunately, I suffer from Amblyopia, or Lazy Eye as it’s commonly called. I’m told I don’t see 3D. Going to 3D movies pretty much confirms this for me since nothing ever jumps out of the screen. So as cool as VR sounds to me, I would miss out on the 3D aspect. Though it might be possible to “tweak” the headset and adjust the angles a bit to force my eyes to see 3D. I’m not sure if that’s good for my eyes, though.

At any rate, the Rift sounds like an amazing piece of technology. In the past year I’ve watched a number of videos demonstrating the capabilities of the Rift. From the Hak5 crew to Ben Heck, the reviews have all been positive.

And then I learned that John Carmack joined Oculus. I think that was about the time I realized that Oculus was the real deal. John is a visionary in so many different ways. One can argue that modern 3D gaming exists largely because of the work he did in the field. In more recent years, his visions have aimed a bit higher with his rocket company, Armadillo Aerospace. Armadillo started winding down last year, right about the time that John joined Oculus, leaving him plenty of time to deep dive into a new venture.

For anyone paying attention, Oculus was recently acquired by Facebook for a mere $2 Billion. Since the announcement, I’ve seen a lot of hatred being tossed around on Twitter. Some of this hatred seems to be Kickstarter backers who are under some sort of delusion that makes them believe they have a say in anything they back. I see this a lot, especially when a project is taking longer than they believe it should.

I can easily write several blog posts on my personal views about this, but to sum it up quickly, if you back a project, you’re contributing to make something a reality. Sometimes that works, sometimes it doesn’t. But Kickstarter clearly states that you’re merely contributing financial backing, not gaining a stake in a potential product and/or company. Nor are you guaranteed to receive the perks you’ve contributed towards. So suck it up and get over it. You never had control to begin with.

I think Notch, of Minecraft fame, wrote a really good post about his feelings on the subject. I think he has his head right. He contributed, did his part, and though it’s not working out the way he wanted, he’s still willing to wish the venture luck. He may not want to play in that particular sandbox, but that’s his choice.

VR in a social setting is fairly interesting. In his first Oculus blog post, Michael Abrash mentioned reading Neal Stephenson’s incredible novel, Snow Crash. Snow Crash provided me with a view of what virtual reality might bring to daily life. Around the same time, the movie Lawnmower Man was released. Again, VR was brought into the forefront of my mind. But despite the promises of books and movies, VR remained elusive.

More recently, I read a novel by Ernest Cline, Ready Player One. Without giving too much away, the novel centers around a technology called the OASIS. Funnily enough, the OASIS is, effectively, a massive social network that users interact with via VR rigs. OASIS was the first thing I thought about when I heard about the Facebook / Oculus acquisition.

For myself, my concern is Facebook. Despite being a massively popular platform, I think users still distrust Facebook quite a bit. I lasted about 2 weeks on Facebook before having my account deleted. I understand their business model and I have no interest in taking part. Unfortunately, I’m starting to miss out on some aspects of Internet life since some sites are requiring Facebook accounts for access. Ah well, I guess they miss out on me as well.

I have a lot of distrust in Facebook at the moment. They wield an incredible amount of information about users and, to be honest, they’re nowhere near transparent enough for me to believe what they say. Google is slightly better, but there’s some distrust there as well. But more than just the distrust, I’m afraid that Facebook is going to take something amazing and destroy it in a backwards attempt to monetize it. I’m afraid that Facebook is the IOI of this story. (It’s a Ready Player One reference. Go read it, you can thank me later)

Ultimately, I have no stake in this particular game. At least, not yet, anyway. Maybe I’m wrong and Facebook makes all the right moves. Maybe they become a power for good and are able to bring VR to the masses. Maybe people like Carmack and Abrash can protect Oculus and fend off any fumbling attempts Facebook may make at clumsy monetization. I’m not sure how this will play out, only time will tell.

How will we know how things are going? Well, for one, watching how Facebook interacts with this new property will be pretty telling. I think if Facebook is able to sit in the shadows and watch rather than kicking in the front door and taking over, maybe Oculus will have a chance to thrive. Watching what products are ultimately released by Oculus will be another telling aspect. While I fully expect that Oculus will add some sort of Facebook integration into the SDK over time, I’m also hoping that they continue to provide an SDK for standalone applications.

I sincerely wish Carmack, Abrash, and the rest of the Oculus team the best. I think they’re in a position where they can make amazing things happen, and I’m eager to see what comes next.

Keepin’ TCP Alive

I was recently debugging an odd network issue that turned out to have a pretty simple explanation. A client on the network was intermittently experiencing significant delays in accessing the network. Upon closer inspection, it turned out that prior to the delay, the client was being left idle for long periods of time. With this additional information it was pretty easy to identify that there was likely a connection between the client and server that was being torn down for being idle.

So in the end, the cause of the problem itself was pretty simple to identify. The fix, however, is more of a conundrum. The obvious answer is to adjust the timers and prevent the connection from being torn down. But what timers should be adjusted? There are the keepalive timers on the client, the keepalive timers on the server, and the idle teardown timers on the firewall in the middle.

TCP keepalive handling varies between operating systems. If we look at the three major operating systems, Linux, Windows, and OS X, then we can make the blanket statement that, by default, keepalives are sent after two hours of idle time. But, most firewalls seem to have a default TCP teardown timer of one hour. These defaults are not conducive to keeping idle connections alive.

The optimal scenario for timeouts is for the clients to have a keepalive timer that fires at an interval lower than that of the idle TCP timeout on the firewall. The actual values to use, as well as which devices should be changed, are up for debate. The firewall is clearly the easier point at which to make such a change. Typically there are very few firewall devices that would need to be updated as compared to the larger number of client devices. Additionally, there will likely be fewer firewalls added to the network over time, so ensuring that timers are properly set is much easier. On the other hand, the defaults that firewalls are generally configured with have been chosen specifically by the vendor for legitimate reasons. So perhaps the clients should conform to the setting on the firewall? What is the optimal solution?

And why would we want to allow idle connections anyway? After all, if a connection is idle, it’s not being used. Clearly, any application that needed a connection to remain open would send some sort of keepalive, right? Is there a valid reason to allow these sorts of connections for an extended period of time?

As it turns out, there are valid reasons for connections to remain active, but idle. For instance, database connections are often kept for longer periods of time for performance purposes. The TCP handshake can take a considerable amount of time to perform as opposed to the simple matter of retrieving data from a database. So if the database connection remains established, additional data can be retrieved without the overhead of TCP setup. But in these instances, shouldn’t the application ensure that keepalives are sent so that the connection is not prematurely terminated by an idle timer somewhere along the data path? Well, yes. Sort of. Allow me to explain.

When I first discovered the source of the network problem we were seeing, I chalked it up to lazy programming. While it shouldn’t take much to add a simple keepalive system to a networked application, it is extra work. As it turns out, however, the answer isn’t quite that simple. All three major operating systems, Windows, Linux, and OS X, have kernel-level mechanisms for TCP keepalives. Each OS has a slightly different take on how keepalive timers should work.

Linux has three parameters related to TCP keepalives (example commands for adjusting them follow the list) :

tcp_keepalive_time
The interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
tcp_keepalive_intvl
The interval between subsequent keepalive probes, regardless of what the connection has exchanged in the meantime
tcp_keepalive_probes
The number of unacknowledged probes to send before considering the connection dead and notifying the application layer
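These correspond to sysctls under net.ipv4. As a rough sketch (the values here are purely illustrative, and remember that keepalives only apply to sockets that have SO_KEEPALIVE enabled), checking and adjusting them on a Linux host looks something like this :

sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
# send the first probe after 30 minutes of idle time instead of the default 7200 seconds
sysctl -w net.ipv4.tcp_keepalive_time=1800
# persist the change across reboots
echo "net.ipv4.tcp_keepalive_time = 1800" >> /etc/sysctl.conf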

OS X works quite similarly to Linux, which makes sense since they’re both *nix variants. OS X has four parameters that can be set (example sysctl commands follow the list).

keepidle
Amount of time, in milliseconds, that the connection must be idle before keepalive probes (if enabled) are sent. The default is 7200000 msec (2 hours).
keepintvl
The interval, in milliseconds, between keepalive probes sent to remote machines, when no response is received on a keepidle probe. The default is 75000 msec.
keepcnt
Number of probes sent, with no response, before a connection is dropped. The default is 8 packets.
always_keepalive
Assume that SO_KEEPALIVE is set on all TCP connections; the kernel will periodically send a packet to the remote host to verify the connection is still up.
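On OS X these are exposed as sysctls under net.inet.tcp. A quick sketch with illustrative values (I haven’t verified the exact names on every OS X release, so double-check on your system) :

sysctl net.inet.tcp.keepidle net.inet.tcp.keepintvl net.inet.tcp.keepcnt
# send the first probe after 30 minutes of idle time (value is in milliseconds)
sysctl -w net.inet.tcp.keepidle=1800000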

Windows acts very differently from Linux and OS X. Again, there are three parameters, but they perform entirely different tasks. All three parameters are registry entries (a sample of setting them follows the list).

KeepAliveInterval
This parameter determines the interval between TCP keep-alive retransmissions until a response is received. Once a response is received, the delay until the next keep-alive transmission is again controlled by the value of KeepAliveTime. The connection is aborted after the number of retransmissions specified by TcpMaxDataRetransmissions have gone unanswered.
KeepAliveTime
The parameter controls how often TCP attempts to verify that an idle connection is still intact by sending a keep-alive packet. If the remote system is still reachable and functioning, it acknowledges the keep-alive transmission. Keep-alive packets are not sent by default. This feature may be enabled on a connection by an application.
TcpMaxDataRetransmissions
This parameter controls the number of times that TCP retransmits an individual data segment (not connection request segments) before aborting the connection. The retransmission time-out is doubled with each successive retransmission on a connection. It is reset when responses resume. The Retransmission Timeout (RTO) value is dynamically adjusted, using the historical measured round-trip time (Smoothed Round Trip Time) on each connection. The starting RTO on a new connection is controlled by the TcpInitialRtt registry value.

There’s a pretty good reference page with information on how to set these parameters that can be found here.

We still haven’t answered the question of optimal settings. Unfortunately, there doesn’t seem to be a correct answer. The defaults provided by most firewall vendors seem to have been chosen to ensure that the firewall does not run out of resources. Each connection through the firewall must be tracked. As a result, each connection uses up a portion of memory and CPU. Since both memory and CPU are finite resources, administrators must be careful not to exceed the limits of the firewall platform.

There is some good news. Firewalls have had a one-hour TCP timeout timer for quite a while. As time has passed and new revisions of firewall hardware are released, the CPU has become more powerful and the amount of memory in each system has grown. The default one-hour timer, however, has remained in place. This means that modern firewall platforms are much better prepared to handle an increase in the number of connections tracked. Ultimately, the firewall platform must be monitored and appropriate action taken if resource usage becomes excessive.

My recommendation would be to start by setting the firewall TCP teardown timer to a value slightly higher than that of the clients. For most networks, this would be slightly over two hours. The firewall administrator should monitor the number of connections tracked on the firewall as well as the resources used by the firewall. Adjustments should be made as necessary.

If longer lasting idle connections are unacceptable, then a slightly different tactic can be used. The firewall teardown timer can be set to a level comfortable to the administrator of the network. Problematic clients can be updated to send keepalive packets at a shorter interval. These changes will likely only be necessary on servers. Desktop systems don’t have the same need as servers for long-term establishment of idle connections.

SSL “Security”

SSL, a cryptographically secure protocol, was created by Netscape in the mid-1990s. Today, SSL and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet.

SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing. Trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user cannot be confident that the remote system is the system it claims to be. This problem was partially solved through the use of a Public Key Infrastructure, or PKI.

PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued. The process could take several days. More recently, the bar for entry has been lowered significantly. Certificates are now issued through an automated process requiring only that the registrant click on a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry-level certificates are verified automatically via the click of a link. Higher-level SSL certificates have additional identity verification steps. And at the highest level, the Extended Validation, or EV, certificate requires a thorough verification of the registrant’s identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra level of verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between the various levels of SSL certificates is the identity details obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent here seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.
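If you’re curious what identity details a given site’s certificate actually carries, you can pull them with OpenSSL. A quick sketch, with the hostname as a placeholder :

openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

An EV certificate will typically list the full organization name and location in the subject, while a basic domain-validated certificate usually carries little more than the hostname.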

In the end, though, trust is the ultimate issue. Users have been trained to just trust a website with an SSL certificate. And trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the “Green Address Bar” means that the website is completely trustworthy. And they’ve been pretty effective. But, as with most marketing, they didn’t quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it’s still possible that the certificate is fake.

There have been a number of well-known CAs that have been compromised in recent years, Diginotar and Comodo being two of the more high-profile ones. In both cases, it became possible for rogue certificates to be created for any website the attacker wanted to hijack. That certificate plus some creative DNS poisoning and the attacker suddenly looks like your bank, or Google, or whatever site the attacker wants to be. And they’ll have a nice shiny green EV certificate.

So how do we fix this? Well, one way would be to use the certificate revocation system that already exists within the PKI infrastructure. If a certificate is stolen, or a false certificate is created, the CA has the ability to put the signature for that certificate into the revocation system. When a user tries to load a site with a bad certificate, a warning is displayed telling the user that the certificate is not to be trusted.
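For the curious, you can run this sort of check by hand with OpenSSL’s OCSP client. A rough sketch, assuming you’ve already saved the site’s certificate and its issuer’s certificate locally (the filenames and responder URL are placeholders) :

# the OCSP responder URL is usually embedded in the certificate itself
openssl x509 -noout -ocsp_uri -in site.pem
# ask the responder whether the certificate has been revoked
openssl ocsp -issuer issuer.pem -cert site.pem -url http://ocsp.example-ca.com -text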

Checking revocation of a certificate takes time, and what happens if the revocation server is down? Should the browser let the user go to the site anyway? Or should it block by default? The more secure option is to block, of course, but most users won’t understand what’s going on. So most browser manufacturers have either disabled revocation checking completely, or they default to allowing a user to access the site when the revocation site is slow or unavailable.

Without the ability to verify if a certificate is valid or not, there can be no real trust in the security of the connection, and that’s a problem. Perhaps one way to fix this problem is to disconnect the revocation process from the process of loading the webpage. If the revocation check happened in parallel to the page loading, it shouldn’t interfere with the speed of the page load. Additional controls can be put into place to prevent any data from being sent to the remote site without a warning until the revocation check completes. In this manner, the revocation check can take a few seconds to complete without impeding the use of the site. And after the first page load, the revocation information is cached anyway, so subsequent page loads are unaffected.

Another option, floated by the browser builders themselves, is to have the browser vendors host the revocation information. This information is then passed on to the browsers when they’re loaded. This way the revocation process can be handled outside of the CAs, handling situations such as those caused by a CA being compromised. Another idea would be to use short term certificates that expire quickly, dropping the need for revocation checks entirely.

It’s unclear as to what direction the market will move with this issue. It has been over two years since the attacks on Diginotar and Comodo and the immediacy of this problem seems to have passed. At the moment, the only real fix for this is user education. But with the marketing departments for SSL vendors working to convince users of the security of SSL, this seems unlikely.

Pebble Review

In April of 2012, a Kickstarter project was launched by a company aiming to create an electronic watch that served as a companion to your smartphone. A month later, the project exceeded its funding goal by over 100%, closing at over $10 million in pledges. Happily, I was one of the over 68,000 people that pledged. I received my Pebble about a month ago or so and I’ve been wearing it ever since.

The watch itself is fairly simple, a rectangular unit with an e-ink display, four buttons, and a rubberized plastic strap. The screen resolution is 144×168, plenty of pixels for some fairly impressive detail. The watch communicates with your mobile phone (Android or iPhone only) via a Bluetooth connection. All software updates and app installation occur over the Bluetooth connection. There is a 3-axis accelerometer as well as a pretty standard vibrating motor for silent alerts.

According to the official Pebble FAQ, battery life is 7+ days on a single charge, but this depends on your overall use of the device. The more alerts you receive, the more the backlight comes on, and the more apps you use on the device, the shorter your battery life.

Pebble is still in the process of building the initial run of watches for backers. Black watches, being the majority of the orders, were built first. Other colors are coming online in more recent weeks. Pebble has a website where interested parties can track how many pebbles have been built and shipped.

I’ve been pretty impressed with the watch thus far. Pebble has been fairly responsive to inquiries I’ve made, and they seem dedicated to making sure they have a top quality product. Of course, as is typical on the Internet, not everyone is happy. There seem to be a lot of complaints about communication, how long it’s taking to get watches, and about the features themselves.

It’s hard to say whether these complaints have any merit, though. For starters, I can’t imagine it’s a simple task to design and build 68,000 watches in a short period of time. And to complicate matters further, it seems that many backers of Kickstarter projects don’t understand the difference between being a backer and being a customer.

When you back a Kickstarter project, you’re pledging money to help start the project. As a “reward” for contributing, if the project is successful, you are entitled to whatever the project owners have designated for your level of contribution. The key part of this being, if the project is successful. Some projects take longer than others, and timelines often slip. That said, I’ve only been part of one Kickstarter that has failed, and even that one is being resurrected by other interested parties.

But there are some legitimate complaints, some that can be addressed, and others that likely won’t. For instance, I’ve noticed that with recent firmware releases, the battery life on my watch has dropped considerably. Based on communication with the developers, they are aware of this and are actively working to resolve it. I’m not sure what the problem is, exactly, but I’m confident they’ll have it fixed in the next firmware update.

The battery indicator is a source of frequent discussion. Right now, there’s no indicator of battery life until the battery is running low. And that indicator doesn’t show on the watchface, it only shows when you are in other menus. This, in my opinion, is a poor UI choice. I’d much rather see a battery indicator option available for the watchface itself.

Menu layout was also a frequent source of frustration for users. In previous firmware releases, you had to actively go to the watchface you wanted. Recent releases changed this so that the watchface was the default view and other screens were chosen as needed. The behavior of the navigation buttons on the watch was also updated to reflect this new choice.

So Pebble continues to improve over time. It’s an iterative process that will take some time to get right. I’m eager to see what future releases will bring. Next week, Pebble is scheduled to release the watch SDK, allowing users, for the first time, to start adding their own customizations to the watch.

The Pebble watch has a lot of potential. As the platform matures, I’m hoping to see a number of features I’m interested in come to fruition. Interaction between Pebble and other apps on iPhone devices would be a welcome addition. I would love to see an actigraphy app that uses the Pebble for sleep monitoring. From what I’ve read, sleep monitoring is even more accurate when the monitor is placed on the sleeper’s wrist. Seems like a perfect use for the Pebble.

I’d also like to see more of an open SDK, allowing users such as myself to write code for the Pebble. While I’m aware of the closed nature of the iPhone platform itself, it is still possible to add applications to the Pebble itself. I can’t wait to see what others build for this platform. Given a bit of time, I think this can grow into something even more amazing.

Customer Dis-Service

In general, I’m a pretty loyal person. Especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I’ll take a look, but unless there’s a compelling reason to change, I’ll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I’ve been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I’ve been using Verizon’s services for a long time. I’ve been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence which seemed to change from month to month. I’ve had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I’ve always been able to pay the bill and move on. Since that’s really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we’d been noticing that the 3M DSL service we had was starting to fall a bit short. Not Verizon’s fault at all, but the fault of an increased strain on the system at our house. Apparently 3M isn’t nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn’t offer anything faster than 3M in the area and, unfortunately, the only other service in the area is from a company that I’d rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill by a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the tension a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let’s dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install this little box in the house, and poof. Instant savings. Best part, that savings would be just about enough to get that second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required that you have a home phone line in order to have DSL service. This wasn’t a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added additional sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don’t know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice playing sounds like they were typing on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn’t get an operator unless I explicitly stated why I wanted an operator, but the automated system apparently couldn’t understand my request.

Ok, time out, walk away, try again later. The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I wanted to cancel my home phone and just keep the DSL. I also wanted to verify that I was not under contract so I wouldn’t end up with some crazy early termination fee. She explained that this was perfectly acceptable and that I could make these changes whenever I wanted. I verified again that I could keep the DSL without issue. She agreed, no problem.

Excellent! Off I went to the cell carrier, purchased (free with a contract) the new home phone box, and had them port the number. The representative cautioned that he saw DSL service listed when he was porting and suggested I contact Verizon to verify that the DSL service would be ok.

I called Verizon again to verify everything would work as intended. I explained what I had done, asked when the port would go through, and stressed that the DSL service was staying. The representative verified the port date and said that the DSL service would be fine.

You can guess where this is going, can’t you? On the day of the port, the phone line switched as expected. The new home phone worked perfectly and I made the necessary changes to the home wiring to ensure that the DSL connection was isolated away from the rest of the wiring. DSL was still up, phone ported, everything was great. Until the next morning.

I woke up the following morning and started my normal routine. Get dressed, go exercise, etc. Except that on the way to exercise, I noticed that the router light was blinking. Odd, I wondered what was going on. Perhaps something knocked the system offline overnight? The DSL light on the modem was still on, so I had a connection to the DSLAM. No problem, reboot the router and we’ll be fine. So, I rebooted and walked away. After a few minutes I checked the system and noticed that I was still not able to get online. I walked through a mental checklist and decided that the username and password for the PPPoE connection must be failing. Time to call Verizon and see what’s wrong.

I contacted Verizon and first spoke to a sales rep who informed me that my services had been cancelled per my request. Wonderful. All that work and they screw it up anyway. I explained what I had done and she took a deeper look into the account. Turns out the account was “being migrated” and she apologized for the mixup. Since I was no longer bundled, the DSL account had to be migrated. I talked with her some more about it and she decided to send me to technical support to verify everything was ok. Off I go to technical support, fully expecting them to ask me to reset my DSL modem. No such luck, however; the technical support rep explained that I had no DSL service.

And back to sales I went. I explained, AGAIN, what was going on. The representative confirmed my story, verified that the account was being migrated, and asked me to check the service again in a few hours. All told, I spent roughly an hour on the phone with Verizon and missed out on my morning exercise.

After rushing through the remainder of my morning routine and explaining to my wife why the Internet wasn’t working, I left for work. My wife checked in a few hours later to let me know that, no, we still did not have an Internet connection. So I called Verizon again. Again I’m told I have no service and that I have cancelled them. Again I explain the problem and what I had done. And this time, the representative explains to me that they do not offer unbundled DSL service anymore, they haven’t had that service in about a year. She goes on to offer me a bundled package with a phone line and explains that I don’t have to use the phone line, I just have to pay for it.

So all of the careful planning I had done was for naught. In an effort to make sure this didn’t happen to anyone else, the rep checked back on my account to see who had informed me about the DSL service. According to the notes, however, I had never called about such a thing. Instead, the notes claimed I had called to complain about unsolicited phone calls, at which point they had referred me to their fraud and abuse office and explained the magical phone code I could enter to block calls. Ugh! She then went on to detail every aspect of my problem, again so someone else didn’t have this problem.

This is the sort of situation that will, very rapidly, cause me to look elsewhere for service. And that’s exactly what I did. I’ve since cut all ties with Verizon and moved on to a different Internet service provider. I’m not happy with having to deal with this provider, but it’s the only alternative at the moment. Assuming I don’t have any major problems with the service, I’ll probably continue with them for a while. Of course, if I run into problems here, the decision becomes more difficult. A “lesser of two evils” situation, if you will. But for now, I’ll deal with what comes up.

So you want to talk at a conference

Last year at this time I was attending an absolutely amazing conference known as DerbyCon. It was an amazing time where I met some absolutely amazing people and learned amazing things. Believe me, there was a lot of amazing.

I attended one talk that really got me thinking about blue-team security. That is, defensive security, basically what I’m all about these days. And I decided that I wanted to help the cause .. So, I started putting together the pieces in my head and decided I wanted to do a talk at the following DerbyCon ..

And so, when the CFP opened, I submitted my thoughts and ideas. Honestly, while I hoped it would be accepted, I didn’t think I had a chance in hell given the talent that spoke the previous year.. Boy was I wrong.. Talk accepted. And so I started putting things together, working on the talk itself, pushing forward the design I wanted for this new tool. I aimed high and came up a little short..

As luck would have it, this past summer was a beast. Just no time to work on anything in-depth .. And time went by. And before I knew it, DerbyCon was here.. I did a dry-run of my talk to get some feedback and suggestions. Total talk time? 15 minutes. Uhh.. That might be an issue.. 50 minute talk window and all..

So, back to the drawing board. Fortunately, I received some awesome feedback and expanded my talk a bit. The revised edition should be a bit longer, I would hope.. I’ll find out tomorrow. I’m talking at 2pm.

I’m terrified.

But I’m surrounded by some of the most awesome people I have ever met. I’ll be fine.. I hope..

The Future of Personal Computers

The latest version of OS X, Mountain Lion, has been out for a few months and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of adding new and radical features to a desktop operating system, features we’ve only seen in mobile interfaces. For instance, OS X has Launchpad, an icon-based menu used for launching applications similar to the interface used on the iPhone and iPad. Windows 8 has its new Metro interface, a tile-based interface first seen on their Windows Mobile operating system.

As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.

So let’s start with today. Touch-based devices running iOS and Android have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google’s Project Glass initiative, and more. It’s not much of a surprise, of course, as Linux has been making these in-roads for years and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.

The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.

For desktop and laptop systems, the traditional input methods of keyboards and mice will be enhanced with touch. We see this happening already with Apple’s Magic Mouse and Magic Trackpad. Keyboards will follow suit with enhanced touch pads integrated into them, reducing the need to reach for the mouse. And while some keyboards exist today with touchpads attached already, I believe we’ll start seeing tighter integration with multi-touch capabilities.

We’re also starting to see the beginnings of gesture-based devices such as Microsoft’s Kinect. Microsoft bet a lot on Kinect as the next big thing in gaming, a direct response to Nintendo’s Wii and Sony’s Move controllers. And since the launch of Kinect, hobbyists have been hacking away, adding Kinect support to “traditional” computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a Kinect intended for use with desktop operating systems.

Gesture based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces may look like. But life is far different from a movie. Humans were not designed to hold their arms in a horizontal position for long periods of time, a syndrome known as “Gorilla Arm.” Designers will have to adapt the technology in ways that work around these physical limitations.

Tablet computers work well at the moment because most interactions with them are on a horizontal and not vertical plane, thus humans do not need to strain themselves to use them. Limited applications, such as ATMs, are more tolerant of these limitations since the duration of use is very low.

Right now we’re limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface may provide endless insight into new designs. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but allowing such software to render in true 3D, where the designer can move their head instead of their screen to interact? That would truly be a breakthrough.

What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD style displays become the new norm? I’m quite excited at the prospect of using something such as Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s still far below the cost of similar cutting edge technologies a mere 5 years ago. And prices will continue to drop.

Perhaps in the far future, 20+ years from now, the input device will be our own bodies, a la Kinect, with a display small enough that it’s embedded in our eyes, or inserted as a contact lens. Maybe in that timeframe, we truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won’t follow suit, but for those of us with the interest and the drive, those will be interesting times, won’t they?

Multiple Personalities With The Linux Kernel

Virtualization is all the rage these days. Taking a single computer system and installing multiple “guest” operating systems on it. The benefits are a reduced footprint and better utilization of existing resources. There is a danger, however, in having many systems dependent on a single piece of hardware. The solution, of course, is to use multiple pieces of hardware and allow your “guests” to be moved between the individual hardware units, thus making the system more resilient to failure.

I’ve started playing a bit with virtualization, specifically, KVM virtualization. For my purposes, I’m using CentOS 6.x on a 64-bit capable system.

The hypervisor itself is a standard CentOS base install with the addition of KVM and various management packages. I installed the hypervisor on a RAID1 LVM volume, allowing me some room to grow if necessary, and reserving the remainder of the hard drive for virtual hosts. While you can use binary blobs for virtual disks, I prefer using a RAIDed LVM volume, which gives me the ability to grow the disk if necessary as well as minor bumps in speed.

Using yum, adding KVM to an existing installation is a pretty straightforward process :

yum install virt-manager libvirt libvirt-python python-virtinst

That should take care of any dependencies required to get KVM virtualization up and running.
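Depending on how minimal your base install is, you may also need to make sure the libvirt daemon is running before creating any guests. On CentOS 6 that’s roughly (a sketch, adjust to taste) :

service libvirtd start
chkconfig libvirtd on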

Next up, we need to tackle networking. There are many, many different configurations, far too many to go through here. So, I’m going to keep it simple. For my purposes, I need a single connection to the outside network, all in the same VLAN, as well as a local NAT for some VMs that I need local access to, but that don’t need to be accessed via the Internet.

Setting this up is brilliantly simple. First, copy the /etc/sysconfig/network-scripts/ifcfg-eth0 file to /etc/sysconfig/network-scripts/ifcfg-br0. Next, edit the ifcfg-eth0 file. You’ll need to remove a bunch of lines and add a BRIDGE line as follows :

DEVICE="eth0"
BRIDGE="br0"
HWADDR="00:11:22:33:44:55"
ONBOOT="yes"

Next, edit the ifcfg-br0 file. All you really need to do here is change the DEVICE= line to reflect br0. I also recommend disabling NM_CONTROLLED … NetworkManager shouldn’t be installed anyway since you used a base install, but better safe than sorry. In the end, the ifcfg-br0 file should look something like this :

DEVICE="br0"
BOOTPROTO="static"
BROADCAST="204.10.167.63"
IPADDR="204.10.167.50"
NETMASK="255.255.255.192"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"
NM_CONTROLLED="no"

Restart networking and you’ll be all set. The NAT portion of this is handled by KVM itself, so there’s nothing to do there. And networking should be all ready to go.
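For the restart itself, and to double-check that the bridge came up with eth0 attached to it, something along these lines works on CentOS 6 (brctl comes from the bridge-utils package) :

service network restart
brctl show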

Without guests, however, all you have is a basic Linux system with a few extra packages taking up space. The real magic starts when you create and install your first VM. My recommendation is to start with creating a template system you can clone later rather than hand-installing every single VM. To install the template, first decide on the base disk size. I’m using 15 GB volumes which is more than enough for the base install and leaves room for most basic server configurations. If you need more space, you can attach additional disks later.

I’m not going to go into how I set up LVM, there are plenty of tutorials out there. For the purposes of this article, I have a volume group named vg_libvirt where I plan to store all of the virtual machines. So first we create the disk necessary for the template :

lvcreate -L15G -n template_base vg_libvirt

Next we install the OS. virt-install is essentially a wrapper script that sets all the necessary values within KVM to get you going. After the settings are configured and the VM is started, virt-install will automatically attach you to the VM console. The full command I used to install is as follows :

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --extra-args="console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

Since this is effectively a text install, you do run into a bit of a problem. Namely, you can’t configure the drives the way you want. There is a way around this, though it takes a bit of work. Of course, since you’re creating a template, the little bit of work now is easily made up for later. So, here’s how I handled the drive configuration.

First, run through a basic install using the above install method. Once you’re up and running, log into the new VM and head to the root home directory. In that directory you’ll find a kickstart file called anaconda-ks.cfg. Make a local copy of that file and shut down the VM.
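The copy ultimately needs to land on the hypervisor, since that’s where virt-install runs. Something like the following works while the template VM is still up (the hostname is a placeholder, and I’m naming the file kick1.ks to match the command used later) :

scp root@template-vm:/root/anaconda-ks.cfg /root/kick1.ks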

The kickstart file gives you the basic parameters that CentOS used to configure the system. You can edit this file and use it yourself to automatically install and configure systems. For our purposes, we’re interested in editing the drive configuration and then using the kickstart file to create the template. So, edit the file and set the parameters as you see fit. An example is as follows :

# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --noipv4 --noipv6
rootpw --iscrypted somerandomstringthatiwontrevealtoyoubutnicetry
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --driveorder=vda --append=" console=ttyS0 crashkernel=auto"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --drives=vda
part /boot --fstype=ext4 --size=500
part swap --size=2048
part pv.253002 --grow --size=1
volgroup VolGroup --pesize=4096 pv.253002
logvol / --fstype=ext4 --name=lv_root --vgname=VolGroup --size=4096
logvol /tmp --fstype=ext4 --name=lv_tmp --vgname=VolGroup --size=2048
logvol /var --fstype=ext4 --name=lv_var --vgname=VolGroup --size=4096
logvol /home --fstype=ext4 --name=lv_home --vgname=VolGroup --size=2048
#repo --name="CentOS" --baseurl=cdrom:sr0 --cost=100
%packages --nobase
@core
%end

Once you have this, you can re-run the virt-install command from above with a slight tweak to make the install use the kickstart file you created (I named it kick1.ks) :

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --initrd-inject=/path/to/kick1.ks --extra-args="ks=file:/kick1.ks console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

This will nuke the existing VM and replace it with one configured with the drive partitions as set in the kickstart file. And now you almost have a template.

You could use this new VM as a clone, but if you’ve set an IP on it, you’ll run into duplicate IP problems. SSH keys on the machine will be cloned, making all of your systems contain the same keys. And other machine-specific settings will be cloned as well. This can be worked around, though.

I recommend that you first configure this new template with the basic settings you want on all of your VMs. For instance, if you’re using Spacewalk for server management, you can install all of the necessary spacewalk binaries. You can configure a standard iptables template for the system. Maybe you have some standard security software you use such as OSSEC. And, of course, create the standard users on the system so you don’t have to create them each time you clone the VM. Once everything is installed and running how you want it, perform the following actions to make the template :

touch /.unconfigured

rm -rf /etc/ssh/ssh_host_*

poweroff

The VM will power down and you’ll have your template. Cloning this to a new VM is quick and simple. First, create the new logical volume as we did before. Next, clone the VM to the new drive :

virt-clone -o template -n new_vm -f /dev/mapper/vg_libvirt-new_vm_base

Simple enough, right? Run this command and when it completes, you can start the VM and connect to the console. You’ll be greeted with the standard first boot process and then dropped at a login prompt. Congratulations, you now have a VM. Set the IP, configure whatever services you need, and you’re off to the races.

If you need to modify the RAM, number of CPUs, etc., then use the virsh command on the hypervisor. You’ll need to shut down the VM and restart it in order for these changes to take effect.
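As a sketch, the whole dance for a clone named new_vm (the name and the XML tweaks are up to you) looks something like this :

virsh shutdown new_vm
virsh edit new_vm     # adjust the <memory> and <vcpu> elements in the XML
virsh start new_vm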

And that’s really all there is to it. The VMs themselves can be treated as self-contained systems with no special care necessary … One note, however. If you reboot the hypervisor, the VMs are paused before rebooting and resumed after reboot. This leads to an interesting problem in that the uptime on a VM can easily exceed that of the hypervisor. Be aware of this and don’t depend on a VM’s uptime to be accurate.