It’s been a very odd weekend: lots of tech frustration, lots of tech success, and a few lamentations.
Last Thursday night, I spent a lot of time trying to solve a home theater issue: getting my TV to show content from my main media source. Based upon the specs for the Samsung LN52A750, I was trying to play content from my PC on my LCD. I tried the Samsung PC Share solution first, but I couldn’t get the PC to recognize the TV via plug-and-play. So I switched to TVersity. [Note: TVersity is absolutely the coolest free software solution for streaming videos and photos. In particular, it can do on-the-fly transcoding. It is well worth setting this tool up.] I set up TVersity to support DLNA as well as Windows Media Center, and I got my son’s Xbox to work perfectly.
But after several hours of fiddling around while Texas lost the national championship, I finally realized something: I didn’t have a Samsung LN52A750 LCD. Indeed, I had a Samsung LN52A650. So all my efforts would bear no fruit with my current LCD. Rats.
But my frustrations were only starting.
I had a few dollars left from a Christmas gift certificate (from Amazon), so I decided I needed some music to chill out. I got onto Amazon and decided it was time to download a few Earth, Wind & Fire songs. So I bought a “Best of” album. Unfortunately, it didn’t load properly. Worse, Amazon’s records showed that I had downloaded the songs even though the downloads had failed.
So I started searching for a means to contact Amazon support. In the past, I’ve had no trouble reaching them. This time was different: the only way I could reach them was via email. Once I did reach them, though, they were eager to help. They reset the album so that I could download the music. But as Saturday wound down, I tried again – and it failed again. So I left another support request and moved on to other matters.
I had been having trouble with my video driver for a couple of months. Every now and then, my system would get a blue screen in the video driver. And over the past couple of weeks, I had been getting a number of notifications that the video driver had failed and successfully restarted. The problem seemed to happen whenever I was driving the CPU and the graphics processor heavily. Sometimes it would happen when I was using Media Center. Sometimes it would happen when I was streaming something from Hulu. And sometimes it would happen when I was using VLC to watch some of my videos. Whatever the trigger, the result was frustrating.
As the problem became more acute, I began to suspect a possible hardware problem – though I wasn’t convinced. I had noticed that I had gotten a driver update in late October. And it seemed as if the problems began after that. But I could find no correlation to recent increases in reboots. Nevertheless, I started down the path assuming that I had a video error.
I got onto Intel’s website (as I use an Intel Graphics Media Accelerator). They did have a new update. So I gave that a whirl. Unfortunately, I got the typical message that this driver would not work with the custom HP implementation. Indeed, the new software refused to install.
But I never let error messages deter me. So I unpacked the executable file to my hard drive. From there, I updated the driver via Device Manager (selecting the driver path myself). I was able to load the driver. Now I had to reboot, wait and hope.
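For anyone who wants to try the same trick, here is roughly what it looked like from the command line. Treat this as a sketch: the package and folder names are illustrative, and the exact extraction method varies by vendor bundle (7-Zip can usually crack open these self-extracting installers).

```
:: Unpack the driver installer without running its setup logic.
:: (File and folder names here are illustrative, not the actual ones.)
"C:\Program Files\7-Zip\7z.exe" x intel_gma_driver.exe -oC:\Drivers\IntelGMA

:: Then, in Device Manager: right-click the display adapter, choose
:: "Update Driver Software...", then "Browse my computer for driver
:: software", and point it at C:\Drivers\IntelGMA
:: (with "Include subfolders" checked).
```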
After ten hours of complete stability, I am ready to declare an interim success. After a week, I’ll flag the issue as resolved. But in the meantime, it sure seems to be fixed. And as the weekend had been quite frustrating thus far, I was glad to declare a success somewhere.
So it was time to return my attention to Earth, Wind & Fire. My desire to seriously chill out had subsided but had not disappeared. So I had to solve the Amazon issue. Since their technical support was providing little real help, I decided to solve it myself. Figuring that this would take some time, I decided to download a couple of podcasts so I could listen while I surfed/researched.
But I couldn’t download any podcasts. Now I was starting to sense a pattern. With a new release of iTunes and some recent sharing changes (for TVersity), I started to wonder whether I was having permission problems with my media library. So I dug a little deeper. And voila, the problem became manifest. Somehow, my file permissions had changed. I was no longer the owner of my media directories. And even though I was an administrator of the system, I could not update my own files. Arghhhh.
So I went ahead and formally took ownership of all files and directories in the media library. From there, I changed file permissions. Of course, it was not quite this simple. In the end, I had to delete all permissions on all files and then rely upon inherited permissions only. But once I did this, things started to whirl into action again. My podcast downloads worked. And I could finally download files from Amazon.
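For anyone fighting the same battle, the command-line route is far faster than clicking through the Security dialogs on thousands of files. A minimal sketch, assuming your media lives under D:\Media (adjust the path) and that you run these from an elevated command prompt:

```
:: Take ownership of the entire media tree.
takeown /F "D:\Media" /R /D Y

:: Reset every file and folder to inherit permissions from its parent,
:: wiping the explicit entries that were blocking access.
icacls "D:\Media" /reset /T
```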
In the final analysis, I got the problem solved. But I am still not entirely sure how the permissions on my media library were changed. In troubleshooting parlance, I had a solution – but I had no root cause identified. So I can’t guarantee that it won’t happen again. But as of last night, I understood my home theater challenges, I had fixed my unexplained reboots and I could download the media content I wanted.
In celebration, I’m finally listening to some exceptional music. And it does help me chill a bit.
-Roo
Whose Leash Is It? – Mobile Phone Development
A few weeks ago, my brother-in-law asked if I would be interested in developing an iPhone application for him. I won’t explain the app or its details as that would violate the NDA that I am under. 😉 Nevertheless, I thought that this might be fun as I haven’t played with Apple’s development platform since 1995.
Well, Apple hasn’t changed. It appears that their goal is to lock you in a comfortable room and make sure you never leave – even if you can’t afford to stay. In order to build an iPhone app, you need to use Apple tools. That started off simply enough. I tried to put together a Cygwin environment on my Windows 7 system. After a few days, I did have a working environment in which I could build Unix apps. But the iPhone SDK isn’t just any old Unix environment. It absolutely needs Mac OS X – and OS X 10.5.3 for good measure.
I don’t have a Mac. So I figured that I could put together a development environment using VirtualBox or VMware. And if you have enough patience (and can find the right image files), you can run OS X 10.5.2 (through 10.5.5) within a VMware host. But to do it legally, you need to buy a license for the OS as well as purchase the iPhone SDK. Before I plunked down any coin of the realm, I had to try it out first. And after a couple of weeks of tinkering, I found that I could indeed build a virtual environment that would run the iPhone SDK.
But performance was labored. And to do it properly, you really need VMware Workstation (not VMware Player). So the final cost for putting all of this together would have been a couple of hundred dollars. But you can get a Mac Mini for a few hundred bucks. And with that, you can remote onto a headless device that is more than adequate for compiling the code. In short, I would need a couple of hundred dollars to go the VM route, or a few hundred dollars for a fully functioning Apple hardware platform.
But that is just for starters. Add to that the cost of the iPhone (or iPod Touch) and the cost of the service contract. And when you are done, you have access to one platform on one carrier. In my mind, that is both a fully closed and a highly distasteful investment.
As a former Sprint employee, I had always hoped that Sprint would be the team that would bring forth the best and brightest from a cool new platform. I was wrong. Verizon has brought a solid contender into view with the Moto Droid. And they have brought the marketing pizazz that the Android platform really needed. So I started wondering what it would take to bring together a functioning development platform.
After being disheartened by the cost of an iPhone development platform, I was thrilled at what I found when constructing the Android development platform. First, I needed the SDK. Lo and behold, the SDK could run on any platform that would support C/C++. And the SDK was free.
And the reference platform for the IDE is Eclipse – which is also freely available. Being a former Java developer, I had no problems getting re-acquainted. I downloaded Eclipse and then downloaded the Android Development Tools (ADT). All along the way, these investments required no financial outlays. And the Android platform even includes an emulator, so I could do rudimentary testing – even w/o the hardware.
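For the curious, standing up the emulator half of the testbed took only a couple of commands once the SDK was unpacked. A sketch from memory; the target ID and AVD name below are illustrative:

```
:: List the platform targets known to your SDK installation.
android list targets

:: Create an Android Virtual Device against one of those targets.
android create avd --name droid_test --target 2

:: Boot the emulator so apps can be tested without any hardware.
emulator -avd droid_test
```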
So here is the bottom line: the iPhone costs some serious scratch for the privilege of being locked onto a single hardware provider and a single carrier. On the other hand, Android’s barriers to entry are negligible. I put together a functioning testbed in a couple of hours – including the download time. And once done, I have a platform where I can build apps for any carrier and any number of hardware providers.
Indeed, this reminds me of the Apple-Microsoft PC wars of the nineties. Will Apple ever learn from their mistakes? And will developers choose to be on yet another vendor-dictated leash?
-Roo
Chrome: More Than A Browser – Less Than A Desktop
Take a look at the picture above. What do you see? Here’s a quick summary:
- That’s Windows 7 running on my system. Yeah, it’s the release candidate as I haven’t upgraded to the GA version – yet.
- You see Tweetdeck. While I like other clients, I can’t quite swallow the Seesmic brew that includes Silverlight.
- You also see a Chrome browser. I like a lot of things about the Chrome browser. But oddly enough, I still have to use Firefox to edit my posts to WordPress.
- While hidden by a few windows, you also see Windows Media Center.
- For those who are looking carefully at the task bar, you see an icon for Eclipse. I’m using that for my Android development environment.
- Sun’s VirtualBox is running on the desktop, along with several operating system images. (A sketch of how such a VM can be stood up from the command line follows this list.)
- One of those operating systems is the Chrome OS. And that VM is running. In the image, you’ll see what looks like a Chrome browser. There’s a tab for GMail and a tab for GCal. You’ll also see the Start/Welcome tab. There’s a pretty good chess game and there are a lot of web apps.
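As promised, here is roughly how a guest like the Chrome OS VM gets stood up from the VirtualBox command line. This is a sketch only: the VM name, memory size and disk path are illustrative, and you still need to find (or build) a bootable Chrome OS disk image separately.

```
:: Register a new VM and give it some memory.
VBoxManage createvm --name chromeos --register
VBoxManage modifyvm chromeos --memory 1024 --ostype Linux26

:: Attach a pre-built Chrome OS disk image (path is illustrative).
VBoxManage storagectl chromeos --name ide0 --add ide
VBoxManage storageattach chromeos --storagectl ide0 --port 0 --device 0 --type hdd --medium C:\VMs\chromeos.vdi

:: Boot it.
VBoxManage startvm chromeos
```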
So what is Chrome? Is it a desktop? Nope. Is it just a browser? Nope. It IS a down-payment on Google’s gambit to move people from desktop apps to cloud/network services. And it is a completely open framework for new innovation.
Will it win? Well, it won’t displace Windows on new system sales – at least, not yet. Will it be the platform for netbooks? Maybe. But they may be fighting against Android for that honor.
But unlike other desktop contenders, this offering is not designed for a head-to-head fight with Windows. Unlike Safari and Mac OS X, this platform is not seeking to be another desktop in the fight. Rather, it seeks to move the battlefield to an entirely new venue. This is the same fight that Sun started with the NC (i.e., the “Network Computer”). But Sun had no traction in the consumer marketplace. And they saw meager penetration in the enterprise space.
But Chrome OS is the inheritor of a unique phenomenon: some of the best technologies are a redux of something that already existed. MP3 players existed for quite some time before the iPod arrived. The Apple iPod won because it captured the consumer imagination. In the same way, Chrome OS is a redux of things we’ve seen before. Can Google transform a moribund market for network computing? I sure hope that they will. But they will need a spark for that to happen. In the mobile phone industry, I think that the Verizon Droid may be the spark needed for Android’s explosion into the market.
In a very strange way, Chrome OS’ real competitors may be the netbooks and wireless platforms like Android.
-Roo
Star (and Google) Gazing
I love the classics. And this week has been replete with allusions to the past. As everyone knows, I’ve fiddled with Google technology for a very long time. Indeed, I remember when the first posts about Google hit Slashdot. That was when Yahoo! had the pre-eminent navigation technology. And web navigation was menu-based, not search based. But I prattle on… as usual. I also remember when I was given an opportunity to invest in the Google IPO. [And hindsight confirms that I can be extraordinarily short-sighted.] And with all of this Google background / engagement, it’s taken me a whole lot of time to come to the conclusion that Google has a very expansive strategy – or they are exquisitely fortunate.
So what leads me to think they have a strategy? Here’s the short list:
- They have a fantastic base. From that base, they are the center for web navigation. As that center, they can skim their advertising taxes. Indeed, they are to the Internet what broadcast TV was in the latter-half of last century. Specifically, they are the launch point for content.
- They recognize that the browser is the current (and near-term) means to leverage their launch pad. Consequently, they are offering a branded browser. Do I like Chrome? Yes, I really do. Is it still a bit buggy and problematic? Yes, indeed it is. For example, I still have trouble using the “out-of-the-box” Chrome with the WordPress hosting site. In fact, I have to use Firefox as there are still scripting issues with the current Chrome dev branch (and WordPress). But Chrome is my default browser on most of my platforms – the only exception being my default workstation in the office. And yes, I work for a company that requires IE.
- Google has some hella-good “cloud” apps. This includes GMail, Google Docs, GTalk, Google Maps, Google Latitude, Google Earth and even Picasa. Many of these apps are my primary apps in specific app categories. And the Google app strategy seems to be squarely targeted to network-based apps. As I am always switching from machine to machine, I really need storage on the network. Right now, this includes email, bookmarks, preferences and the like. But in the future, it WILL include a whole lot more. And this isn’t just for personal use. More and more of our corporate apps are “stateless” and require network connectivity.
- Google has also laid down a marker in the enterprise collaboration space. Google Wave extends the promise that new collaboration technologies will eventually transform current email systems. Is Wave there yet? It sure isn’t. But it shows obvious promise. I think of it this way: Lotus Agenda showed glimmers of what became Lotus Notes. In the right hands, Google Wave will evolve into something truly spectacular. Of course, it really does need someone with vision – and technical chops.
- Google has also taken a few bold steps into the development market. Are they building an IDE? Not yet. But they are aligning themselves with Eclipse. And they are investing immense amounts of money in both JavaScript and in the development of a whole new language: Go.
- Google has leveraged their expertise in Linux in order to build embedded systems expertise. I have used Linux for years – since the mid-nineties to be precise. And desktop Linux has always eluded critical mass. Is it cool? Sure. Is it going to replace the current desktop paradigm? Probably not. But Google’s approach has been to change the paradigm (and move apps off the PC). So they’ve used their platform expertise to build new platforms. To this end, they realized the success of the iPhone and knew that it was not just a hand-held phone story but also a development platform story. So Android was born. Is Android a game-changer? Not yet. Will it become a game-changer? Most definitely. And the Verizon Droid may just be the match that lights the conflagration.
- While Google has recognized that their browser is important, they’ve realized that the browser must also run on a platform that runs other applications. Hence, the Chrome team has focused on “native client” technology. I’ve written about native client before. But as I consider Android (and Chrome OS), I realize just how important native client will become. It is important for the purpose of performance. But it also holds immense promise for running those pesky apps that aren’t network-based. Indeed, native client (combined with the right virtualization engine) may hold the key to unlocking the Microsoft shackles that constrain most of us.
- And this week, Google demoed what many think will be a coup de grâce: Google Chrome OS.
Is Chrome OS going to dethrone Microsoft Windows? Not any time soon. Is Chrome OS going to take market share from Apple’s Mac OS X? Again, I wouldn’t expect that to happen any time soon. But could it attack both by changing the battlefield? It absolutely could.
But what will it take for Google to accelerate these changes? Wow, that is a huge question. I think that they need the following:
- Google has a great strategic vision. But from the outside, it looks as if they lack someone who has the chops (and cred) to execute on the vision. This will mean some additions (or changes) to the senior leadership at Google. Someone must be given a couple of years to build the tactical plan from the strategic plan.
- Google needs some platform partners. By this, I mean that they need a Hewlett Packard (or some other company) to provide home-based “server” products that can wean households off the Microsoft desktop teat. This won’t be desktop Linux. It will be household servers that store files, stream applications, automate systems, store and stream media, etc. Do the components exist? Yes, they do. But they need a tactical vision to place the household server into new houses. That way, everyone in the house can use a netbook (or other untethered device). [Note: I think that Google is showing that they can effectively manage such partnerships. For evidence, look at the Android strategy. They are doing exactly what Apple is not: Google is building a cooperative eco-sphere that features their carrier partners. Again, they are doing what Microsoft couldn’t do (or wouldn’t do) with Windows Mobile.]
- Google needs to double-down on their investment with developers, developers, developers. Microsoft earned the allegiance of a generation because they blatantly pandered to developers. And many developers have rewarded them with unflagging fealty. Google needs to do the same thing. But in this case, they need to invest in Eclipse. And they need to carry through on the promise of new languages. I would hate to still be coding C/C++ (or worse, Java or C#) in a decade.
- Google needs to either develop (or sponsor) a number of emerging virtualization platforms. I would have preferred to see VirtualBox in Google’s hands. But Google needs to sponsor free and open virtualization platforms. Even Microsoft realizes just how much VMWare has changed the game in data centers. And Google has so much more to offer in this space. Indeed, I would love to see some of their data center management technologies emerge into the mainstream. Think Loudcloud/Opsware meets Amazon AWS.
- Finally, Google needs the time for all of the elements to cook. Strategic visions like this take years to gestate and mature. And Google needs to remember that they can’t get it all at once. But unlike Microsoft, their core business is NOT dependent upon a single iteration of the technology wheel. Google is a marketing and advertising company. As long as they keep that core engine going for a few more years, they will have a good shot at allowing new technologies to thrive as they grow within the nest.
So am I like the early astrologers? Am I trying to see patterns and visions in the visible stars? Do I see Ursa Major and not realize how far apart these stars are from one another? That’s certainly possible. I may be seeing non-existent patterns. But from my perspective, I really do see an emerging Google leviathan.
Just as we moved from the IBM mainframe vision to the Microsoft PC vision, are we finally seeing the market leader emerge on the long-anticipated move from the Microsoft PC vision to the Google service vision?
-Roo
Is Anybody There? Does Anybody Care?
Chrome Extends Its Capabilities
Amid the buzz over the Native Client support that came in 4.0.220.1, little attention was paid to another addition in the dev branch: extensions are here (or soon will be).
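If you are on the dev channel and extensions don’t seem to be available, they may still sit behind a launch switch. A hedged note: the flag below is from my memory of the early dev builds, and newer builds may enable extensions by default.

```
:: Launch the dev-channel browser with extension support enabled.
chrome.exe --enable-extensions
```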
The initial support is good – but it needs some polish. The existence of sites like Chrome Extensions will surely help. And the good news is that some must-have Firefox extensions are now available in Chrome – including AdBlock+. I can’t wait for NoScript and FoxyProxy (or would it be ChromoProxy?) to arrive.
In the meantime, I really do like having things like Bubble Translate. I can highlight text I want to translate and simply click a button. Bam! I see the comments I need to understand. I haven’t compared this to other extension-based translators. But it is really handy to have – especially as there is a growing international support community for Chrome.
-Roo
Going Native (Client) – For Today
It’s been a while since I’ve taken the time to actually post anything substantial on this site. I have been swamped at work. And I have focused more of my personal time on family matters and on micro-blogging. And today was going to be no exception to that rule.
I got up first thing this morning. My intentions were to spend time on yard work and to enjoy time away from the computer. After two months of heads-down work, I wanted the break.
But I decided that I would spend a few minutes on Google Reader. That was my first mistake. It didn’t take long for me to notice lots of kerfuffle about Google Chrome and built-in support for Native Client. I remembered the Native Client buzz from Google I/O but I hadn’t really dug into the subject. That changed this morning.
Native Client is a Google Code initiative that has developed into something far more transformative. Stated simply, Native Client is a way for web applications to access/run native instructions on an x86 system.
That’s nothing new, per se. Indeed, the idea of interpreting code to make it portable has been around for a long time. When I built my first computer (a Heathkit H89 system), I decided to run UCSD’s Pascal p-system as an OS. For those too young to know what I’m talking about, this was a PC operating system that ran completely on interpreted pseudo-code (i.e., interpreted byte-code). Since then, the more obvious examples of this are Java and .Net applications.
And Google is now building their own instance of portable code. I think this is all preparatory for Chrome OS and the cloud-based services that they are soon to unleash upon the computing world. But that is a subject for a different post. Today’s post is about getting started in Native Client.
So after sharing a few articles in Google Reader and tweeting a little bit about it, I decided to launch down the path of understanding it by installing the new platform. But that was easier said than done. Now that I am done, it doesn’t seem all that hard. But it took me quite a few fits and starts.
Before launching into the list of tasks, let me note that I did all of this work on my Windows 7 system. So I spent a lot of time figuring out which issues were part of the Native Client experience and which were part of the Windows 7 experience. But I’ve been through the tunnel and it isn’t nearly as hard as it seemed while stumbling in the dark.
Here’s what I did (a condensed command recap follows the list): [I’ll update this post with links after I finish my dinner.]
- I downloaded the latest version of Chrome (4.0.220.1). Actually, this happened automatically.
- I enabled the browser to run the Native Client. [You must add a run-time option to your Chrome invocation. In my case, I used the following: chrome.exe --enable-nacl]
- I downloaded the Getting Started guide and I realized that there was far more to Native Client than just the browser additions. Indeed, I needed to download and install the Native Client interpreter.
- Before I could launch into that installation, I needed to install Python. I hadn’t installed that onto the Windows 7 system yet. So off I went to http://www.python.org.
- Of course, Python wasn’t enough. I also had to install the PyWin32 extensions.
- Once I had Python installed, I ran the installation and configuration steps in the Getting Started document. Of course, things failed. At first, I saw errors indicating that Visual Studio was not properly installed. Huh? So I had to actually go into the installation scripts. Once there, it was obvious what the problem was: I didn’t have a working C compiler or development environment on this system.
- The next step took way too much time. I had to decide which C compiler I should install. I don’t have a license for VS 2008 at home. So I had to decide whether to use Cygwin, MinGW or gcw. I had read that there were problems with Cygwin so I tried gcw. No joy. I then tried MinGW. Also, less joy than anticipated. Since I had great success with Cygwin on other systems, I decided to try it after all. Well, I had no troubles at all installing and using Cygwin with Native Client. [Note: The only problem with Cygwin was apparently a problem with the zip functionality in Cygwin. I avoided this and had no issues whatsoever.]
- Now that I had a compiler and a scripting engine, I could actually run the installer as delivered from Google. Lo and behold, things began to work. But I still needed a local web server for testing. Since I didn’t want to use any of my other web servers lying around the bat cave, I decided to try the httpd instance that comes with the Native Client code itself. [BTW, I still don’t know if I like a Python-based web server, but it works fine. So why not.]
- I ran the Mandelbrot and Life examples from the command line. And they worked flawlessly. So it was time to move on to the browser tests.
- I tried the samples in Chrome and ended up getting 404 errors thrown by the web server. I was not feeling happy. But I wanted to get this done. So I pushed ahead and installed the Firefox extension that ships with Native Client. And once I used it, everything worked. All the sample apps worked like a charm. At some point (probably tomorrow) I’ll try to get Chrome to use the Native Client environment.
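To recap, here is roughly the command sequence that got me to working samples. Consider it a sketch rather than a recipe: the SDK path, the build invocation and the web server port are from memory, so check the Getting Started guide for the authoritative versions.

```
:: Launch the dev-channel browser with the Native Client run-time option.
chrome.exe --enable-nacl

:: From the unpacked Native Client SDK (path illustrative), build the
:: examples; the SDK drives its build with Python/SCons scripts.
cd C:\nacl\native_client
.\scons.bat

:: Start the bundled Python web server for browser-based testing
:: (script name and port number are from memory).
python tools\httpd.py 5103
```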
Now that I can sit down and think about what I’ve done, I realize that this is still a developer preview. But the only reason that it is in such a state is because no one has packaged everything up properly. The code works. And it has immense promise. It just needs someone to put a pretty wrapper around it.
And that person won’t be me – at least, not tonight.
-Roo
*Update: The solution to the Chrome issue was simple. I misspelled the execution parm. Once corrected, Native Client goodness is available within my Chrome browser. w00t!
**Update: Curious… Native Client works with an invalid parameter (--enable-nacl) but doesn’t work with the valid one (--internal-nacl). And it also works w/o any parameter. I wonder if Chrome is using the NPAPI plugin.
Summer of Insecurity
First, I need to apologize to many of my faithful readers. I think I’ve finally succumbed to the Twitter disease. As many of you know, I’ve been using Twitter for over two years. Indeed, I’m one of those technology saps that picked it up, set it down, and picked it up again.
And I really love Twitter. You can connect with others at the same time that you post your thoughts on any subject. And for me, it has the added value that you only have to edit a 140-character posting.
I state all of this for one reason: I must apologize to my readers as I have forsaken the “long form” for the micro-blog. It has been almost a month since my last post to this blog. And that is thoughtless of me. If I want you to continue to read the things that I write, I must continue to write them. In the meantime, I’m trying to work out an adequate penance. Please leave me a comment with your ideas on how I can atone for the sin of neglecting my readers.
Now, on to the meat of today’s missive…
Last month, I started a security voyage. Much of the reason for being so concerned about security is that Noah has challenged me. He didn’t even realize that he had challenged me. But those pesky Starbucks conversations have a way of provoking an immune response. He would tell me about going to Defcon and how thrilled he was to meet with his friends in the hacker community. His joy at being able to “crack” technology barriers piqued my concerns. So it was time to convert concern into action.
Last month, I knew I needed to address some chronic architectural flaws. Think of last month as stiffening and strengthening the girders. I put a VLAN in place to isolate the most insecure aspects of my infrastructure from the most valuable jewels in the collection. I turned off all but the most necessary of protocols. I began utilizing a lot of tunneling. This allowed me to lessen the surface area of my risk. But it just put all of my “risk” into one basket. In effect, I had one basket of very dense risk.
As I type these words, I think of the last scene in Terry Gilliam’s “Time Bandits.” In that scene, the totality of evil to be found in the movie is condensed down to a single charred briquette of absolute evil. That’s what I had unintentionally created last month.
As of yesterday, I started to address some of that evil by working on the doors and the locks that protect my house. I’ll start by noting that I do have a few web servers that are relatively open. These are the webcams I referred to last month. They are older and inherently less secure. But they are now “isolated” and provide rather limited value to an intruder – unless you want to watch me typing on the computer or loading my new panniers.
But I’m wandering off topic…
Yesterday morning, my biggest “door” was the cable modem connection and the wireless router that I use at home. I’ve been pretty good about securing the wireless. And last month, I closed a whole bunch of windows on the facade (i.e., open ports for unneeded services). But the locks on my front door weren’t very solid. Yes, I use a custom firmware build. And yes, I use ssh for the majority of my access needs. But it wasn’t a strong enough lock. So I set to work on replacing the locks on the front door.
- I started by using Steve Gibson’s “Shields Up” service. I quickly noted that while port 22 was open (as intended), a remnant of port 80 was still visible. After stumbling through some documentation, I realized that there are a couple of “options” in the DD-WRT firmware that I needed to tweak. In order to really lock down the leakages, I had to set some nvram options as well.
- I then improved the locks by switching from password-based authentication to a PKI approach. Using PuTTYgen, I created a 1024-bit public/private key pair for myself. [No, I haven’t posted my public key on a keyserver yet.] I then generated a horribly long passphrase that I would remember. Now I had to get the public key onto the router. This proved to be quite a challenge. After editing the generated keyfile and using cut/paste operations (from Notepad into the router’s web GUI), all I had to show for it was a series of failures – on many levels. After what seemed like hours (but was actually just a few hours), I finally noticed that PuTTYgen places the public key it generates into a portion of its key generation window – and that output is quite a bit different than the output PuTTYgen writes into the keyfile. Every security wonk reading this must be saying, “Gosh, you’re kinda slow, eh.” Well, I guess I am. I took the text (in OpenSSH key format) and pasted it into the ssh public key segment of the DD-WRT -> Services dialog. And voila, things began to work.
- After adding the key through the GUI, I realized that I didn’t even want the management GUI (for DD-WRT) to be generally available – even from the LAN side of the router. So I set nvram parms so that the web GUI would not start at all. And if/when I needed it, I could start it via the command line. At this point, I had locked down ssh in my environment, right? The answer wasn’t quite that simple.
- Since I was still routing port 22 from the WAN interface to the WinSSHd instance on my main system, I still had a problem: ssh needed to be hardened on my Windows 7 device. I use WinSSHd. It is free for personal use. And since I’m a person, I felt I could take advantage of their generosity. From a personal viewpoint, I’ve used a variety of Windows SSH tools (including the full-featured Tunnelier product). And I think that the personal version of this tool is excellent. I set up the server to utilize my public key. I then went to my laptop. After setting up an additional session profile in PuTTY, I had a serviceable session established for testing. But for the life of me, I couldn’t get the crazy thing to work. I started to assume that it was a public key problem, as was the case with DD-WRT. But after a few hours of fumbling and trying a number of things, I started to get frustrated.
I finally noticed an inconspicuous link on the main WinSSHd server management page. It pointed me to the server management log folders. Well, I had already been through the session management logs. But I figured I’d give this a try. In a few moments, I was treated to a rich feast of information. And I casually noted that the key exchange was failing because the client was offering a 2048-bit key while the server was expecting a 1024-bit key.
It dawned on me that I had trouble copying the public keys to this machine many hours earlier. Earlier in the day, I couldn’t find my USB key. So I had used one of the Sandisk Cruzer drives my wife had squirreled away. And amidst all of the trouble associated with the U3 drivers for the USB device, I had probably copied the wrong version of the key that I had generated many hours earlier.
The solution was simple: I took the right key and loaded it onto my laptop. Once corrected, the ssh tunnel sprang to life. Here’s a reminder: when doing a multi-step project, write down what you do and when you do it. It may prove helpful at a later point in time.
- Once I got the tunnels working, I realized that I really didn’t want a 1024-bit key. So I regenerated new keys and deployed the public key component to both ssh servers (Dropbear in DD-WRT and WinSSHd on Windows). It only took a few minutes – now that I had solved the earlier issues. (For the OpenSSH-minded, equivalent commands appear after this list.)
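For the OpenSSH-minded, here are rough equivalents of what PuTTYgen and PuTTY were doing for me. A sketch under assumptions: the host name is illustrative, and the authorized_keys location for DD-WRT’s Dropbear is from memory (DD-WRT can also store the key via its nvram/GUI settings, which is what I actually did).

```
# Generate a 2048-bit RSA key pair protected by a long passphrase.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/home_router

# Append the public half to the server's authorized keys
# (Dropbear reads the same one-line OpenSSH public key format).
cat ~/.ssh/home_router.pub | ssh root@router.example.net 'cat >> /tmp/root/.ssh/authorized_keys'

# Open a tunnel that forwards local port 5900 to a VNC server
# sitting behind the router.
ssh -i ~/.ssh/home_router -L 5900:192.168.1.10:5900 root@router.example.net
```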
So after ten hours of security tinkering, I had installed stronger and more tamper-resistant locks on the one door I have onto the Internet. I am effectively tunneling all of the valuable protocols through ssh. So I’m feeling a lot better.
But after doing all of this, am I any safer?
That’s such a tough question to answer. I am smarter than I was a few hours ago. I know a lot more about PKI. And I know that having 2048-bit asymmetric keys is better than a weaker alternative. And I know that even longer keys may not be worth the effort. And I remember that if you want to stop casual hacking, you only have to have a stronger door than your neighbor.
But am I safer?
All the windows are shut. And I’ve got better locks on the door. But if someone wants to get in, there is precious little that I can do to stop them. So we need to remind ourselves that multiple layers may be the best defense. Even though the door is locked, put your valuables in a secure place. Some of my most sensitive data is not stored on my online systems. Indeed, that data may be in the form of offline media that I have in my desk or in a filing cabinet. But such distribution of data is not the only defense. Make sure that your computers are secured with strong passwords.
And try not to leave the keys near the locks. Some folks write down their passwords and leave them on a sticky note – just like the idiot office clerk in “WarGames.” If you must have a repository for passwords, use a secure password manager tool.
And always remember that security is a perpetual process of improving what you already have in place.
-Roo
Battening Down the Hatches at Home
How many times have you heard the phrase “batten down the hatches?” But do you know what it means? Well, it’s a nautical term referring to sealing ship hatches with strips of wood and caulk. This is done to prevent water from penetrating the hatches of the ship.
Well, I’ve been battening down the computing hatches here at Chez Roo. As most of you know, I’m focused on security – but not obsessed by it. I have a wireless network that is fairly well protected with WPA2/AES encryption, strong passkeys and strong credentials/passwords on all of the systems in the network. I use MAC filtering. And I try not to broadcast my SSID.
But nothing is totally secure. And every measure or counter-measure should be periodically reviewed. So when I added both a Wii and a new LCD TV to the wireless network, I figured that it was time to do a network review, as some of the new devices required that I enable SSID broadcasting on my main access point.
At the same time, I had finally gotten around to addressing some remote access problems. Specifically, I had finally been able to successfully configure my Windows 7 test system to allow remote management via either VNC or Windows Remote Desktop. Up until this week, I had tried to open all of the various ports needed for both products. But I really hate having lots of ports open to the Internet. So I reconfigured everything to tunnel through SSH. BTW, I’m using WinSSHd in a non-commercial role – and it is working fantastically well.
Of course, nothing is nearly as simple as it would at first appear. I use DynDNS to manage/publish the dynamic address that my cable provider doles out to me. So I installed an update to my DynDNS “updater” tool. I also switched over to OpenDNS in order to improve performance and to get some rudimentary namespace management tools.
So once I changed three or four things at the same time, things stopped working – of course. It turns out that when I cleaned up the router to eliminate the now-unnecessary port forwarding, I could no longer connect to the UltraVNC server on my main system. It was a simple problem. I had used the FQDN (in DynDNS) in the tunnel definitions I had put into PuTTY. So once I established a tunnel, it would try to connect to the external name (i.e., the router) on the real VNC and RDP ports. Of course, this wouldn’t work once I removed the port forwarding rules. How did I correct it? I decided to use the blunt force trauma approach: I updated my hosts file to point the external DynDNS name to localhost. Once done, things started working again.
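In other words, the blunt force fix boiled down to one line in the hosts file (the DynDNS name below is illustrative):

```
# C:\Windows\System32\drivers\etc\hosts
# Point the external DynDNS name at this machine so the PuTTY tunnel
# definitions resolve locally instead of hitting the router's WAN side.
127.0.0.1    myhome.dyndns.example.org
```

Once the name resolves to localhost, the VNC and RDP sessions ride through the SSH tunnel’s forwarded ports instead of looking for open ports on the router.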
And now was the time to call a friend and ask for a favor. While I trust my skills, I always want a set of unbiased eyes. So I called @ax0n and had him do a Nessus scan on my network. So what did he find? First, he found my wireless IP cameras. [Note: We put these in so that we could monitor the house while we were away.] And he also saw the other ports that I expected.
But when he saw the cameras, I decided that these were the weakest link in my security chain. You see, I run two different wireless networks. One supports the main systems in the house while the other supports the wireless cameras that we installed. The camera network is not nearly as secure as the main wireless router. That’s because the camera network is over five years old. And when it was first designed, WEP-128 was still the standard encryption model. But I didn’t want my whole household to be limited to WEP-128. So I set up an access point just for the cameras. That network uses WEP. I ran a separate network cable from the router to the camera AP so I could physically separate the traffic.
But I never took the next logical step. This weekend, I took that step. I set up a series of VLANs in the house. And the cameras are now on their own VLAN. Of course, this meant that I needed to reconfigure all of the cameras with new IP addresses. And that took quite a while, as I had to attach each one directly to my laptop in order to reconfigure it. It’s a simple process, but it does take time.
Then I had to set up the VLANs on the router. The good news is that I use DD-WRT. So VLAN setup is relatively easy. But in addition to adding the VLAN, I had to set up new autostart options in order to tie the VLAN to a specific physical port on the router. Finally, I had to update the built-in firewall to ensure that the camera VLAN couldn’t access the other systems behind the router. Yeah, this was the whole reason to reconfigure everything; I didn’t want someone to be able to connect to the camera network and then launch an assault against the more secure portions of my network.
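For the curious, the DD-WRT side looked roughly like the following. This is a sketch from memory: VLAN numbers, port mappings and interface names vary by router model and firmware build, so treat every value below as a placeholder.

```
# Map a physical switch port into its own VLAN. DD-WRT keeps the
# port-to-VLAN mapping in nvram ("3 5" here means switch port 3 plus
# the internal CPU port; values are illustrative).
nvram set vlan2ports="3 5"
nvram commit
reboot

# Firewall rule (added to the DD-WRT firewall script): block traffic
# from the camera VLAN to the main LAN bridge while still letting the
# cameras reach the Internet through the router.
iptables -I FORWARD -i vlan2 -o br0 -j DROP
```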
So the annual security review is drawing to a close. Yes, I expect that I may see a few more minor changes. But the major re-designs and major changes are done. And I sure am glad for that. I sure hope that the next minor project is as fun as this one has been!
-Roo