Broadband Haircut: Economics Meets Technology

Cutting the cord is a dramatic step - and a complicated one.
Cord Cutting Can Be Dangerous

I love it when I can blend my passion (for technology) and my training (in economics). Over the past six weeks, I’ve been doing just that – as I’ve tried to constrain household Internet usage. Six weeks ago, we began a voyage that has been years in the making: we’ve finally given ourselves a ‘broadband haircut’. And the keys to our (hopeful) success have been research, data collection, and data analysis.

Background

We have been paying far too much for broadband data services. And we’ve been doing this for far too many years. For us, our broadband voyage started with unlimited plans. Unlike most people, I’ve spent many years in the telecom business. And so I’ve been very fortunate to pay little (or nothing) for my wireless usage. At the same time, most household broadband was priced based upon bandwidth and not total usage. So we have always made our decisions based upon how much peak data we required at any given point in time.

But things are changing – for myself and for the industry.

First, I no longer work for a telecom. Instead, I work for myself as an independent consultant. So I must buy wireless usage in the “open” marketplace. [Note: The wireless market is only “open” because it is run by an oligopoly and not by a monopoly.]

Second, things have changed in the fixed broadband marketplace. Specifically, sanctioned, local access “monopolies” are losing market share – and revenue. There is ample evidence that cable companies charge too much for their services. For many years, they could charge whatever they wanted as long as they kept the local franchise in a particular municipality. But as competition has grown – mostly due to new technologies – so has the downward pressure on cable revenues.

Starting a few years ago, cable companies began to treat their fixed broadband customers just as wireless operators have treated their mobile customers. Specifically, they started to impose data caps. But many long-term customers just kept paying the old (and outrageously high) prices for “unlimited” services.

“But the times, they are a changin’.”

Cord Cutting Has Increased Pressure

As more and more content delivery channels are opening up, more customers are starting to see that they are paying far too much for things that they don’t really want or need. How many times have you wondered what each of the ESPN channels is costing you? Or have you ever wondered if the H&G DIY shows are worth the price that you pay for them?

Many people have been feeling the same way. And for some, the sense of abuse has become intolerable. Bundling and price duress have infuriated many customers. Some of those customers have been fortunate enough to switch operators – if others are available in their area. Others have simply cut the cord on bundled TV altogether.

And this consumer dissatisfaction has led to dissatisfaction in the board rooms of most telecom companies. But instead of reaching out to under-served customers and developing new products and new markets (both domestic and overseas), most telecom executives are looking for increases in “wallet share”; they are trying to bundle more services to increase their revenue. Unfortunately, the domestic markets are pretty much tapped out. “Peak cable” is upon most operators.

Nevertheless, some boards think that punishing their customers is the best means of revenue retention. Rather than switching to new products and new services, some operators have put debilitating caps on their customers in the hopes that they can squeeze a few more dollars from people that are already sick and tired of being squeezed. The result will be an even further erosion of confidence and trust in these corporations.

Making It Personal

Six weeks ago, we decided that it was time to cut the cord. We’ve been planning this for eighteen months. However, we had a contract that we needed to honor. But the instant that we dropped off our set top devices at Comcast, they brought out their real deals. In a matter of moments, we had gone from $125 per month (w/o fees) to $50 per month (w/o fees). So we took that deal – for one year. After all, we would be getting almost the same bandwidth for a tremendously reduced price. Ain’t competition grand?

But like most people, we didn’t know how much data we used while we were on an ‘unlimited’ plan. And in fairness, we didn’t care – until we started to see just how much data we were using. Bottom line: Once we had to pay for total consumption (and not just for peak consumption), we started to look at everything that would spin the consumption ‘meter’. And when we got the first email from Comcast indicating that we had exceeded their artificial, one terabyte (per month) cap [that was buried somewhere deep within the new contract], we began a frantic search for ‘heavy hitters’.

Make Decisions Based Upon Data
Pi-hole data points the way.
DNS Data

Our hunt for high-bandwidth consumers began in earnest. And I had a pretty good idea about where to start. First, I upped my bet on ad blocking. Most ad blockers block content after it has arrived at your device. Fortunately, my Pi-hole was blocking ads before they were downloaded. At the same time, I was collecting information on DNS queries and blocked requests. So I could at least find some evidence of who was using our bandwidth.
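
For anyone who wants to dig past the web dashboard, the Pi-hole keeps its long-term query history in a SQLite database (on my installs, /etc/pihole/pihole-FTL.db). The sketch below pulls the chattiest clients and the most-requested domains out of that history; the path and table layout reflect my own Pi-hole version (and assume the sqlite3 client is installed), so treat them as assumptions and adjust for yours.

         # Top ten clients by DNS query volume over the last 24 hours
         # (run with sudo if the database is owned by root)
         sqlite3 /etc/pihole/pihole-FTL.db "SELECT client, COUNT(*) AS queries
           FROM queries
           WHERE timestamp > strftime('%s','now') - 86400
           GROUP BY client ORDER BY queries DESC LIMIT 10;"

         # Top ten requested domains over the same window
         sqlite3 /etc/pihole/pihole-FTL.db "SELECT domain, COUNT(*) AS queries
           FROM queries
           WHERE timestamp > strftime('%s','now') - 86400
           GROUP BY domain ORDER BY queries DESC LIMIT 10;"

DNS query counts are not the same thing as bandwidth, of course. But they were enough to tell me which devices deserved a closer look.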

Pi-hole identifies largest DNS consumers.
Pi-hole Data: Biggest Ad Conveyors

After a few minutes of viewing reports, I noted that our new content streaming service might be the culprit. When we cut the cord on cable TV, we had switched to YouTube TV (YTTV) on a new Roku device. And when I saw that device on the ‘big hitter’ list, I knew to dive deeper. I spent a few too many hours ensuring that my new Roku would not be downloading ad content. And after a few attempts, I finally got the Pi-hole to block most of the new advertising sources. After all, why would I want to pay traffic fees for something that I didn’t even want?
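
For reference, the blocking itself was the easy part; it is just a matter of adding entries on the Pi-hole once the query log shows which hostnames light up when the streaming app starts. The commands below are a sketch; the domain names are illustrative placeholders rather than an authoritative list of Roku or YTTV ad servers.

         # Watch the query log live while exercising the device
         pihole -t

         # Blacklist individual ad/telemetry hosts (example names only)
         pihole -b ads.example-streamer.com telemetry.example-streamer.com

         # Or catch a whole family of hosts with one regex filter
         pihole --regex '^ad[sv]?[0-9]*\.example-streamer\.com$'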

The Price Of Freedom Is Eternal Vigilance

As is often the case, the first solution did not solve the real problem. Like President G.W. Bush in Gulf War II, I had prematurely declared success.  So I started to look deeper. It would have helped if I had detailed data on just which devices (and clients) were using what amounts of bandwidth.  But I didn’t have that data. At least, not then. Nevertheless, I had a sneaking suspicion that the real culprit was still the new content streamer.

Daily usage data shows dramatic usage reductions after solving Roku shutdown problem.
DD-WRT Daily Usage

After a whole lot of digging through Reddit, I learned that my new Roku remote did not actually shut off the Roku. Rather, their ‘power’ button only turned off the television set. And in the case of YouTube TV, the app just kept running. Fundamentally, we were using the Roku remote to turn the TV off at night – while the Roku device itself kept merrily consuming our data on a 7×24 basis.

The solution was simple: we had to turn off YouTube TV when we turned off the TV. It isn’t hard to do. But remembering to do it would be a challenge. After all, old habits do die hard. So I took a piece of tech from the electrical monopoly (ComEd) to solve a problem with the rapacious Internet provider. A few months ago, we had an energy audit done. And as part of that audit, we got a couple of TrickleStar power strips. I re-purposed one of those strips so that when the TV was turned off, the Roku would be turned off as well.

What’s Next?

Now that we have solved that problem, I really do need better visibility into the things that can affect our monthly bill. Indeed, the self-imposed ‘broadband haircut’ is something that I must keep doing all of the time. Consequently, I need to know which devices and applications are using how much data. The stock firmware from Netgear provides no such information. Fortunately, I’m not running stock firmware. By using DD-WRT, I do have the ability to collect and save usage data.

To do this, I first need to attach an external USB  drive to the router. Then I need to collect this data and store it on the external drive. Finally, I need to routinely analyze the data so that I can keep on top of new, high-bandwidth consumers as they emerge.
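
None of this requires anything exotic. DD-WRT is just Linux underneath, so a small script run from cron can append the router’s byte counters to a CSV file on the USB drive. Here is a minimal sketch under a few assumptions: the drive is mounted at /mnt, the WAN interface is vlan2 (check yours with nvram get wan_ifname), and interface-level totals are enough to spot trends. Per-device accounting takes more work (e.g., per-IP iptables accounting rules), but this is the general shape of it.

         #!/bin/sh
         # Append a timestamped WAN byte-counter sample to the USB drive
         WAN=vlan2
         OUT=/mnt/usage/wan_bytes.csv
         NOW=$(date +%s)
         mkdir -p /mnt/usage
         # /proc/net/dev: after the "iface:" prefix, field 1 is rx_bytes
         # and field 9 is tx_bytes
         grep "$WAN:" /proc/net/dev | sed 's/.*://' | \
             awk -v now="$NOW" '{ print now "," $1 "," $9 }' >> "$OUT"

Scheduled through DD-WRT’s cron facility (under the Administration tab), a five-minute interval yields plenty of resolution. The analysis step can then happen anywhere that can read a CSV file.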

Bottom Line

Economics kicked off this effort. Data analysis informed and directed this effort. With a modest investment (i.e., Pi-hole, DD-WRT, an SSD drive, and a little ingenuity), I hope to save over a thousand dollars every year.  And I am not alone. More and more people will demand a change from their operators – or they will abandon their operators altogether.

If you want to perform a similar ‘broadband haircut’, the steps are easier than they used to be – but they are still more difficult than they should be. Either way, there is one clear piece of advice that I would offer: start planning your cable exit strategy.

Home Assistant Portal: TNG

Over the past few months, I have spent much of my spare time deepening my home automation proficiency.  And most of that time has been spent understanding and tailoring Home Assistant. But as of this week, I am finally at a point where I am excited to share the launch of my Home Assistant portal. 

Overview

Some of you may not be familiar with Home Assistant (HA). So let me spend one paragraph outlining the product. HA is an open source “home” automation hub. As such, it can turn your lights on and off, manage your thermostat, open/close your garage door (and window blinds). And it can manage your presence within (and around) your home. And it works with thousands of in-home devices. It provides an extensive automation engine so that you can script countless events that occur throughout your home.  It securely integrates with key cloud services (like Amazon Alexa and Google Assistant). Finally, it is highly extensible – with a huge assortment of add-ons available to manage practically anything.
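
To make the ‘hub’ idea concrete: nearly everything HA manages is exposed as an entity that can be read or commanded through its REST API (in addition to the UI and the automation engine). The sketch below assumes HA is reachable at homeassistant.local:8123, that you have generated a long-lived access token (exported here as HA_TOKEN), and that an entity named light.family_room exists; all three are placeholders for your own setup.

         # Turn a light on through Home Assistant's REST API
         curl -X POST \
           -H "Authorization: Bearer ${HA_TOKEN}" \
           -H "Content-Type: application/json" \
           -d '{"entity_id": "light.family_room"}' \
           http://homeassistant.local:8123/api/services/light/turn_on

         # Read the current state of the same entity
         curl -H "Authorization: Bearer ${HA_TOKEN}" \
           http://homeassistant.local:8123/api/states/light.family_room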

Meeting Project Goals

Today, I finished my conversion to the new user interface (UI). While there have been many ways to access the content within HA before now, the latest UI (code-named Lovelace) makes it possible to create a highly customized user experience. And coupled with the theme engine baked into the original UI (i.e., the ‘frontend’), it is possible to build a beautiful portal to meet your home automation needs.

In addition to controlling all of the IoT (i.e., Internet of Things) devices in our home, I have baked all sorts of goodies into the portal. In particular, I have implemented (and tailored) data collection capabilities for the entire household. At this time, I am collecting key metrics from all of my systems as well as key state changes for every IoT device. In short, I now have a pretty satisfying operations dashboard for all of my home technology.

Bottom Line

Will my tinkering end with this iteration? If you know me, then you already know the answer. Continuous process improvement is a necessary element for the success of any project. So I expect to keep making rapid changes – starting almost immediately. And as a believer in ‘agile computing’ (and DevOps product practices), I intend to include my ‘customer(s)’ in every change. But with this release, I really do feel like my HA system can (finally) be labeled as v1.0!

Time Series Data: A Recurring Theme

When I graduated from college (over three-and-a-half decades ago), I had an eclectic mix of skills. I obtained degrees in economics and political science. But I spent a lot of my personal time building computers and writing computer programs. I also spent a lot of my class time learning about econometrics – that is, the representation of economic systems in mathematical/statistical models. While studying, I began using SPSS to analyze time series data.

Commercial Tools (and IP) Ruled

When I started my first job, I used SPSS for all sorts of statistical studies. In particular, I built financial models for the United States Air Force so that they could forecast future spending on the Joint Cruise Missile program. But within a few years, the SPSS tool was superseded by a new program out of Cary, NC. That program was the Statistical Analysis System (a.k.a., SAS). And I have used SAS ever since.

At first, I used the tool as a very fancy summation engine and report generator. It even served as the linchpin of a test-bed generation system that I built for a major telecommunications company. In the nineties, I began using SAS for time series data analysis. In particular, we piped CPU statistics (in the form of RMF and SMF data) into SAS-based performance tools.

Open Source Tools Enter The Fray

As the years progressed, my roles changed and my use of SAS (and time series data) began to wane. But in the past decade, I started using time series data analysis tools to once again conduct capacity and performance studies. At a major financial institution, we collected system data from both Windows and Unix systems throughout the company. And we used this data to build forecasts for future infrastructure acquisitions.

Yes, we continued to use SAS. But we also began to use tools like R. R became a pivotal tool in most universities. But many businesses still used SAS for their “big iron” systems. At the same time, many companies moved from SAS to Microsoft-based tools (including MS Excel and its pivot tables).

TICK Seizes Time Series Data Crown

Over the past few years, “stack-oriented” tools have emerged as the next “new thing” in data centers. [Note: Stacks are like clouds; they are everywhere and they are impossible to define simply.] Most corporations have someone’s “stack” running their business – whether it be Amazon AWS, Microsoft Azure, Docker, Kubernetes, or a plethora of other tools.  And most commercial ventures are choosing hybrid stacks (with commercial and open source components).

And the migration towards “stacks” for execution is encouraging the migration to “stacks” for analysis. Indeed, the entire shift towards NoSQL databases is being paired with a shift towards time series databases.  Today, one of the hottest “stacks” for analysis is TICK (i.e., Telegraf, InfluxDB, Chronograf, and Kapacitor).

TICK Stack @ Home

Like most of my projects, this one started when I stumbled onto something – in this case, the TICK stack. I use Home Assistant to manage a plethora of IoT devices. And as the device portfolio has grown, my need for monitoring these devices has also increased. A few months ago, I noted that an InfluxDB add-on was available for HassIO. So I installed the add-on and started collecting information about my Home Assistant installation.

Unfortunately, the data that I collected soon outgrew the SD card in my Raspberry Pi. So after running the system for a few weeks, I decided to turn the data collection off – at least until I solved some architectural problems. And so the TICK stack went on the back burner.

I had solved a bunch of other IoT issues last week. So this week, I decided to focus on getting the TICK stack operational within the office. After careful consideration, I concluded that the test cases for monitoring would be a Windows/Intel server, a Windows laptop, my Pi-hole server, and my Home Assistant instance.

Since I was working with my existing asset inventory, I decided to host the key services (or daemons) on my Windows server. So I installed Chronograf, InfluxDB, and Kapacitor onto that system. Since there was no native support for a Windows service install, I used the Non-Sucking Service Manager (NSSM) to create the relevant Windows services. At the same time, I installed Telegraf onto a variety of desktops, laptops, and Linux systems. After only a few hiccups, I finally got everything deployed and functioning automatically. Phew!
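
For anyone attempting the same setup, NSSM was the only unusual step. The commands below sketch what the service creation looked like on my Windows server; the install paths are examples from my machine, and each daemon may also need to be pointed at its own configuration file (e.g., influxd.exe with a -config argument).

         :: Run from an elevated command prompt (paths are illustrative)
         nssm install influxdb   "C:\TICK\influxdb\influxd.exe"
         nssm install chronograf "C:\TICK\chronograf\chronograf.exe"
         nssm install kapacitor  "C:\TICK\kapacitor\kapacitord.exe"

         nssm start influxdb
         nssm start chronograf
         nssm start kapacitor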

Bottom Line

I implemented the TICK components onto a large number of systems. And I am now collecting all sorts of time series data from across the network. As I think about what I’ve done in the past few days, I realize just how important it is to stand on the shoulders of others. A few decades ago, I would have paid thousands of dollars to collect and analyze this data. Today, I can do it with only a minimal investment of time and materials. And given these minimal costs, it will be possible to use these findings for almost every DevOps engagement that arises.
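
As a small taste of the payoff, the data is immediately queryable from the influx command-line client once Telegraf starts reporting. The example assumes Telegraf’s default database name (‘telegraf’) and its standard cpu input plugin; substitute your own measurements as needed.

         # Average CPU idle per host, in 15-minute buckets, over the last 6 hours
         influx -host localhost -database telegraf \
           -execute "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 6h GROUP BY time(15m), host"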

Continuous Privacy Improvement

In its latest release, Firefox extends its privacy advantage over other browsers. Their efforts at continuous privacy improvement may keep you ahead of those who wish to exploit you.
Firefox 63 Extends Privacy Lead

In the era of Deming, the mantra was continuous process improvement. The imperative to remain current and always improve continues even to this day. And as of this morning, the Mozilla team has demonstrated its commitment to continuous privacy improvement; the release of Firefox 63 continues the commitment of the entire open source community to the principle that Internet access is universal and should be unencumbered.

Nothing New…But Now Universally Available

I’ve been using the new browsing engine (in the form of Firefox Quantum) for quite some time. This new engine is an incremental improvement upon previous rendering engines. In particular, those who enabled tracker protection often had to deal with web sites that would not render properly. It then became a trade-off between privacy and functionality.

But now that the main code branch has incorporated the new engine, there is more control over tracker protection. And this control will allow those who are concerned about privacy to still use some core sites on the web. This new capability is not fully matured. But in its current form, many new users can start to implement protection from trackers.

Beyond Rendering

But my efforts at continuous privacy improvement also include enhanced filtering on my Pi-hole DNS platforms. The Pi-hole has faithfully blocked ads for several years. But I’ve decided to up the ante a bit.

  1. I decided to add regular expressions to increase the coverage of ad blocking. I added the following regex filters:
         
         ^(.+[-_.])??ad[sxv]?[0-9]*[-_.]
         ^adim(age|g)s?[0-9]*[-_.]
         ^adse?rv(e(rs?)?|ices?)?[0-9]*[-.]
         ^adtrack(er|ing)?[0-9]*[-.]
         ^advert(s|is(ing|ements?))?[0-9]*[-_.]
         ^aff(iliat(es?|ion))?[-.]
         ^analytics?[-.]
         ^banners?[-.]
         ^beacons?[0-9]*[-.]
         ^clicks?[-.]
         ^count(ers?)?[0-9]*[-.]
         ^pixels?[-.]
         ^stat(s|istics)?[0-9]*[-.]
         ^telemetry[-.]
         ^track(ers?|ing)?[0-9]*[-.]
         ^traff(ic)?[-.]
  2. My wife really wants to access some sites that are more “relaxed” in their attitude. Consequently, I set her devices to use the Cloudflare DNS servers (i.e., 1.1.1.1 and 1.0.0.1). I then added firewall rules to my router to block all Google DNS access. This should let me bypass ads embedded in devices that hard-code Google’s DNS servers (e.g., Chromecast, Google Home, etc.). The rules appear below, along with a note on keeping them (and the regex filters above) in place.

         iptables -I FORWARD --destination 8.8.8.8 -j REJECT
         iptables -I FORWARD --destination 8.8.4.4 -j REJECT
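
Two housekeeping notes on making these changes stick. On the Pi-hole side, the regex filters live in a plain text file (on my v4 install, /etc/pihole/regex.list), so loading them is just an append plus a resolver reload; the filename my-ad-regexes.txt below is a placeholder for wherever you keep the list. On the router side, ad-hoc iptables rules will not survive a reboot unless they are saved in the router’s startup or firewall script (on DD-WRT: Administration, Commands, Save Firewall).

         # Append the regex filters to the Pi-hole's regex list and reload
         # (my-ad-regexes.txt is a placeholder for your own list of patterns)
         cat my-ad-regexes.txt | sudo tee -a /etc/pihole/regex.list
         pihole restartdns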

These updates now block ads on my Roku devices and on my Chromecast devices.

Bottom Line

In the fight to ensure your privacy, it is not enough to “fire and forget” with a fixed set of tools. Instead, you must always be prepared to improve your situation. After all, advertisers and identity thieves are always trying to improve their reach into your wallet. Show them who the real boss is. It should be (and can be) you!

Youtube Outage Weakens Trust

Youtube Outage Damages Trust
Youtube Outage

Why do we trust cloud services? That’s simple: We trust cloud service providers because we don’t trust ourselves to build and manage computer services – and we desperately want the new and innovative services that cloud providers are offering. But trust is a fleeting thing. Steve Wozniak may have said it best when he said, “Never trust a computer you can’t throw out a window.” Yet how much of our lives is now based upon trusting key services to distant providers? Last night confirmed this reality for many people; the great Youtube outage of October 16 may have diminished the trust that many people had in cloud services.

A Quiet Evening…

It was chilly last evening. After all, it is October and we do live in Chicago. So neither Cindy nor I were surprised. Because it is getting cold, we are falling back into our more sedentary habits. Specifically, we have been having soups and chili. And last night, we had brats in marinara sauce. After dinner, we settled down to watch a little television. Cindy was going to watch “This Is Us” while I wanted to catch up on “Arrow”.

Everything was going serenely.

It had not been so the previous evening. We were having some trouble with one of the new Roku enhanced remotes. These devices use WiFi Direct rather than IR. And my specialized WiFi configuration was causing trouble for the remote. It was nothing serious. But I like things solved. So I spent six hours working on a new RF implementation for my router. [Note: At 0130 CST, I abandoned that effort and went back to my ‘last known good’ state on the router.]

…gone terribly wrong!

Yesterday morning brought a new day. I had solved the problems that I had created on Monday evening. Now, everything was working well – until the television stopped working. While I was watching “Arrow” and Cindy was watching “This Is Us”, I started getting errors in the YoutubeTV stream. Then I heard my wife ask the dreaded question: “Is there something wrong with the television?”  And my simple response was, “I’ll check.”

At first, I thought that it might have been the new ISP hookup. It wasn’t. Then I wondered if it was something inside the house. Therefore, I started a Plex session on the Roku so that Cindy could watch “Ant-Man and the Wasp” while I dug deeper. Of course, that worked well. So I knew that the problem lay elsewhere. Was YoutubeTV itself the problem? I tried it while disconnected from our network (i.e., on my phone, which is on the T-Mobile network). When that didn’t work, I knew that we were part of a larger problem. My disappointment grew because we had just switched from cable TV to streaming YoutubeTV. But it was Google. So I figured it would be solved quickly.

I decided to catch up on a few Youtube channels that I follow. And I couldn’t reach them either. My disappointment grew into astonishment: could Google be having such a widespread problem? Since I had network  connectivity, I searched DuckDuckGo and found many links to the outage. And we just happened to use all of the affected services (i.e., Youtube and YoutubeTV). My wife was happy to watch the movie. And I was happy to move onto something else – like Home Assistant.

And Then The Youtube Outage Occurred

As I started to think about this outage, I wondered what might have caused it. And I mentally recited operations protocols that I would use to find the root cause and to implement irreversible corrective actions. But those steps were currently being taken by Google staff. So I focused on what this might mean to end users (like myself). What will I do with this info? First, I can no longer assume that “Google couldn’t be the problem.” In one stroke, years of trust were wiped away. And with the same stroke, days of trust in the YoutubeTV platform were discarded. Unfortunately, Google will be the first thing I check when I go through my problem-solving protocols. 

Eventually, I will rebuild that lost trust – if Google is transparent in their communications concerning the Youtube outage. Once I learn what really happened, I can let time heal the trust divide. But if Google is not transparent, then distrust will become mistrust. Here’s hoping that Google doesn’t hide its troubles. In the meantime, their customers should demand that Google fully explain what happened.

I Am Not A Product!

I have been a technology “early adopter” all of my life. And I have been a “social media” adopter since its inception. Indeed, I joined Twitter in the fall of 2006 (shortly after its launch in July 2006). I was also an early adopter of Facebook. And in the early days, I (and many others) thought of these platforms as the eventual successors to email. But as of this moment, I am part of the growing stream of people abandoning these platforms.

Why am I abandoning these platforms? They do have some value, right? As a technologist, they do “connect” me to other technologists. But it seems that even as I become more connected to many of these platforms, I am becoming even more disconnected from the community in which I live. 

At the same time, these platforms are becoming more of a personal threat. This week, we learned of yet another data breach at Facebook. I am sure that there are millions of people that have been compromised – again. After the first breach, I could make a case that Facebook would improve their system. But after the numerous and unrelenting breaches, I can no longer make a case that I am “safe” when I use these platforms.

Finally, these platforms are no longer fostering unity. Instead, they are making it easy to be lax communicators. We can abandon the civility of face-to-face dialog. And we can dismiss those with whom we disagree because we do not directly interact with them. Consequently, we do not visualize them as people but as “opponents”.

Social media was supposed to be about community. It was also supposed to be a means of engaging in disagreement without resorting to disunity. Instead, most social media platforms have degenerated into tribalism. And for my part in facilitating this devolution, I am exceedingly sorry.

I will miss a lot of things by making this stand. Indeed, my “tribe” (which includes my family) has come to rely upon social media. But I can no longer be part of such a disreputable and inharmonious ecosystem. 

Hopefully, I won’t miss it too much.

By the way, one of the most important benefits of disconnecting from the Matrix is that my personal life, my preferences, and my intentions will no longer be items that can be sold to the highest bidder. It is well said that “if you are not paying for the product, then you probably are the product.” So I’m done with being someone else’s product.

As for me, I am taking the red pill. Tata, mes amis

#FarewellFacebook

VPNFilter Scope: Talos Tells A Tangled Tale

IoT threats
Hackers want to take over your home.

Several months ago, the team at Talos (a research group within Cisco) announced the existence of VPNFilter – now dubbed the “Swiss Army knife” of malware. At that time, VPNFilter was impressive in its design. And it had already infected hundreds of thousands of home routers. Since the announcement, Talos has continued to study the malware. Last week, Talos released its “final” report on VPNFilter. In that report, Talos highlighted that the VPNFilter scope is far larger than first reported.

“Improved” VPNFilter Capabilities

In addition to the first stage of the malware, the threat actors included the following “plugins”:

  • ‘htpx’ – a module that redirects and inspects the contents of unencrypted Web traffic passing through compromised devices.
  • ‘ndbr’ – a multifunctional secure shell (SSH) utility that allows remote access to the device. It can act as an SSH client or server and transfer files using the SCP protocol. A “dropbear” command turns the device into an SSH server. The module can also run the nmap network port scanning utility.
  • ‘nm’ – a network mapping module used to perform reconnaissance from the compromised devices. It performs a port scan and then uses the Mikrotik Network Discovery Protocol to search for other Mikrotik devices that could be compromised.
  • ‘netfilter’ – a firewall management utility that can be used to block sets of network addresses.
  • ‘portforwarding’ – a module that allows network traffic from the device to be redirected to a network specified by the attacker.
  • ‘socks5proxy’ – a module that turns the compromised device into a SOCKS5 virtual private network proxy server, allowing the attacker to use it as a front for network activity. It uses no authentication and is hardcoded to listen on TCP port 5380. There were several bugs in the implementation of this module.
  • ‘tcpvpn’ – a module that allows the attacker to create a Reverse-TCP VPN on compromised devices, connecting them back to the attacker over a virtual private network for export of data and remote command and control.

Disaster Averted?

Fortunately, the impact of VPNFilter was blunted by the Federal Bureau of Investigation (FBI). The FBI recommended that every home user reboot their router. The FBI hoped that this would slow down infection and exploitation. It did. But it did not eliminate the threat.

In order to be reasonably safe, you must also ensure that you are on a version of router firmware that protects against VPNFilter. While many people heeded this advice, many did not. Consequently, there are thousands of routers that remain compromised. And threat actors are now using these springboards to compromise all sorts of devices within the home. This includes hubs, switches, servers, video players, lights, sensors, cameras, etc.

Long-Term Implications

Given the ubiquity of devices within the home, the need for ubiquitous (and standardized) software update mechanisms is escalating. You should absolutely protect your router as the first line of defense. But you also need to routinely update every type of device in your home.

Bottom Line
  1. Update your router! And update it whenever there are new security patches. Period.
  2. Only buy devices that have automatic updating capabilities. The only exception to this rule should be if/when you are an accomplished technician and you have established a plan for performing the updates manually.
  3. Schedule periodic audits of device firmware. Years ago, I did annual battery maintenance on smoke detectors. Today, I check every device at least once a month. 
  4. Retain software backups so that you can “roll back” updates if they fail. Again, this is a good reason to spend additional money on devices that support backup/restore capabilities. The very last thing you want is a black box that you cannot control.

As the VPNFilter scope and capabilities have expanded, the importance of remediation has also increased. Don’t wait. Don’t be the slowest antelope on the savanna.

Social Media Schisms Erupt

A funny thing happened on the way to the Internet: social media schisms are once again starting to emerge. When I first used the Internet, there was no such thing as “social  media”. If you were a defense contractor, a researcher at a university, or part of the telecommunications industry, then you might have been invited to participate in the early versions of the Internet. Since then, we have all seen early email systems give way to bulletin boards, Usenet newsgroups, and early commercial offerings (like CompuServe, Prodigy, and AOL). These systems  then gave way to web servers in the mid-nineties.  And by the late nineties, web-based interactions began to flourish – and predominate.

History Repeats Itself

Fifteen years ago, people began to switch from AOL to services like MySpace. And a few years later, services like Twitter began to emerge. At the same time, Facebook nudged its way from a collegiate dating site to a full-fledged friendship engine and social media platform. With each new turn of the wheel of innovation, the old has been vanquished by the “new and shiny” stuff. It has always taken a lot of time for everyone to hop onto the new and shiny from the old and rusty. But each iteration brought something special.

And so the current social media title holders are entrenched. And the problem with their interaction model has been revealed. In the case of Facebook and Twitter, their centralized model may very well be their downfall. By having one central system, there is only one drawbridge for vandals to breach. And while there are walls that ostensibly protect you, there is also a royal guard that watches everything that you do while within the walls. Indeed, the castle/fortress model is a tempting target for enemies (and “friends”) to exploit.

Facebook (and Twitter) Are Overdue

The real question that we must all face is not whether Facebook and Twitter will be replaced, but when it will happen. As frustration has grown with these insecure and exposed platforms, many people are looking for an altogether new collaboration model. And since centralized systems are failing us, many are looking at decentralized systems.

A few such tools have begun to emerge. Over the past few years, tools like Slack have started to replace the team/corporate systems of a decade ago (e.g., Atlassian Jira and Confluence). For some, Slack is now their primary collaboration engine. And for the developers and gamers among us, tools like Discord are gaining prominence – and membership.

Social Media Schisms Are Personal

But what of Twitter and what of Facebook?  Like many, I’ve tried to live in these walled gardens. I’ve already switched to secure clients. I’ve used containers and proxies to access these tools. And I have kept ahead of the wave of insecurity – so far. But the cost (and risk) is starting to become too great. Last week, Facebook revealed that it had been breached – again. And with that last revelation, I decided to take a Facebook break.

My current break will be at least two weeks. But it will possibly be forever. That is because the cost and risk of these centralized systems is becoming higher than the convenience that these services provide.  I suspect that many of you may find yourselves in the same position.

Of course, a break does not necessarily mean withdrawal from all social media. In fairness, these platforms do provide value. But the social media schisms have to end. I can’t tolerate the politics of some of my friends. Yet they remain my friends (and my family) despite the political differences that we may have. So I want a way of engaging in vigorous debate with some folks while maintaining collegiality and a pacific mindset while dealing with others.

So I’m moving on to a decentralized model. I’ve started a Slack community for my family. My adult kids are having difficulty engaging in even one more platform. But I’m hopeful that they will start to engage. And I’ve just set up a Mastodon account (@cyclingroo@mastodon.cloud) as a Twitter “alternative”. And I’m becoming even more active in Discord (for things like the Home Assistant community).

All of these tools are challengers to Facebook/Twitter. And their interaction model is decentralized. So they are innately more secure (and less of a targeted threat). The biggest trouble with these systems is establishing and maintaining an inter-linked directory.

A Case for Public Meta-directories

In a strange way, I am back to where I was twenty years ago. In the late nineties, my employer had many email systems and many directories. So we built a directory of directories. Our first efforts were email-based hub-and-spoke directories based upon X.500. And then we moved to Zoomit’s Via product (which was later acquired by Microsoft). [Note: After the purchase, Microsoft starved the product until no one wanted its outdated technologies.] These tools served one key purpose: they provided a means of linking all directories together.

Today, this is all done through import tools that any user can employ to build personalized contact lists. But as more people move to more and different platforms, the need for a distributed meta-directory has been revealed. We really do need a public white pages model for all users on any platform.

Bottom Line

The value of a directory of directories (i.e., a meta-directory) still exists. And when we move from centralized to decentralized social media systems, the imperative of such directory services becomes even more apparent. At this time, early adopters should already be using tools like Slack, Discord, and even Mastodon. But until interoperability technologies (like meta-directories) become more ubiquitous, either you will have to deal with the hassle of building your own directory or you will have to accept the insecurity inherent in a centralized system.

Household Certificates: The New Economic Reality

Household Certificates Everywhere
Certificate Market Share

How many of you remember your first economics class? For most, it was a macroeconomics survey course that met a behavioral and social sciences requirement. But whether you took an econ class, became an econ major, or you are just a participating member of the economy, you have likely heard about the “law” of supply and demand. [Actually, there is no “law”, per se. But there are real outcomes that are necessary results of the actions that we take.] In a market where resources are limited, increased demand for a good (or service) will almost always result in increased prices. At the same time, an increased supply of that good (or service) will drive the price lower.  And when that price declines, the demand for that good (or service) will probably increase. Simple, right? The same thing is true for the car market, the computer market, and the market for household certificates (i.e., secure services in the home).

The Security Market

Most people have not yet implemented household certificates (or other security mechanisms) because the “cost” was way too high. Historically, the exorbitant cost for a good home security system meant that only those with disposable income could afford these devices (and services). Some people bypassed the initial outlay by building it into the price of a new home. That way, the costs could be distributed over fifteen (or thirty) years. But either way, the number of willing customers remained small.

The same reality is true for digital security and household certificates. You might have heard about two-factor authentication. But you may not have the skills – nor do you have the money – to implement a digitally secure household. So you left those kinds of security steps for others to implement. Basically, you want digital security, but you can’t afford to install or support it.

Household Certificates: Mandatory…and Cheap

The times are changing. As any technology is introduced, early adopters pay excessive amounts of money to have a tool that is cool. If this weren’t the case, then how could anyone justify a $1,200 iPhone? Yes, the iPhone is cool. But you can get something similar for $800-$900. And if you bypass just a couple of features, you can get a good phone for between $300 and $500. [This is exactly what Dell did when it disrupted the desktop computer market that was previously owned by Apple and IBM.]

In security circles, the cost of security certificates (and the learning curve associated with their use) has meant that corporations would be the only users of this kind of technology. But just as the iPhone spurred cheaper competitors, the Internet security industry is also beginning to get its price disruption. You no longer have to go to the “big players” to install household hubs. You can build them yourself. And you don’t have to get certificates from the same places as the big corporations: you can get workable certificates for free from Let’s Encrypt.

You may be asking yourself why you would need security certificates. And if you don’t have any services running at home, then you may not need certificates. But if you have a Plex server, or if you use home automation, or if you have mainstream home security tools (from folks like SimpliSafe, or August, or Blink, or Netgear), then you really do need household certificates.

Why are household certificates important? Because when you connect to services at home, you will want to make sure that it is your home services that are responding to you. Without certificates, there is a real risk that someone will step in between you and your household services. Hackers do this so they can impersonate your servers – and collect valuable data directly from you.  [In security parlance, this is called a man-in-the-middle attack.] By having household certificates, your systems can present secure ‘credentials’ to ensure that the server is who it reports itself to be.
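
If you want to see what this looks like in practice, openssl will show you exactly which certificate a service presents and who issued it. A quick sketch; plex.example.com is a placeholder for whatever hostname your own service answers on:

         # Show the certificate a local service presents, and who issued it
         openssl s_client -connect plex.example.com:443 -servername plex.example.com </dev/null 2>/dev/null \
           | openssl x509 -noout -subject -issuer -dates

If the subject and issuer are not what you expect (or your browser warns about them), something may be sitting between you and your own server.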

Secure Authentication

Similarly, you may want to ensure that anyone trying to log into your household must present a trusted token to access the treasures inside your house. [Think of this as the digital equivalent of a front door key.]  This can be done with strong passwords. But it can also be done with digital certificates. And almost every implementation of two-factor authentication uses encryption (and certificates) to validate a user’s identity. Without certificates, the only thing that lies between your treasures and digital assailants is your password.  [Let’s hope that your password is both strong and totally unique.]

And with Google’s recent announcement that they will be producing security tokens (i.e., the Google Titan key), the authentication market is finally being commoditized. Prices will no longer be set by only one or two vendors (like RSA or Yubico). And I am sure that other vendors will take advantage of the reduced costs that will be a necessary result of increased key production (needed to meet the Google demand).

Let’s Encrypt: Supply-side Answers

According to Wikipedia, “The Let’s Encrypt project was started in 2012 by two Mozilla employees, Josh Aas and Eric Rescorla, together with Peter Eckersley at the Electronic Frontier Foundation and J. Alex Halderman at the University of Michigan.” The first public product launch was on April 12, 2016. At the time of launch, Let’s Encrypt entered a market that was dominated by Symantec, GoDaddy, and Comodo.

The Let’s Encrypt price point is simple: zero cost certificates. The catch is that these certificates are only good for three months. But with a little scripting (and a few tools from the EFF), the certificate refresh process is almost effortless. And Let’s Encrypt is being built into most household management systems. So with no production costs and with decreasing skill requirements, household certificates are becoming impossible to ignore.
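
For the technically inclined, the ‘little scripting’ really is little. A sketch using the EFF’s certbot client follows; it assumes your household services answer on a real, publicly resolvable DNS name (Let’s Encrypt will not issue certificates for purely private names), and home.example.com is a placeholder for that name.

         # Obtain the initial certificate (standalone mode briefly binds port 80)
         sudo certbot certonly --standalone -d home.example.com

         # Renew anything within 30 days of expiry; certbot skips the rest
         sudo certbot renew --quiet

         # A scheduled job (e.g., a twice-daily cron entry) keeps the
         # 90-day certificates fresh:
         #   17 3,15 * * * root certbot renew --quiet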

Bottom Line

If you have a little technical know-how, then now is the time to start using Let’s Encrypt on your household servers. And if you aren’t technically savvy, then expect the hardware and software providers to start bundling this security technology into their products. For them, the cost is limited. And adding real security features can only improve customer satisfaction – if it is completely friction-less.

Alexa Dominance: Who Can Compete?

Alexa Dominance
Amazon Echo devices now have a foothold in most American homes.

Voice control is the ‘holy grail’ of UI interaction. You need only look at old movies and television to see that voice is indeed king. [For example, the Robinson family used voice commands to control their robot. And Heywood Floyd used voice as his means of teaching and communicating with HAL.] Today, there are many voice assistants available on the market. These include: Amazon Alexa, Apple Siri, Google Assistant (aka Google Home), Microsoft Cortana, Nuance Nina, Samsung Bixby, and even the Voxagent Silvia.  But the real leaders are only now starting to emerge from this crowded market. And as of this moment, Alexa dominance in third-party voice integration is apparent.

Apple Creates The Market

Apple was the first out-of-the-gate with the Apple Siri assistant. Siri first arrived on the iPhone and later on the iPad. But since its introduction, it is now available as part of the entire Apple i-cosystem. If you are an Apple enthusiast, Siri is on your wrist (with the watch). Siri is on your computer. And Siri is on your HomePod speaker. It is even on your earbuds. And in the past six months, we are finally starting to see some third-party integration with Siri.

Amazon Seizes The Market

Amazon used an entirely different approach to entrench its voice assistant. Rather than launch the service across all Amazon-branded products, Amazon chose to first launch a voice assistant inside a speaker. This was a clever strategy. With a fairly small investment, you could have an assistant in the room with you. Wherever you spent time, your assistant would probably be close enough for routine interactions.

This strategy did not rely upon your phone always being in your pocket.  Unlike Apple, the table stakes for getting a voice assistant were relatively trivial. And more importantly, your investment was not limited to one and only one ecosystem.  When the Echo Dot was released at a trivial price point (including heavy discounts), Alexa started showing up everywhere. 

From the very outset, an Amazon voice assistant investment required funds for a simple speaker (and not an expensive smartphone). You could put the speaker in a room with a Samsung TV. Or you could set it in your kitchen. So as you listened to music (while cooking), you could add items to your next shopping list.  And you could set the timers for all of your cooking.  In short, you had a hands-free method of augmenting routine tasks.   In fact, it was this integration between normal household chores coupled with the lower entry price that helped to spur consumer purchases of the Amazon Echo (and Echo Dot).

A second key feature of Amazon’s success was its open architecture. Alexa dominance was amplified as additional hardware vendors adopted the Alexa ecosystem. And the young Internet-of-Things (IoT) marketplace adopted Alexa as its first integration platform. Yes, many companies also provided Siri and Google Assistant integration. But Alexa was their first ‘target’ platform.

The reason for Alexa integration was (and is) simple: most vendors sell their products through Amazon. So vendors gained synergies with their main supplier. Unlike the Apple model, you didn’t have to go to a brick and mortar store (whether it be the Apple Store, the carriers’ stores, or even BestBuy/Target/Walmart).  Nor did a vendor need to use another company’s supply chain. Instead, they could bundle the whole experience through an established sales/supply channel.

Google Arrives Late To The Party

While Apple and Amazon sparred with one another, Google jumped into the market. They doubled down on ‘openness’ and interoperability. And at this moment, the general consensus is that the Google offering is the most open. But to date, they have not gained traction because their entry price was much higher than Amazon’s. We find this to be tremendously interesting. Google got the low-price part down when they offered a $20-$30 video streamer.

But with the broader household assistant, Google focused first upon the phone (choosing to fight with Apple) rather than a hands-free device that everyone could use throughout the house. And rather than follow the pricing model that they adopted with the Chromecast, Google chose to offer a more capable (and more expensive) speaker product. So while they used one part of the Amazon formula (i.e., interoperability), they avoided the price-sensitive part of the formula.

Furthermore, Google could not offer synergies with the supply chain. Consequently, Google still remains a third-place contender. For them to leap back into a more prominent position, they will either have to beat ‘all-comers’ on price or they will have to offer something really innovative that the other vendors haven’t yet delivered.

Alexa Dominance

Amazon dominance in third-party voice integration is apparent. Not only can you use Alexa on your Amazon ‘speakers’, you can use it on third-party speakers (like Sonos). You can launch actions on your phone and on your computer. And these days, you can use it with your thermostat, your light bulbs, your power sockets, your garage door, your blinds, and even your oven. In my case, I just finished integrating Alexa with Hue lights and with an ecobee thermostat.

Bottom Line

Market dominance is very fleeting. I remember when IBM was the dominant technology provider. After IBM, Microsoft dominated the computer market. At that time, companies like IBM, HP, and Sun dominated the server market. And dominance in the software market is just as fleeting. Without continually focusing on new and emerging trends, leadership can devolve back into a competitive melee, followed by the obsolescence of the leader. Indeed, this has been the rule as dominant players have struggled to maintain existing revenue streams while trying to remain innovative.

Apple is approaching the same point of transition. Their dominance of the phone market is slowly coming to an end. Unless they can pivot to something truly innovative, they may suffer the same fate as IBM, Sun, HP, Dell, Microsoft, and a host of others.

Google may be facing the same fate – though this is far less certain. Since Google’s main source of revenue is ‘search-related’ advertising, they may see some sniping around the edges (e.g., Bing, DuckDuckGo, etc.). But there is no serious challenge to their core business – at this time.

And Amazon is in a similar position: their core revenue is the supply chain ‘tax’ that they impose upon retail sales. So they may not see the same impact on their voice-related offerings. But they dare not rest upon their laurels. In candor, the Amazon position is far more appealing than the Google position. The Amazon model relies upon other companies building products that Amazon can sell. So interoperability will always be a part of any product that Amazon brands – including voice assistants. 

Only time will sort out the winners and losers. And I daresay that there is room enough for multiple ‘winners’ in this space. But for me, I am now making all of my personal and business investments based upon the continued dominance of Alexa.