I spent several hours on Reddit this weekend discussing Mozilla and its future. After finally de-googling my mobile life, I have been confronted by one simple truth: the organization that provides my default browser is now a larger threat to my safety than most of the other threat actors I face. Why is that? That’s simple. The new Google and Mozilla browser deal ensures that private data – my private data and your private data – will be collected by Mozilla and then delivered to Google.
Does This Make Any Sense?
From Google’s viewpoint, this makes perfect sense. They can make sure that their search engine is still the default search engine for a large number of Firefox users. Most folks don’t really understand search engines. And if you don’t understand something, you usually don’t try to change it. All of us know the adage that if it isn’t broken, then you shouldn’t fix it. For this reason, most people don’t touch the browser that they use. This deal extension plays right into Newton’s First Law: the law of inertia. This extension enshrines a docile Mozilla. It ensures that they will act as the “dutiful competitor” whenever state and federal regulators get too inquisitive.
From Mozilla’s viewpoint, this also makes sense. The Firefox market share is shrinking – and has been shrinking for years. And Mozilla just laid off hundreds of employees. Obviously, they are not going to innovate their way out of their death spiral. Some think that this deal simply provides sufficient financial cushion for the leaders of the Mozilla Foundation to land on their feet.
From A Personal Vantage Point
When you have a list and you move things off of that list, new things surface as “the most important” thing that you must address. So as I’ve reduced my risks from Google, something else had to take Google’s place. And at this moment, it is the browser. More specifically, it is Mozilla’s browser.
Firefox for Android has four (4) embedded trackers. These include: Adjust, Google AdMob, Google Firebase Analytics, and LeanPlum. Half of these trackers report data directly to Google. So after recently breaking the chain that kept me in Google’s sway, I am now left with someone else taking the very same data treasure and “gifting” it right back to Google. Given their financial peril, I truly doubt that the Mozilla Foundation will be convinced to remove these trackers on my behalf.
A Historical Alliance
All of this makes some historical sense. When the Mozilla Foundation first started, they were fighting Microsoft. Today, I am sure that many of the people at Mozilla still see Google as “the enemy of my enemy” and not just “the ‘new’ enemy”.
But the times have changed, right? Microsoft was cowed. And Google rose triumphantly – as did the Mozilla Foundation. Nevertheless, one thing remains the same. There is a ravenous competitor prowling the field. And the consumers that were threatened before are threatened once again. But this time, it is Google that needs to be cowed.
What’s A Geek To Do?
My situation is simple: my most important mobile app is my browser. And this product is now siphoning private information into Google’s coffers. I can’t tolerate this. So I’ve been struggling with this all weekend. What can I do?
I could continue to use the Fennec variant available on F-Droid. This will work. But this product has reached end-of-life (EOL). And so there will be no new versions. So while I can keep on using this product, I am living on borrowed time.
I could change my browser. There are some very good browsers that meet some very specific needs. I could use Chromium – or any one of a number of derivative works. But it is very difficult to cross this bridge. After all, Chromium is the basis for all of Google’s proprietary browser investments.
I could also use any of a bunch of browsers that are descended from Firefox. IceCat is one such descendant. It is a good browser that is built upon Gecko. And it is actively being maintained. They are trying to keep up with Firefox. But their next update probably won’t happen until the Mozilla folks lay down their next “extended support release” (or ESR). Consequently, IceCat is intentionally behind the times, since the last ESR release is itself quite dated.
I could use another browser that is not part of either legacy. But to be fair, there are very few new options that fall into this category.
I could switch and use the “new” Firefox for Android. This one stings. I am emotionally hurt by the gyrations that Mozilla is inflicting upon their users. Nevertheless, their new version is a very good browser – albeit with several Google trackers. Fortunately, I can neutralize those trackers. By using Pi-hole, I can ensure that connections made to named Google services will not be properly resolved. In this way, I can have Firefox and still block Google – at least until Mozilla defeats this DNS-oriented defense.
Bottom Line
So what will I do? For now, I’m switching from Fennec F-Droid to Firefox for Android. And I’ve reviewed all of the adlists included on my Pi-hole. For now, I can use Mozilla Firefox while still intercepting any private data being fed to Google.
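For what it’s worth, the Pi-hole side of this takes only a handful of deny-list entries. The domains below are my best guesses at the endpoints used by the embedded trackers named earlier (Adjust, Firebase Analytics, LeanPlum, AdMob) – check them against your own Pi-hole query log before relying on them:

```text
# Hosts-format deny-list entries for a Pi-hole custom list (illustrative)
0.0.0.0 app-measurement.com
0.0.0.0 firebase-settings.crashlytics.com
0.0.0.0 app.adjust.com
0.0.0.0 api.leanplum.com
0.0.0.0 googleads.g.doubleclick.net
```

Individual domains can also be added one at a time from the Pi-hole console or with the `pihole -b <domain>` command.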
Is the Mozilla browser deal good for me? It absolutely is not. Is the deal good for the industry? It probably isn’t. Will I make a temporary compromise until a better solution emerges? Yes, I will make that compromise. But I am altogether unhappy living in this compromised state.
Continuous Improvement is nothing new. In the early nineties, total quality management (TQM) was all the rage. And even then, TQM was a re-visitation of techniques applied in preceding decades. Today, continuous improvement is embraced in nearly every development methodology. But whether from the “fifties” or the “twenties”, the message is still the same: any measurable improvement (whether in processes or in technologies) is the result of a systematic approach. This is true for software development. And it is true for continuous privacy improvements.
Privacy Is Threatened
With every wave of technology change, there have been concurrent improvements in determining what customers desire – and what they will “spend” in order to obtain something. At the same time, customers have become increasingly frustrated with corporate attempts to “anticipate” their “investment” habits. For example, the deployment of GPS and location technologies has allowed sellers to “reach” potential customers whenever those customers are physically near the point of sale. In short, when you get to the Magnificent Mile in Chicago, you’ll probably get ads for stores that are in your vicinity.
While some people find this exhilarating, many people find it frustrating. And some see these kinds of capabilities as demonstrative of a darker capability: the ability for those with capability to monitor and manage the larger populace. For some, the “sinister” people spying on them are corporations. For many, the “malevolent” forces that they fear are shadowy “hackers” that can steal (or have already stolen) both property and identity. And for a very small group of people, the powers that they fear most are governments and / or similar authorities. For everyone, the capability to monitor and influence behavior is real.
Surveillance And Exploitation Are Not New
Governments have always tried to “watch” their citizens – whether to protect them from threats or to “manage” them into predetermined behaviors. And in every society, there have always been areas of our lives that we wish to keep private. Balanced against those desires are the desires of other people. So with every generation (and now with every technology change), the dance of “personal privacy” and “group management” is renewed.
As the technology used for surveillance has matured, the tools for ensuring privacy have also changed. And the methods for ensuring privacy today have drastically changed from the tools used even a few years ago. And if history is a good predictor of the future, then we can and should expect that we must continually sharpen our tools for privacy – even as our “adversaries” are sharpening their tools of surveillance. Bottom Line: The process of maintaining our privacy is subject to continuous threat and must be handled in a model akin to continuous process improvement. So let’s start accepting the need for continuous privacy improvement.
Tackling Your Adversaries – One At A Time
If you look at the state of surveillance, you probably are fatigued by the constant fight to maintain your privacy. I know that I am perpetually fatigued. Every time that you harden your defenses, new threats emerge. And the process of determining your threats and your risks seems to be never-ending. And in truth, it really is never-ending. So how do you tackle such a problem? I do it systematically.
As an academic (and lifetime) debater – as well as a trained enterprise architect – I continually assess the current state. That assessment involves the following activities:
Specify what the situation is at the present moment.
Assess the upsides and downsides of the current situation.
Identify those things that are the root causes of the current situation.
Outline what kind of future state (or target state) would be preferable.
Determine the “gaps” between the current and future states.
Develop a plan to address those gaps (and their underlying problems).
And there are many ways to build plans. Some folks love the total replacement model. And while this is feasible for some projects, it is rarely practical for our personal lives. [Note: There are times when threats do require a total transformation. But they are the exception and not the general rule.] Since privacy is such a fundamental part of our lives, we must recognize that changes to our privacy posture must be made incrementally – and continuously. Consequently, we must understand the big picture and then attack in small and continuous ways. In military terms, you want to avoid multi-front campaigns at all costs. Both Napoleon and Hitler ignored that advice. And they lost accordingly.
My Current State – And My Problems
I embarked on my journey towards intentional privacy a few years ago. I’ve given dozens of talks about privacy and security to both IT teams and to personal acquaintances. And I’ve made it a point to chronicle my personal travails along my path to a more private life. But in order to improve, I needed to assess what I’ve done – and what remains to be done.
So here goes…
Over the past two years, I’ve switched my primary email provider. I’ve changed my search providers and my browsers – multiple times. And I’ve even switched from Windows to Linux. But my transformation has always been one step away from its completion.
The Next (to Last) Step: De-googling
This year, I decided to address the elephant in the room: I decided to take a radical step towards removing Google from my life. I’ve been using Google products for almost half of my professional life. Even though I knew that Google was one of the largest threat actors in my ecosystem, I still held on to a Google lifeline. Specifically, I was still using a phone based upon Google’s ecosystem. [Note: I did not say Android, because Android is a Linux-based phone platform that Google bought and transformed into a vehicle for data collection and advertising delivery.]
I had retained my Google foothold because I had some key investments that I was unwilling to relinquish. The first of these was a Google Voice number that had been at the heart of my personal life (and my business identity). That number was coupled with my personal Google email identity. It was the anchor of hundreds of accounts. And it was in the address books of hundreds of friends, relatives, colleagues, customers, and potential customers.
Nevertheless, the advantages of keeping a personal Google account were finally outweighed by my firm realization that Google wasn’t giving me an account for free; Google was “giving” me an account to optimize their advertising delivery. Or stated differently, I was willing to sell unfettered access to myself as long as I didn’t mind relinquishing any right to privacy. And after over fifteen years with the same account, I was finally ready to reclaim my right to privacy.
Too Many Options Can Lead To Inaction
I had already taken some steps to eliminate much of the Google stranglehold on my identity. But they still had the linchpins:
I still had a personal Google account, and
Google had unfettered access to my mobile computing platform.
So I had to break the connection from myself to my phone. I carefully considered the options that were available to me.
I could switch to an iPhone. Without getting too detailed, I rejected this option as it was simply trading one master for another one. Yes, I had reason to believe that Apple was “less” invasive than Google. But Google was “less” invasive at one point in time. So I rejected trading one for another.
I could install a different version of Android on my current phone. While I have done this in the past, I was not able to do this with my current phone. I had bought a Samsung Galaxy S8+ three years ago. And when I left Sprint for the second time (due to the impending merger), I kept the phone. But this phone was the US variant, based upon the Qualcomm Snapdragon 835. Consequently, the phone had a locked bootloader. And Samsung has never relented and unlocked the bootloaders on its US Snapdragon models. So I cannot flash a new ROM (like LineageOS) onto this phone.
I could install a different version of Android on a new phone. This option had some merit – at the cost of purchasing new phone hardware. I could certainly buy a new (or used) phone that would support GrapheneOS or LineageOS. But during these austere times (when consulting contracts are sparse), I will not relinquish any coin of the realm to buy back my privacy. And buying a Pixel sounds more like paying a ransomware demand than buying something of value.
I could take what I had and live with it. Yes, this is the default option. And while I dithered over comparisons, this WAS what I did for over a year. After all, it fell into the adage that if it isn’t broken, then why fix it? But such defaults never last – at least, not for me.
I could use the current phone and take the incremental step that remains available on a locked bootloader: I could eliminate the Google bits by removing the Google account and by uninstalling (and/or disabling) Google, Samsung, and T-Mobile apps using the Android Debug Bridge (a.k.a. adb).
I had previously decided to de-google my phone before my birthday (in July). So once Independence Day came and went, I got serious about de-googling my phone.
The Road Less Taken
Of all of the options available to me, I landed on the one that cost the least amount of my money but required the most investment of my personal time. So I researched many different lists of Google apps (and frameworks) on the Samsung Galaxy S8+. I first disabled the apps that I had identified. Then I used a tool available on the Google Play Store called Package Disabler Pro. I have used this before. So I used it again to identify those apps that I could readily disable. By doing this, I could determine the full impact of deleting some of these packages – before I actually deleted them. Once I had developed a good list and had validated that the phone would still operate, I made my first attempt.
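The pass described above can be sketched as a small script. The package names here are illustrative – your list will differ – and the `echo` keeps everything as a dry run; remove it only after you have validated your list:

```shell
#!/bin/sh
# Dry-run sketch of a de-googling pass over adb. Package names are
# illustrative; confirm what is actually installed on your phone with
# `adb shell pm list packages`.
PACKAGES="
com.google.android.gms
com.google.android.gsf
com.google.android.googlequicksearchbox
com.samsung.android.bixby.agent
"

for pkg in $PACKAGES; do
  # `pm uninstall --user 0` removes the app for the current user without root.
  # The echo keeps this as a dry run; remove it to act on a connected phone.
  echo adb shell pm uninstall --user 0 "$pkg"
done

# If something breaks, a stock package can usually be restored with:
#   adb shell cmd package install-existing <package-name>
```

Because `--user 0` only removes the app for the current user, a factory reset (or `install-existing`) will bring everything back – which is exactly the safety net you want on a phone you cannot re-flash.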
And as expected, I ran into a few problems. Some of them were unexpected. But most of them were totally expected. Specifically, Google embeds some very good technology in the Google Play Services (gms) and Google Services Framework (gsf). And when you disable / delete these tools, a lot of apps just won’t work completely. This is especially true with notifications.
I also found out that there were some key multimedia messaging services (MMS) capabilities that I was using without realizing it. So when I deleted these MMS tools, I had trouble with some of my routine multi-recipient messages. I solved this by simply re-installing those pieces of software. [Note: If that had not worked, then I was ready to re-flash to a baseline T-Mobile ROM. So I had multiple fallback plans. Fortunately, the re-installation solved the biggest problem.]
Bottom Line
After planning for the eventual elimination of my Google dependence, I finally took the necessary last step towards a more private life; I successfully de-googled my phone – and my personal life. Do I still have some interaction with Google? Of course I do. But those interactions are far less substantial, far more manageable, and far more private. At the same time, I have eliminated a large number of Samsung and T-Mobile tracking tools. So my continuous privacy improvement process (i.e., my intentional privacy improvements) has resulted in a more desirable collaboration between myself and my technology partners.
Over the past two quarters, we’ve focused upon the technologies and practices that help to establish (and maintain) an effective privacy posture. We’ve recommended ceasing almost all personal activity on social media. But the work of ensuring personal privacy cannot end there. Our adversaries are numerous – and they counter every defensive action that we take with increasingly devastating offensive tools and techniques. While the tools of data capture are proliferating, so are the tools for data analysis. Using open source intelligence (OSINT) tools, it is possible to transform vast piles of data into meaningful and actionable chunks of information. For this reason, our company has extended its security and privacy focus to include the understanding and the use of OSINT techniques.
Start At the Beginning
For countless generations, a partner was someone that you knew. You met them. You could shake their hand. You could see their smiling face. You knew what they could do. And you probably even knew how they did it. In short, you could develop a trust-based relationship that would be founded upon mutual knowledge and relative proximity. It is no coincidence that our spouses are also known as our ‘partners‘ as we can be honest and forthcoming about our goals and desires with them. We can equitably (and even happily) share the burdens that will help us to achieve our shared goals.
But that kind of relationship is no longer the norm in modern business. Most of our partners (and providers) work with us from behind a phone or within a computer screen. We may know their work product. But we have about as much of a relationship with them as we do with those civil servants who work at the DMV.
So how can we know if we should trust an unknown partner?
A good privacy policy is an essential starting point in any relationship. But before we partner with anyone, we should know exactly how they will use any data that we share with them. So our first rule is simple: before sharing anything, we must ensure the existence of (and adherence to) a good privacy policy. No policy? No partnership. Simple, huh?
That sounds all well and good. But do you realize just how much data you share without your knowledge or explicit consent? If you want to really know the truth, read the end user license agreements (EULAs) from your providers. What you will usually find is a blanket authorization for them to use any and all data that is provided to them. This certainly includes names, physical addresses, email addresses, birth dates, mothers’ maiden names, and a variety of other data points. If you don’t believe me (or you don’t read the EULA documents which you probably click past), then just use a search engine and enter your name in the search window. There will probably be hundreds of records that pertain to you.
But if you really want to open your eyes, just dig a little deeper to find that every government document pertaining to you is a public record. And all public records are publicly indexed. So every time that you pass a toll and use your electronic pass, your location (and velocity) data is collected. And every credit card transaction that you make is logged.
Know the difference between a partner and a provider!
A partner is someone that you trust. A provider is someone that provides something to/for you. Too often, we treat providers as if they were partners. If you don’t believe that, then answer this simple question: Is Facebook a partner in your online universe? Or are they just someone who seeks to use you for their click bait (and revenue)?
A partner is also someone that you know. If you don’t know them, they are not a partner. If you don’t implicitly trust them, then why are you sharing so much of your life with them?
Investigate And Evaluate Every Potential Partner!
If you really need a partner to work with and you don’t already trust someone to do the work, then how do you determine whether someone is worth trusting? I would tell you to use the words of former President Ronald Reagan as a guide: trust but verify. And how do you verify a potential partner? You learn about them. You investigate them. You speak with people that know them. In short, you let their past actions be a guide to how they will make future decisions. And for the casual investigation, you should probably start using OSINT techniques to assess your partner candidates.
What are OSINT techniques?
According to the SecurityTrails blog, “Open source intelligence (OSINT) is information collected from public sources such as those available on the Internet, although the term isn’t strictly limited to the internet, but rather means all publicly available sources.” The key is that OSINT is comprised of readily available intelligence data. So sites like SecurityTrails and Michael Bazzell’s IntelTechniques are fantastic sources for tools and techniques that can collect immense volumes of OSINT data and then reduce it into usable information.
So what is the cost of entry?
OSINT techniques can be used with little to no cost. As a security researcher, you need a reasonable laptop (with sufficient memory) in order to use tools like Maltego. And most of the OSINT tools can run either on Kali Linux or on Buscador. While some sources of data are free, some of the best sources do require an active subscription to access their data. But the software is almost always open source (and hence readily available). So for a few hundred dollars, you can start doing some pretty sophisticated OSINT investigations.
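To make “readily available” concrete, here is a toy OSINT-style pass using nothing but standard Unix tools. The sample page is canned so that the pipeline is reproducible; in a real investigation the input would come from `curl`, a crawler, or one of the tools named above:

```shell
#!/bin/sh
# Toy OSINT pass: pull email addresses and hostnames out of a saved web page.
cat > /tmp/sample_page.html <<'EOF'
<p>Contact: jane.doe@example.com or ops@mail.example.org</p>
<a href="https://intranet.example.com/wiki">staff wiki</a>
EOF

# Extract anything shaped like an email address, de-duplicated.
grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' /tmp/sample_page.html | sort -u

# Extract hostnames from links (interesting subdomains often leak this way).
grep -Eo 'https?://[A-Za-z0-9.-]+' /tmp/sample_page.html | sed -E 's|https?://||' | sort -u
```

Two `grep` invocations are obviously not Maltego. But the principle is identical: public pages leak identifiers, and identifiers chain together into a profile.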
Protection Against OSINT Investigations
OSINT techniques are amazing – when you use them to conduct an investigation. But they can be positively terrifying when you are the subject of such an investigation. So how can you limit your exposure from potential OSINT investigations?
One of the simplest steps that you can take is to use an operating system designed to protect your privacy. As noted previously, we recommend the use of Linux as a foundation. Further, we recommend using Qubes OS for most of your public ‘surfing’ needs. [We also recommend TAILS on a USB key whenever you are using communal computers.]
Using OSINT To Determine Your Personal Risk
While you can minimize your future exposure to investigations, you first need to determine just how long of a shadow you currently cast. The best means of assessing that shadow is to use OSINT tools and techniques to assess yourself. A simple Google search told me a lot about my career. Of course, much of this was easily culled from LinkedIn. But it was nice to see that a simple name search highlighted important (and positive) things that I’ve accomplished.
And then I started to use Maltego to find out about myself. I won’t go into too much detail. But the information that I could easily unearth was altogether startling. For example, I easily found out about past property holdings – and past legal entanglements related to a family member. There was nothing too scandalous in my recorded past. But the discouraging part was that I was able to find all of these things with little or no effort.
I had hoped that discovering this stuff would be like the efforts which my wife took to unearth our ancestral heritage: difficult and time-consuming. But it wasn’t. I’m sure that it would take some serious digging to find anything that is intentionally hidden. But it takes little or no effort to find out some privileged information. And the keys to unlocking these doors are the simple pieces of data that we so easily share.
Clean Up Your Breadcrumbs
Like the little children in the fairy tale, a trail of breadcrumbs can be followed. So if you want to be immune from casual and superficial searches, then you need to take the information that is casually available and start to clean it up. With each catalogued disclosure, you can contact the data source and request that this data be obscured and not disclosed. With enough diligence, it is possible to clean up the info that you’ve casually strewn in your online wake. And if the task seems altogether too daunting, there are companies (and individuals) who will gladly assist you in your efforts to minimize your online footprints.
Bottom Line
As we use the internet, we invariably drop all sorts of breadcrumbs. And these breadcrumbs can be used for many things. On the innocuous end of the scale, vendors can target you with ads that you don’t want to see. But at the other end of the scale is the opportunity to leverage your past in order to redirect your future. It sounds innocuous when stated like that. So let’s call a spade a spade. There is plenty of information out there that can be used to hold your data hostage and to “influence” (i.e., extort) you. But if you use OSINT techniques to your advantage, then you can identify your risks and you can limit your vulnerabilities. And the good news is that it will only cost you a few shekels – while doing nothing could cost you thousands of shekels.
When I started to manage Windows systems, it was important to understand the definition of ‘transitive trust’. For those not familiar with the technical term, here is the ‘classic’ definition:
Transitive trust is a two-way relationship automatically created between parent and child domains in a Microsoft Active Directory forest. When a new domain is created, it shares resources with its parent domain by default, enabling an authenticated user to access resources in both the child and parent.
But this dry definition misses the real point. A transitive trust relationship (of any kind) is a relationship where you trust some ‘third-party’ because someone that you do trust also trusts that same ‘third-party’. This definition is also rather dry. But let’s look at an example. My customers (hopefully) trust me. And if they trust me enough, then they also trust my choices concerning other groups that help me to deliver my services to them. In short, they transitively trust my provider network because they trust me.
Unfortunately, the Amazon AWS technology platform was compromised. So Capital One should legitimately stop trusting Amazon (and its AWS platform). This should remain true until Amazon verifiably addresses the fundamental causes of this disastrous breach. But what should Capital One’s customers do? [Note: I must disclose that I am a Capital One customer. Therefore, I may be one of their disgruntled customers.]
Most people will blame Capital One. Some will blame them for a lack of technical competence. And that is reasonable as Capital One is reaping financial benefits from their customers and from their supplier network. Many other people will blame the hacker(s). It’s hard not to fume when you realize that base individuals are willing to take advantage of you solely for their own benefit. Unfortunately, only a few people will realize that the problem is far more vexing.
Fundamentally, Capital One trusted a third-party to deliver services that are intrinsic to their core business. Specifically, Capital One offered a trust relationship to their customers. And their customers accepted that offer. Then Capital One chose to use an external platform simply to cut corners and/or deliver features that they were unable to deliver on their own. And apparently that third-party was less capable than Capital One assumed.
Regaining Trust
When a friend or colleague breaks your trust, you are wounded. And in addition to this emotional response, you probably take stock of whether to continue that relationship. You undoubtedly perform an internal risk/reward calculation. And then you add the emotional element: would this person act in a more trustworthy fashion in the future? If our relationships with companies were less intimate, then most people would simply jettison their failed provider. But since we build relationships on a more personal footing, most people will want to give their friend (or their friendly neighborhood Bailey Building & Loan) the benefit of the doubt.
So what should Capital One do? First, they must accept responsibility for their error in judgment. Second, they must pay for the damages that they have caused. [Note: Behind the scenes, they must bring the hammer to their supplier.] Third, they must rigorously assess what really led to these problems. And fourth, they must take positive (and irreversible) steps to resolve the root cause of this matter.
Of course, the last piece is the hardest. Oftentimes, the root cause is difficult to sort out given all of the silt that was stirred up in the delta when the hurricane passed through. Some people will blame the Capital One culture. And there is merit to this charge. After all, the company did trust others to protect the assets of their customers. As a bank, the fundamental job is to protect customer assets. And only when that is done should the bank use the entrusted funds to generate a shared profit for its owners (i.e., shareholders) and its customers.
Trust – But Verify
At the height of the Cold War, President Ronald Reagan exhorted the nation to trust – but then to verify the claims of a long-standing adversary. In the case of Capital One, we should do the very same thing. We should trust them to act in their own selfish interests, because the achievement of our interests will be the only way that they can achieve their own interests.
That means that we must be part of a robust and two-way dialog with Capital One and their leadership. Will Capital One be big enough to do this? That’s hard to say. But if they don’t, they will never be able to buy back our trust.
Finally, we have to be bold enough to seek verification. As President Reagan said, “You can’t just say ‘trust me’. Trust must be earned.”
As noted previously, the effort to maintain anonymity while using the Internet is a never-ending struggle. We have been quite diligent about hardening our desktop and laptop systems. This included a browser change, the addition of several browser add-ons, the implementation of a privacy-focused DNS infrastructure, and the routine use of a VPN infrastructure. But while we focused upon the privacy of our static assets, our mobile privacy was still under siege.
Yes, we had done a couple of routine things (e.g., browser changes, add-ons, and use of our new DNS infrastructure). But we had not yet spent any focused time upon improving the mobile privacy of our handheld assets. So we have just finished spending a few days addressing quite a few items. We hope that these efforts will help to assure enhanced mobile privacy.
Our Mobile Privacy Goals
Before outlining the key items that we accomplished, it is important to highlight our key goals:
Start fresh. It would be nearly impossible to retrofit a hardened template onto an existing base – especially if you use a BYOD strategy. That’s because the factory images for most phones are designed to leverage existing tools – most of which exact an enormous price in terms of their privacy concessions.
Decide whether or not you wish to utilize open source tools (that have been reviewed) or trust the vendor of the applications which you will use. Yes, this is the Apple iOS v. Android issue. And it is a real decision. If it were just about cost, you would always choose Android.
Accept the truth that becoming more private (and more anonymous) will require breaking the link to most Google tools. Few of us realize just how much data each and every mobile app collects. And on Android phones, this “tax” is quite high. For Apple phones, the Google “tax” is not as high. But that “good news” is offset by the “bad news” that Apple retains exclusive rights to most of its source code. Yes, the current CEO has promised to be good. [Note: But so did the original Google leaders. And as of today, Google has abandoned its promise to “do no evil”.] But what happens when Mr. Tim Cook leaves?
Act on the truth of the preceding paragraph. That means exchanging Google Apps for apps that are more open and more privacy-focused. If you want to understand just how much risk you are accepting when using a stock Android phone, just install Exodus Privacy and see what your current apps can do. The terrifying truth is that we almost always click the “Allow” button when apps are installed. You must break that habit. And you must evaluate the merits of every permission request. Remember, the power to decide which apps you run is one of the greatest powers that you have. So don’t take it lightly.
Be aware that Google is not the only company that wishes to use you (and your data) to add profits to their bottom line. Facebook does it. Amazon does it. Apple does it. Even Netflix does it. In fact, almost everyone does it. Can you avoid being exploited by unfeeling corporate masters? Sure, if you don’t use the Internet. But since that is unlikely, you should be aware that you are the most important product that most tech companies sell. And you must take steps to minimize your exploitation risk.
If and where possible, we will host services on our own rather than rely upon unscrupulous vendors. Like most executives, I have tremendous respect for our partner providers. But not every company that we work with is a partner. Some are just vendors. And vendors are the ones who will either exploit your data or take no special interest in protecting your data. On the other hand, no one knows your business better than you do. And no one cares about your business as much as you do. So wherever possible, trust your own teams – or your valued (and trusted) partners.
Our Plan of Attack
With these principles in mind, here is our list of what we’ve done since last week:
Update OS software for mobile devices
Factory reset of all mobile devices
SIM PIN
Minimum 16-character device PIN
Browser: Firefox & TOR Browser
Search Providers: DuckDuckGo
Browser Add-ons
Content Blocking
Ads: uBlock Origin
Scripts: uMatrix
Canvas Elements: Canvas Blocker
WebRTC: Disable WebRTC
CDN Usage: Decentraleyes
Cookie Management: Cookie AutoDelete
Isolation / Containers: Firefox Multi-Account Containers
Mobile Applications
Exodus Privacy
Package Disabler Pro
OpenVPN + VPN Provider S/W
Eliminate Google Tools on Mobile Devices
Google Search -> DuckDuckGo or SearX
GMail -> K-9 Mail
GApps -> "Simple" Tools
Android Keyboard -> AnySoftKeyboard
Stock Android Launcher -> Open Launcher
Stock Android Camera -> Open Camera
Stock Android Contacts / Dialer -> True Phone
Google Maps -> Open Street Maps (OSM)
Play Store -> F-Droid + APKMirror
YouTube -> PeerTube + ???
Cloud File Storage -> SyncThing
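The de-googling swaps above are ultimately carried out with Android's package manager (via adb). As a sketch of the approach – with an illustrative, not exhaustive, package list; your device's real inventory comes from `adb shell pm list packages` – the per-package disable commands can be generated like this:

```python
# Dry-run sketch: build (but do not execute) the adb commands that would
# disable Google apps for the primary user on an Android device.
# The package names below are illustrative examples only.
GOOGLE_PACKAGES = [
    "com.google.android.gms",      # Google Play Services
    "com.google.android.youtube",  # YouTube
    "com.android.chrome",          # Chrome
]

def disable_commands(packages):
    """Return the per-package 'pm disable-user' commands as strings."""
    return [f"adb shell pm disable-user --user 0 {pkg}" for pkg in packages]

for cmd in disable_commands(GOOGLE_PACKAGES):
    print(cmd)
```

Disabling (rather than uninstalling) is reversible with `pm enable`, which makes it a safer first step than removing system packages outright.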
Our Results
Implementing the above list took far more time than we anticipated. And some of these items come with caveats. For example, there is no clear competitor for YouTube. Yes, there are a couple of noteworthy challengers (e.g., PeerTube and D-Tube). But none have achieved feature sufficiency. So if you must use YouTube, then please do so in a secure browser.
You might quibble with some of the steps that we took. But we believe that we have a very strong case for each of these decisions and each of these steps. And I will gladly discuss the “why’s” for any of them – if you’re interested. Until then, we have “cranked it up to eleven”. We believe that we are in a better position regarding our mobile privacy. And after today, our current “eleven” will become the new ten! Continuous process improvement, for the win!
Over the past few months, I’ve focused my attention upon how you can be safer while browsing the Internet. One of the most important recommendations that I have made is for you to reduce (or eliminate) the loading and execution of unsafe content. So I’ve recommended ad blockers, a plethora of browser add-ons, and even the hardening of your premises-based services (e.g., routers, NAS systems, IoT devices, and DNS). Of course, this only addresses one side of the equation (i.e., the demand side). In order to improve the ‘total experience’ for your customers, you will also need to harden the services that you provide (i.e., the supply side). And one of the most often overlooked mechanisms for improvement is the proper use of HTTP security headers.
Background
According to the Open Web Application Security Project (OWASP), content injection is still the single largest class of vulnerabilities that content providers must address. When coupled with cross-site scripting (XSS), it is clear that hostile content poses an existential threat to many organizations. Yes, consumers must block all untrusted content as it arrives at their browser. But every site owner should first ensure that they inform every client about the content that they will be sending. Once these declarations are made, the client (i.e., the browser) can then act to trust or distrust the content that it receives.
The notion that a web site should declare the key characteristics of its content stream is nothing new. What we now call a content security policy (CSP) has been around for a very long time. Indeed, the fundamental descriptions of content security policies were discussed as early as 2004. And the first version of the CSP standard was published back in 2012.
CSP Standards Exist – But Are Not Universally Used
According to the White Hat 2018 “Website Security Statistics Report”, a number of industries still operate chronically vulnerable websites. White Hat estimates that 52% of Accommodations / Food Services websites are “Always Vulnerable”. Moreover, an additional 7% of these websites are “Frequently Vulnerable” (i.e., vulnerable for at least 263 days a year). Of course, that is the finding for one sector of the broader marketplace. But things are just as bad elsewhere. In the healthcare market, 50% of websites are considered “Always Vulnerable” with an additional 10% classified as “Frequently Vulnerable”.
Unfortunately, few websites actually use one of the most potent elements in their arsenal. Most website operators have established software upgrade procedures. And a large number of them have acceptable auditing and reporting procedures. But unless they are subject to regulatory scrutiny, few organizations have even considered implementing a real CSP.
Where To Start
So let’s assume that you run a small business. And you had your daughter/son, niece/nephew, friend of the family, or kid next door build your website. Chances are good that your website doesn’t have a CSP. To check this out for sure, you should go to https://securityheaders.com and see if you have appropriate security headers for your website.
In my case, I found that my website security posture was unacceptably low. [Note: As a National Merit Scholar and Phi Beta Kappa member, anything below A+ is unacceptable.] Consequently, I looked into how I could get a better security posture. Apart from a few minor tweaks, my major problem was that I didn’t have a good CSP in place.
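The kind of audit that securityheaders.com performs can be sketched in a few lines: given the response headers your site returns, report which of the commonly recommended security headers are missing. (The header list below reflects common recommendations, not the site's exact scoring rubric.)

```python
# A minimal sketch of a security-header audit: report which commonly
# recommended response headers a site fails to send.
RECOMMENDED = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_security_headers(response_headers):
    """Return the recommended headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED if h.lower() not in present]

# Example: a site that only sets HSTS
headers = {"Strict-Transport-Security": "max-age=31536000", "Content-Type": "text/html"}
print(missing_security_headers(headers))
```

Running this against your own site's headers (e.g., captured with your browser's developer tools) gives you a quick punch list before you dive into writing a CSP.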
Don’t Just Turn On A Security Policy
Whether you code the security headers in your .htaccess file or you use software to generate the headers automatically, you will be tempted to just turn on a security policy. That temptation is understandable. But I urge you not to give in to it – unless your site is not yet live. Instead, make sure that you start with your proposed CSP in “report only” mode.
Of course, I chose the engineer’s path and just set up a default-src directive to allow only local content. Realistically, I just wanted to see content blocked. So I activated my CSP in “blocking” mode (i.e., not “report only” mode). And as expected, all sorts of content was blocked – including the fancy sliders that I had implemented on my front page.
I quickly reset the policy to “report only” so that I could address the plethora of problems. And this time, I worked each problem one at a time. Surprisingly, it really did take some time. I had to determine which features came from which external sources. I then had to add these sources to the CSP. This process was very much like ‘whitelisting’ external sources in an ad blocker. But once I found all of the external sources, I enabled “blocking” mode. This time, my website functioned properly.
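For those coding headers by hand, a report-only CSP in an .htaccess file might look roughly like the sketch below. This is not my actual policy: `cdn.example.com` is a placeholder for whatever external sources your own whitelisting turns up, and the `Header` directive assumes Apache with mod_headers enabled.

```apache
# Start in report-only mode: violations are logged, nothing is blocked.
Header set Content-Security-Policy-Report-Only "default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' data:; report-uri /csp-reports"

# Once the reports come back clean, switch to enforcement:
# Header set Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' data:"
```

Working the violation reports one source at a time – exactly as described above – is what turns this skeleton into a policy that matches your site.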
Bottom Line
In the final analysis, I learned a few important things.
Security headers are an effective means of informing client browsers about the characteristics of your content – and your content sources. Consequently, they are an excellent means of displaying your content whitelist to any potential customer.
Few website builders automatically generate security headers. There is no “Great and Powerful Oz” who will code all of this from behind the curtains – unless you specifically pay someone to do it. Few hosting platforms do this by default.
Tools do exist to help with coding security headers – and content security policies. In the case of WordPress, I used HTTP Headers (by Dimitar Ivanov).
While no single security approach can solve all security issues, using security headers should be added to the quiver of tools that you use when addressing website content security.
Privacy protection is not a state of being; it is not a quantum state that needs to be achieved. It is a mindset. It is a process. And that process is never-ending. Like the movie from the eighties, the never-ending privacy story features an inquisitive yet fearful child. [Yes, I’m casting each of us in that role.] This child must assemble the forces of goodness to fight the forces of evil. [Yes, in this example, I’m casting the government and corporations in the role of evil doers. But bear with me. This is just story-telling.] The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.
It’s too bad that life is not so simple.
My Never-ending Privacy Battle Begins
There is a tremendous battle going on. Selfish forces are seeking to strip us of our privacy while they sell us useless trinkets that we don’t need. There are a few people who truly know what is going on. But most folks only laugh whenever someone talks about “the great Nothing”. And then they see the clouds rolling in. Is it too late for them? Let’s hope not – because ‘they’ are us.
My privacy emphasis began a very long time ago. In fact, I’ve always been part of the security (and privacy) business. But my professional focus began with my first post-collegiate job. After graduation, I worked for the USAF on the Joint Cruise Missile program. My role was meager. In fact, I was doing budget spreadsheets using both Lotus 1-2-3 and the SAS FS-Calc program. A few years later, I remember when the first MIT PGP key server went online. But my current skirmishes with the forces of darkness started a few years ago. And last year, I got extremely serious about improving my privacy posture.
Since then, I’ve deleted almost all of my social media accounts. Gone are Facebook, Twitter, Instagram, Foursquare, and a laundry list of other platforms. I’ve deleted (or disabled) as many Google apps as I can from my Android phone (including Google Maps). I’ve started my new email service – though the long process of deleting my GMail accounts will not end for a few months.
At the same time, I am routinely using a VPN. And as I’ve noted before, I decided to use NordVPN. I have switched away from Chrome and I’m using Firefox exclusively. I’ve also settled upon the key extensions that I am using. And at this moment, I am using the Tor browser about half of the time that I’m online. Finally, I’ve begun the process of compartmentalizing my online activities. My first efforts were to use containers within Firefox. I then started to use application containers (like Docker) for a few of my key infrastructure elements. And recently I’ve started to use virtual guests as a means of limiting my online exposure.
Never-ending Progress
But none of this should be considered news. I’ve written about this in the past. Nevertheless, I’ve made some significant progress towards my annual privacy goals. In particular, I am continuing my move away from Windows and towards open source tools/platforms. In fact, this post will be the first time that I am publicly posting to my site from a virtual client. Specifically, I am using a Linux guest for this post.
For some folks, this will be nothing terribly new. But for me, it sets a new high-water mark towards Windows elimination. As of yesterday, I access my email from Linux – not Windows. And I’m blogging on Linux – not Windows. I’ve hosted my Plex server on Linux – not Windows. So I think that I can be off of Windows by the end of 2Q19. And I will couple this with being off GMail by 4Q19.
Bottom Line
I see my goal on the visible horizon. I will meet my 2019 objectives. And if I’m lucky, I may even exceed them by finishing earlier than I originally expected. So what is the reward at the end of these goals? That’s simple. I get to set a new series of goals regarding my privacy.
At the beginning of this article, I said, “The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.” But the truth is that the story will never end. There will always be individuals and groups who want to invade your privacy to advance their own personal (or collective) advantage. And the only way to combat this will be a never-ending privacy battle.
For years, businesses and governments have used secure file transfer to send sensitive files across the Internet. Their methods included virtual private networks, secure encrypted file transfer (sftp and ftps), and transfers of secure / encrypted files. Today, the “gold standard” probably includes all three of these techniques simultaneously.
But personal file transfer has been quite different. Most people simply attach an un-encrypted file to an email message that is then sent across an un-encrypted email infrastructure. Sometimes, people place an un-encrypted file on a USB stick. These people perform a ‘secure file transfer’ by handing the USB stick to a known (and trusted) recipient. More recently, secure file transfers could be characterized by trusting a third-party data hosting provider. For many people, these kinds of transfers are secure enough.
Are Personal File Transfers Inherently Secure?
These kinds of transfers are NOT inherently secure.
In the case of email transfers, the only ‘secure’ element might be a user/password combination on the sender or receiver’s mailbox. Hence, the data may be secure while at rest. But Internet email is completely insecure while in transit. Some enterprising people have employed secure messaging tools (like PGP/GPG). Others have secured their SMTP connections across a VPN – or an entirely private network. Unfortunately, email is notorious for being sent across numerous relays – any one of which could forward messages insecurely or even read un-encrypted messages. And there is very little validation performed on email metadata (e.g., no To: or From: field validation).
Placing a file on a USB stick is better than nothing. But there are a few key problems when using physical transfer. First, you have to trust the medium that is being used. And most USB devices can be picked up and whisked away without their absence even being noticed. Yes, you can use encryption to provide protection while the data is on the device. But most folks don’t do this. Second, even if the recipient treats the data with care, the data itself remains on an inherently mobile (and inherently less secure) medium.
Fortunately, modern users have learned not to use email and not to use physical media for secure file transfer. Instead, many people choose to use a cloud-based file hosting service. These services require logins to access the data. And some of these services even encrypt files while on their storage arrays. And if you’re really thorough when selecting your service provider, secure end-to-end transmission of your data may also be available. Of course, the weakest point of securing such transfers is the service provider. Because the data is at rest in their facility, they would have the ability to compromise the data. So this model requires trusting a third-party to protect your assets. Yes, this is just like a bank that protects your demand deposits. But if you aren’t paying for a trustworthy partner, then don’t be surprised if they find some other means to monetize you and your assets.
What Are The Characteristics of Secure File Transfers?
Secure file transfers should have the following characteristics:
The data being transferred should be encrypted by the originator and decrypted by the recipient.
Both the originator and the recipient should be authenticated before access is granted – either to the secure transport mechanism or to the data itself.
All data transfers must be secured from the originator to the recipient.
If possible, there should be no points of relay between the originator and the recipient OR there should be no requirements for a third-party to store and forward the complete message.
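The first characteristic – the originator encrypts and only the recipient can decrypt – can be illustrated with a toy sketch. To be clear: the XOR one-time pad below is for illustration only; a real tool would use an authenticated cipher (such as AES-GCM) and a proper key exchange rather than a manually shared key.

```python
# Toy illustration of end-to-end encryption: the originator encrypts,
# the recipient decrypts, and only ciphertext crosses the wire.
# XOR one-time pad - illustrative only, NOT a production cipher.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time pad key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"quarterly-results.xlsx"
key = secrets.token_bytes(len(message))  # shared out-of-band with the recipient

ciphertext = encrypt(message, key)       # this is all an intermediary would see
assert decrypt(ciphertext, key) == message
print("round trip ok")
```

The point of the sketch is the trust boundary: any relay or hosting provider in the middle only ever handles `ciphertext`, which satisfies the “no third-party can read it” requirement above.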
What Is Your Threat Model?
Are all of these characteristics required? The paranoid security analyst within me says, “Of course they are all required.” That same paranoid person would also add requirements concerning the strength of all of the ciphers that are to be used as well as the use of multi-factor authentication. But the requirements that you have should be driven by the threats that you are trying to mitigate – not by the coolest or most lauded technologies.
For most people, the threat that they are seeking to mitigate is one or more of the following: a) the seizure and exploitation of data by hackers, b) the seizure and exploitation of data by ruthless criminals and corporations, or c) the seizure and exploitation of data by an obsessive (and/or adversarial) governmental authority – whether foreign or domestic. Of course, some people are trying to protect against corporate espionage. Others are seeking to protect against hostile foreign actors. But for the sake of this discussion, I will be focusing upon the threat model encountered by typical Internet users.
Typical Threats For The Common American Family
While some of us do worry about national defense and corporate espionage, most folks are just trying to live their lives in obscurity – free to do the things that they enjoy and the things that they are called to do. They don’t want some opportunistic thief stealing their identity – and their family’s future. They don’t want some corporation using their browsing and purchasing habits in order to generate corporate ad revenue. And they don’t want a government that could obstruct their freedoms – even if it was meant in a benign (but just) cause.
So what does such a person need in a secure file transfer capability? First, they need file transfers to be encrypted – from their desk to the desk of the ultimate recipient. Second, they don’t want to “trust” any third-party to keep their data “safe”. Third, they want something that can use the Internet for transport – but do so in relative safety.
Onionshare was developed by Micah Lee in 2014. It is an application that sets up a hidden service on the TOR network. TOR is a multi-layered encryption and routing tool that was originally developed by the Department of the Navy. Today, it is the de facto reference implementation for secure, point-to-point connections across the Internet. And while it is not a strictly anonymous service, it offers a degree of anonymity that is well beyond the normal browsing experience. For a detailed description of Tor, take a look here. And for one of my first posts about TOR, look here.
Onionshare sets up a web server. It then establishes that server as a .onion service on the TOR network. The application then generates a page (and a URL) for that service. This URL points to a web page with the file(s) to be transferred. The person hosting the file(s) can then securely send the thoroughly randomized URL to the recipient. Once the recipient receives the URL, the recipient can download the file(s). After the secure file transfer is completed, the service is stopped – and the file(s) no longer available on TOR.
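The “thoroughly randomized URL” is the heart of the scheme: the page path is derived from cryptographically random choices, so it cannot be guessed or enumerated. The idea can be sketched with Python’s secrets module – the tiny word list here is a stand-in for the much larger one a real tool draws from, which makes the actual slug space astronomically large.

```python
# Sketch of an unguessable URL slug, in the spirit of Onionshare.
# WORDS is a small stand-in list; a real tool uses thousands of words.
import secrets

WORDS = ["aurora", "basalt", "cobalt", "drizzle", "ember", "fjord",
         "garnet", "harbor", "isotope", "juniper", "krypton", "lantern"]

def make_slug(n_words: int = 2) -> str:
    """Join cryptographically random word choices into a URL slug."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

# Hypothetical example URL (the .onion address itself is also random):
url = f"http://<onion-address>.onion/{make_slug()}"
print(url)
```

Because `secrets` (not `random`) supplies the choices, the slug is suitable for security purposes; guessing it requires brute-forcing the full word-combination space.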
Drawbacks
This secure file transfer model has a few key weaknesses. First and foremost, the URL that is presented must be sent securely to the recipient. This can be done via secure email (e.g., ProtonMail to ProtonMail) or via secure IM (e.g., Signal). But if the URL is sent via insecure methods, the data could potentially be hijacked by a hostile actor. Second, there is no authentication performed when the ‘recipient’ connects to the .onion service. Whoever first opens that URL in a TOR browser can access (and later delete) the file(s). So the security of the URL link is absolutely paramount. But as there are no known mechanisms to index hidden .onion servers, this method is sufficient for most casual users who need to securely send sensitive data back-and-forth.
[Screenshots: Onionshare in the Ubuntu applications list; Onionshare startup; directory dialog; shared files; sharing started; sharing in progress; URL sharing; the files page; files received; the Onionshare logo]
Bottom Line
If you want to securely send documents back-and-forth between yourself and other individuals, then Onionshare is a great tool. It works on Windows, MacOS, and a variety of Linux distros. And the only client requirement to use the temporary .onion server is a TOR-enabled browser. In short, this is about as ‘fire and forget’ as you could ever expect to find.
Every group has its own collection of stories. In the Judeo-Christian world, the Tower of Babel is one such story. It has come to symbolize both the error of hubris and the reality of human disharmony. Within the open source community, the story of the Cathedral and the Bazaar (a.k.a., CatB) is another such story. It symbolizes the two competing schools of software development. These schools are: 1) the centralized management of software by a priestly class (i.e., the cathedral), and 2) the decentralized competition found in the cacophonous bazaar. In the case of computer-based collaboration, it is hard to tell whether centralized overlords or a collaborative bazaar will eventually win.
Background
When I began my career, collaboration tools were intimate. You either discussed your thoughts over the telephone, you spoke with someone face-to-face, or you discussed the matter in a meeting. The sum total of tools available was the memorandum, the phone, and the meeting. Yes, the corporate world did have tools like PROFS and DISOSS. But both sets of tools were hamstrung either by their clumsiness (e.g., the computer “green screens”) or by the limitations of disconnected computer networks.
By the mid-eighties, there were dozens of corporate, academic, and public sector email systems. And there were just as many collaboration tools. Even the budding Internet had many different tools (e.g., sendmail, postfix, pine, elm).
The Riotous Babel
As my early career began to blossom (in the mid-nineties), I had the privilege of leading a bright team of professionals. Our fundamental mission was the elimination of corporate waste. And much of this waste came in the form of technological redundancy. So we consolidated from thirteen (13) different email clients to a single client. And we went from six (6) different email backbones to one backbone. At first, we chose to use proprietary tools to accomplish these consolidations. But over time, we moved towards more open protocols (like SMTP, X.500, and XMPP).
Since then, collaboration tools have moved from email and groupware tools (e.g., Lotus Notes) to web-based email and collaboration tools (e.g., Exchange and Confluence/Jira). Then the industry moved to “next-generation” web tools like Slack and even Discord. All of these “waves” of technology had one thing in common: they were managed by a centralized group of professionals who had arcane knowledge AND sufficient funding. Many of these tools relied upon open source components. But in almost every case, the total software solution had some “secret sauce” that ensured dominance through proprietary intellectual property.
The Times, They Are A Changing
Over the past few years, a new kind of collaboration tool has begun to emerge: the decentralized and loosely coupled system. The foremost tool of this kind is Matrix (and clients like Riot). In this model, messages flow between decentralized servers. Data sent between these servers is encrypted. And the set of data transferred between these servers is determined by the “interests” of local accounts/users. Currently, the directory for this network is centralized. There is a comprehensive ‘room’ directory at https://vector.im. But work is underway to build a truly decentralized authentication and directory system.
My Next Steps
One of the most exciting things about having a lab is that you get to experiment with new and innovative technologies. So when Franck Nijhof decided to add a Matrix server into the Hass.io Docker infrastructure, I leaped at the chance to experiment. So as of last night, I added a Matrix instance to my Home Assistant system. After a few hours, I am quite confident that we will see Matrix (or a similar tool) emerge as an important part of the next wave of IoT infrastructure. But until then, I am thrilled that I can blend my past and my future – and do it through a collaborative bazaar.
The modern Internet is a dangerous place. [Note: It has always been ‘dangerous’. But now the dangers are readily apparent.] There are people and institutions that want to seize your private information and use it for their own advantages. You need look no further than Facebook (or China) to realize this simple fact. As a result of these assaults on privacy, many people are finally turning to VPN ‘providers’ as a means of improving their security posture. But free VPN services may not be so free.
Background
In the eighties, universities in the US (funded by the US federal government) and across the globe began to freely communicate – and to share the software that enabled these communications. This kind of collaboration helped to spur the development of the modern Internet. And in the nineties, free and open source software began to seize the imagination (and self-interest) of many corporations.
At that time, there were two schools of thought concerning free software: 1) The RMS school believed that software was totally free (“as in speech”) and should be treated as a community asset, and 2) The ESR school believed that open source was a technical means of accelerating the development of software and improving the quality of software. Both schools were founded upon the notion that free and open software was “‘free’ as in speech, not as in ‘beer’.” [Note: To get a good insight into the discussions of free software, I would encourage you to read The Cathedral and the Bazaar by Eric S. Raymond.]
While this debate raged, consumers had become accustomed to free and open software – when free meant “as in beer”. By using open source or shareware tools, people could get functional software without any licensing or purchasing fees. Some shareware developers nagged you for a contribution. Others just told you their story and let you install/use their product “as is”. So many computer consumers became junkies of the “free” stuff. [Insert analogies of drug dealers (or cigarette companies) freely distributing ‘samples’ of their wares in order to hook customers.]
VPN Services: The Modern Analog
Today, consumers still love ‘free stuff’ – whether it is ‘free’ games for their phones, ‘free’ email services for their families (or their businesses), or free security products (like free anti-virus and anti-malware tools). And recently, free VPN services have begun to emerge. I first saw them delivered as a marketing tool. A few years ago, the Opera team bundled a free VPN with their product in the hopes that people would switch from IE (or Firefox) to Opera.
But free VPN services are now available everywhere. You can log into the Apple Store or the Play Store and find dozens of free VPN offers. So when people heard that VPN services offer encryption, and they saw that ‘vetted’ VPN services (i.e., apps/services listed in their vendor’s app store) were available for free, they flocked to these free VPN services.
Who Pays When Free VPN Isn’t Free?
But let’s dig into this a little. Does anyone really believe that free VPN services (or software) are free (i.e., “as in beer”)? To answer this question, we need only look to historical examples. Both FOSS and shareware vendors leveraged the ‘junkie’ impulse. If they could get you to start using their product, they could lock you into their ecosystem – thus guaranteeing massive collateral purchases. But their only costs were their time – measured in the labor that they poured into developing and maintaining their products.
Today, VPN service providers also have to recoup the costs of their infrastructure. This includes massive network costs, replicated hardware costs, and substantial management costs. So someone has to cover these massive costs. And this is done out of the goodness of their hearts? Hardly.
Only recently have we learned that free social media products are paid for through the resale of our own personal data. When things are ‘free’, we are the product being sold. So this fact raises the question: who is paying for this infrastructure when you aren’t paying for it?
Free – As In “China Paid For It”
Recently, Top10VPN (a website operated by London-based Metric Labs Ltd) published a report about free VPN providers listed on the App Store and the Play Store. What they found is hardly surprising.
59% of apps had links to China (17 apps)
86% of apps had unacceptable privacy policies; issues included:
55% of privacy policies were hosted in an amateur fashion (e.g., on free WordPress sites with ads)
64% of apps had no dedicated website – several had no online presence beyond their app store listings
Over half (52%) of customer support emails were personal accounts (i.e., Gmail, Hotmail, Yahoo, etc.)
83% of customer support email requests for assistance were ignored
Just because a VPN provider has sketchy operating practices or is somehow loosely associated with Chinese interests does not necessarily mean that the service is compromised. Nor does it mean that your identity has been (or will be) compromised. It does mean that you must double-check your free provider. And you need to know that free is never free. Know what costs you are bearing BEFORE you sign up for that free VPN.
William Chalk (published @ Hackernoon) may have said it best: “In allowing these opaque and unprofessional companies to host potentially dangerous apps in their stores, Google and Apple demonstrate a failure to properly vet the publishers utilizing their platform and curate the software promoted therein.” But resolution of these shortcomings is not up to Apple and Google. It is up to us. We must take action. First, we must tell Apple and Google just how disappointed we are with their product review processes. And then we must vote with our dollars – by using fee-based VPNs. Why? Because free VPN may not ensure free speech.
Full Disclosure: I am a paid subscriber of a fee-based VPN service. And at this time, I trust my provider. But even I double-checked my provider after reading this article.