Trustworthiness Ratings for Operating Systems?

Revision 26
© 2012-2018 by Zack Smith. All rights reserved.

How to rate software insecurity

There is no grading system that describes the risk of surveillance when using various operating systems or other mission-critical software such as hypervisors and virtualization tools. Perhaps there should be.

Yes, we should be concerned about malware and exploits by obvious criminals, but let us also question assumptions:

  • Does the OS we are using have built-in spyware?
  • Is the VM we are running inside of VirtualBox or VMware being spied upon?
  • Is the code running on Intel ME spying on everything in the system or the surrounding room?

The mantra of some is that government cannot be trusted. But we really need to be wary of the companies that provide our technology as well. They are run by people who are just as fallible as those in government.

A color-coded system

Here is my quick attempt at an easy and fun color-coded scheme that might serve to warn users.

  • High risk of backdoors and spyware. Examples: Microsoft Windows; Ubuntu; Android; cloud platforms.
  • Medium risk of backdoors and spyware. Examples: Slackware; OS X and iOS; Raspbian.
  • Low risk of backdoors and spyware. Examples: Debian; Gentoo GNU/Linux.
  • Safe enough for daily use. Examples: ReactOS; Haiku; FreeDOS.
  • Entirely secure. No known examples.
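The tiers above amount to a simple lookup table. The sketch below (in Python, purely for illustration; this document defines no code) uses the tier names and example lists exactly as given above; the `tier_of` function is a hypothetical helper, not part of any real rating service.

```python
# Illustrative lookup table for the color-coded risk tiers described above.
# Tier names and example OSes are taken from this document; the structure
# itself is an assumption made for the sketch.
RISK_TIERS = {
    "high risk": ["Microsoft Windows", "Ubuntu", "Android", "Cloud platforms"],
    "medium risk": ["Slackware", "OS X", "iOS", "Raspbian"],
    "low risk": ["Debian", "Gentoo GNU/Linux"],
    "safe enough for daily use": ["ReactOS", "Haiku", "FreeDOS"],
    "entirely secure": [],  # no known examples
}

def tier_of(os_name: str) -> str:
    """Return the risk tier for a named OS, or 'unrated' if it is unlisted."""
    for tier, examples in RISK_TIERS.items():
        if os_name in examples:
            return tier
    return "unrated"
```

Anything not on one of the example lists simply comes back "unrated" rather than being guessed at.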

A point system

A more precise approach might be to assign points for various risks. My point system (version 2) is below. Obviously these security problems do not all deserve equal weighting, so treat the totals as unweighted tallies rather than precise scores.

  • If a connection to servers of the military-industrial complex is detected, e.g. during initial startup (Windows Vista), add 1 point.
  • If a connection to a cloud service is enabled (iCloud, Windows 10) add 1 point.
  • If the OS provider is known to have held back zero-day vulnerability information from the public but not from the NSA (Microsoft), add 1 point.
  • If a computer has an Intel processor, add 1 point because of the Intel Management Engine (ME). This is a second CPU that exists in all modern Intel processors and has been called a rootkitter's dream. AMD CPUs are suspected to contain an equivalent secondary processor, so add 1 point for AMD CPUs as well.
  • If the OS provider is a part of the NSA PRISM program, add a point (Apple, Google, Microsoft).
  • If the OS provider's executives or board members formerly worked at a company that makes spyware for the US government, add 1 point (Jane Silber of Canonical).
  • Complexity: the more lines of code, the higher the risk. Complex software (anything but MenuetOS, FreeDOS, and simple RTOSes) generally has more security holes, so add 1 point.
  • Backwards-compatibility technical debt: legacy services that only 0.01% of customers use are rampant in backward-compatible OSes like Windows, and they expand the attack surface; add 1 point.
  • If the OS is closed source, or cannot reasonably be compiled by a technically adept user, assume deliberate obfuscation and add 1 point.
  • If the OS provider has inserted backdoors in the past (Windows 98 from Microsoft, suspected: Samsung phones), add another point.
  • If the OS is not built from source code during installation, add 1 point.
  • If the OS is made by one person instead of a team (a single point of failure), assume that person could have been compromised and add 1 point.
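The tally above can be expressed as a small scoring function. In this sketch (Python, illustrative only) the factor names paraphrase the bullets above, and the example profile is hypothetical rather than one of the tabulations in this document.

```python
# Unweighted point tally for the risks listed above: each factor that is
# flagged True contributes exactly 1 point. The factor names paraphrase
# this document's bullets; none of this is a real, published scoring API.
RISK_FACTORS = [
    "phones_home_to_military_industrial_complex",
    "cloud_service_enabled",
    "withheld_zero_days_from_public",
    "intel_or_amd_management_processor",
    "provider_in_prism",
    "executives_from_spyware_firms",
    "high_complexity",
    "backward_compat_technical_debt",
    "closed_source_or_hard_to_compile",
    "past_backdoors",
    "not_built_from_source_on_install",
    "single_developer",
]

def badness_points(profile: dict) -> int:
    """Sum one unweighted point per risk factor flagged True in the profile."""
    return sum(1 for factor in RISK_FACTORS if profile.get(factor))

# Hypothetical OS profile with three of the risks present.
example = {
    "cloud_service_enabled": True,
    "intel_or_amd_management_processor": True,
    "provider_in_prism": True,
}
```

Because the points are unweighted, `badness_points` is just a count of flagged risks; a weighted variant would replace the list with a factor-to-weight mapping.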

Former tabulations:

Name                Badness points
Windows Vista       9
Windows 10 on x86   8
OS X with iCloud    6
OS X no iCloud      5
Ubuntu on x86       5
Slackware on x86    4
iOS no iCloud       4
Debian on x86       3
Gentoo on x86       2
Gentoo on ARM       1
MenuetOS on x86     1

Does open-source mean safe?

No. Why would it?

For one thing, there is no reason to assume that the compiled executables and libraries that make up most of an open-source OS are built from the same source code that is publicly available. The provider may keep a set of secret patches that add spying capability.
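One way to test that assumption is a reproducible-builds check: rebuild the package from the published source and compare a cryptographic hash of your artifact with the distributed binary. The sketch below is a minimal Python illustration; the file paths are hypothetical, and a real reproducible build additionally requires pinning the toolchain and normalizing timestamps and build paths.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def binaries_match(rebuilt_path: str, distributed_path: str) -> bool:
    """True only if a locally rebuilt artifact is bit-identical to the vendor's."""
    return sha256_of(rebuilt_path) == sha256_of(distributed_path)
```

A mismatch does not by itself prove a secret patch (non-deterministic builds are common), but a bit-identical match is strong evidence the binary came from the public source.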

Second, who says open-source code is reviewed by anyone? Complicated, bloated code bases like OpenSSL rarely get a detailed code review, because that is hard work and volunteers tend to avoid hard work.

The OpenSSL debacle of 2014 (Heartbleed) showed that even widely used open-source code often goes unreviewed.

In short, open source does not mean the code is proven secure, nor that it has received a proper security audit.

How to ensure free/open-source safety?

To be reasonably safe with any open-source OS (GNU/Linux or otherwise), you should at least compile it from sources yourself.

One can go further:

  1. Obtain a brand new computer to build the OS on.
  2. Obtain the original source code.
  3. Inspect the source code for spyware.
  4. Inspect the source code for deliberately planted, made-it-happen-on-purpose (MIHOP) vulnerabilities. (Remember Apple's suspicious "goto fail" SSL/TLS bug.)
  5. Compile the OS's source code yourself using a non-corrupted compiler on a non-corrupted system that is itself built from sources (e.g. using Gentoo). A compiler that is bare-bones and without optimizations is more likely to be safe than one that is complex like GCC or LLVM.
  6. Package the compiled OS in the same safe environment.
  7. Install the compiled OS on a safe computer. By safe I mean, e.g., one whose hard-drive firmware has not been replaced with spyware and whose Intel ME vulnerabilities have been mitigated.
  8. Keep all source code patched with the latest security fixes.
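Steps 2 and 3 can be partially automated by verifying every file in the obtained source tree against a manifest of known-good SHA-256 hashes before building. The sketch below is illustrative Python; the manifest format is an assumption, and in practice you would obtain the manifest over an independent, authenticated channel (e.g. a signed release file) rather than alongside the tree itself.

```python
import hashlib
import os

def hash_tree(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def verify_source_tree(root: str, manifest: dict) -> list:
    """Return paths that are missing, modified, or unexpected vs. the manifest."""
    actual = hash_tree(root)
    problems = [p for p, d in manifest.items() if actual.get(p) != d]
    problems += [p for p in actual if p not in manifest]  # files not in manifest
    return problems
```

An empty result means every file matches the manifest; any tampered, missing, or extra file shows up in the returned list.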


Windows Vista found to be sending information to Department of Defense, Homeland Security, Halliburton

Whitedust.net original detailed account with screenshots of Vista spying

The whitedust page was curiously removed from Archive.org.

Questions and answers

What is it called when a person becomes anxious about an idea that conflicts with what he wants to believe?

Cognitive dissonance.

What is the typical result of cognitive dissonance?

Irrational rejection of the offending idea regardless of its merits. The person may also cease inquiry into related topics.

Does anyone really want to believe that they are using unsafe software?

No. The default reaction is cognitive dissonance: a reflexive insistence that the corporation or team that provided the software must be trustworthy, because if they aren't, it means the user was a sucker.

What's the term for an automatic preference for ideas that confirm one's beliefs and rejection of ideas that don't?

Confirmation bias.

Do people automatically select information that proves they are safe and ignore data that shows they are unsafe?

Yes. Most people don't want bad news that undermines their status quo.

Does anyone have a political, financial, or personal interest in suppressing concerns about spyware-laden operating systems?

Yes. Governments and the military-industrial complex have an interest in making sure there is spyware in operating systems.

Do online forums or comment-areas ever have fake commenters trying to manipulate public opinion?

Yes. The existence of paid commenters and comment bots in some forums is well documented.

What terms describe this practice of manipulation?

  • Astroturfing: creating a fake grassroots operation.
  • Sock-puppetry: creating fake personas online.

What expressions are used to discourage public speculation, while conveniently avoiding debate and investigation of facts?

Conspiracy theorist, tinfoil hat.

Where does the term conspiracy theory originate from?

According to Sharyl Attkisson in her book The Smear, the term was promoted in a 1967 CIA memo directing its operatives to build relationships with the media and use the term to discourage evidence-based theorizing about JFK's assassination.