Why is open source software considered to be more secure?

Some people consider iOS to be more secure than Android, but others disagree because Android is open source while iOS is closed. Can someone explain why open sourcing makes software more secure?

Basically, the more people who can see the code, the higher the chance of identifying problems.

5 Likes

Well, I don't think that open source is more secure per se.
One of the benefits of open source is that the source code is open,
and that way bugs and issues can generally be identified and addressed more quickly.
But I don't think that means open source is fundamentally
more secure than closed source software per se,
because that really depends on the individual software application.

Also, I personally don't consider Android to be open source at all.
Because:

  • A: When you buy a phone, you also pay for the Android OS.
  • B: Android OS is developed and maintained by Google.
    And Google is still a corporation.
  • C: Android is a commercial OS.
5 Likes

I agree with Angel. Open source software is not necessarily more secure just because it is open source.

It is possible for closed source applications to be more secure. You can get outside feedback without making your software free and open source. For example, Apple publishes many white papers describing iMessage. Moreover, security researchers typically don’t have to go through source code to identify vulnerabilities. In fact, even when source code is available, researchers will still use black box techniques where they don’t look at any code.

3 Likes

You can say closed source is more “secure” by obscurity. Since the original source code is not available, bugs and exploits are much harder to find.

I would say it’s more trustworthy rather than more secure.

5 Likes

Not really. I know where you are coming from, but I would see it like this:
Android is open source; it is free and you can spin it into whatever you want.
It is the Google services that most people want and are willing to pay for,
and those are absolutely running closed source code.

LineageOS is based on Android. So is Copperhead. And they didn’t pay anything.
I would say both of those are more trustworthy, but only one is also more secure.

Firefox is made by a company that has a very mixed reputation.
Is it secure? Is it trustworthy?

pfSense is open source, made by a company, and generally seen as secure and trustworthy.

I think it is a case by case thing.


To blatantly steal other people’s words:

because

2 Likes

You have community contributions, which sometimes means bugs and vulnerabilities get patched sooner than in proprietary or closed-off software. I think to pass an audit, you only need to be evaluated and tested by a third party once a year. So, rather than the daily or weekly contributions and evaluations you get with OSS, you get once a year, maybe once a quarter.

With the DevOps movement, this is changing. A good number of companies are pushing pentesting and auditing into the development process. However, you still see a “we’re different” mentality when it comes to adopting DevOps and Agile processes.

Off topic: Nicole Forsgren and Jez Humble have great rebuttals to this “we’re different” comment. If you ever get a chance to hear them out, I’d give it a listen.

Anyway, the truth of open source that no one considers is that the code is accessible by anyone, so attack simulations can be run all day with minimal effort (provided the attacker knows what they’re doing). Someone with enough cunning can attempt a PR with malicious code. Or they don’t have to make a commit at all: as soon as they find a vulnerability and a proven way of exploiting it, they can just find someone running the software and take advantage.
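The “attack simulations all day” point can be made concrete with a toy fuzzing loop. Everything here is hypothetical: `parse_length_prefixed` is a made-up parser standing in for whatever open source code an attacker (or defender) has pulled down, and the loop just hammers it with random inputs looking for unexpected crashes.

```python
import random

# Hypothetical parser under test: first byte is a length, rest is the payload.
def parse_length_prefixed(data: bytes) -> bytes:
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

random.seed(0)
crashes = 0
for _ in range(10_000):
    # Random blobs of 0-7 bytes, thrown at the parser.
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_length_prefixed(blob)
    except ValueError:
        pass          # expected, handled error
    except Exception:
        crashes += 1  # anything else would be a bug worth investigating
print("unexpected crashes:", crashes)  # prints: unexpected crashes: 0
```

With the real source in hand, an attacker doesn’t have to fuzz blindly; they can aim this kind of loop directly at the parsing code they’ve read. The same access lets defenders do the same thing first.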

Nightmare scenario aside, FOSS all the way, baby. :sunglasses:

4 Likes

In that regard, I don’t think that private modifications will make licensed software more secure; if the attacker knows what they are doing, that can maybe only slow them down…

Interesting perspective. But I would think public, readily available source code does more than speed up an attack: you have no theories, no speculation; you have everything you need to see the full product and alter it or exploit it.

I think the second piece that plays into this is popularity and usability. No one is going to spend a day, a week, or a year attacking something only 6,000, 60,000, or 6,000,000 people use when there are common vulnerabilities in something 725,000,000 people use.

So, if the year of the Linux desktop does happen, watch out world :wink:

2 Likes

Yes, that is very correct. All I’m saying is that if code is not open to everybody, that doesn’t mean it is not accessible at all… there would just be one or a few more steps…

1 Like

Back in 2014 there was a thing about certain Linux distros getting you put on a watch list:
https://www.makeuseof.com/tag/interest-privacy-will-ensure-youre-targeted-nsa/

In the 90’s there was the “clipper chip” and PGP backdoors
https://arstechnica.com/information-technology/2015/12/what-the-government-shouldve-learned-about-backdoors-from-the-clipper-chip/
https://www.nytimes.com/1994/06/12/magazine/battle-of-the-clipper-chip.html?pagewanted=all
Zimmermann created PGP:
https://www.wired.com/2001/02/pgp-creator-bolts-to-hush/

Quick overview as to why this opinion exists. Others being able to validate and test the code is a good thing, but it can lead to a “tragedy of the commons” situation.

4 Likes

The problem with this argument is that just because source code is not freely available to redistribute does not mean it isn’t available if you really want to see it. There are thousands of engineers working on Windows. You only need to compromise one to read the source code.

1 Like

This is a perfect example of why people believe open source software is more secure. Look at something like OpenSSH: it is installed on basically every *nix system, and yet I can’t think of a single time it’s been caught flat-footed, in spite of being the gateway to most of the web servers on the internet.

All of that being said, and moving back to the thread at large, I do think that the assumption that open source software is more secure is a bad assumption. It may have been a relatively safe assumption years ago before the popularity of Linux on the desktop started to really take off. We’ve got a lot of developers building a lot of applications. We can’t just assume that they’re all relatively bug-free. Conversely, as much as I hate them, Microsoft has stepped up their security game.

If we’ve learned anything in the last 5 years, it’s that the assumption of security based on the code’s licensing model is stupid.

OK, let’s look at an easy example. Let’s say the Linux Foundation made PGP (under the GPL) and Microsoft had something called MGP (under whatever BXS license they have). The code for PGP can be looked through for bugs and security holes at any time, but with MGP you have zero clue whether Microsoft is saving your private encryption key, because it’s proprietary and closed source. A big business that needs secure encryption is going to pump enough money into Microsoft to get basically the power of open source with MGP, but a small business that can’t take that financial or legal risk is going to go with PGP.
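To make the “you can check what happens to your key” half of this concrete, here is a toy sketch. This is NOT real cryptography (a repeating-key XOR is trivially breakable); the only point is that when the source is in front of you, you can read the routine end to end and confirm the key is never stored or sent anywhere, which is exactly the check you cannot do against a closed-source MGP-style binary.

```python
import os

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key. Trivially auditable: the key is
    # read, used, and never written to disk or the network anywhere below.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def toy_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return toy_encrypt(ciphertext, key)

key = os.urandom(16)
msg = b"meet at noon"
assert toy_decrypt(toy_encrypt(msg, key), key) == msg
```

With a closed binary you can only observe inputs and outputs; with this source, the absence of any file or socket operation is a fact you can verify yourself.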

This is an EXAMPLE. BUT, similar things actually exist.

I see where you’re going with that, but I think that’s more the realm of trustworthy, rather than secure. Which are subtly, but importantly different.

Edit:
That could also be an interesting conversation, though maybe not for this thread. Neither open source nor closed source equals secure code. But can closed source applications be considered trustworthy?

To me, if I can take wget, look at it, be suspicious, build a part of its code to my own design, and run it on my systems, that equals trust and security.
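The first step of that “look at it, be suspicious” workflow is verifying that the source you downloaded is what the project published. A minimal sketch, with placeholder file names (the tarball here is fabricated locally so the commands are self-contained; in practice the tarball and its checksum file come from the project’s release page, and you would also check the maintainer’s GPG signature):

```shell
# Stand-in for a downloaded source tarball (placeholder, not a real release).
echo "pretend wget source tarball" > wget-1.21.tar.gz

# Publisher's side: compute and publish the checksum alongside the release.
sha256sum wget-1.21.tar.gz > wget-1.21.tar.gz.sha256

# Your side: verify the download matches the published checksum before building.
sha256sum -c wget-1.21.tar.gz.sha256   # prints: wget-1.21.tar.gz: OK
```

A checksum only proves integrity, not authorship; for that, projects publish detached signatures you verify with `gpg --verify` against the maintainer’s key.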

All other things being equal, open source has the potential to be more secure (but no guarantees - that’s your responsibility to check).

But iOS vs. Android has nothing to do with closed vs. open source.

It is a philosophical difference. iOS requires Apple-signed code, sandboxes everything, and errs on the side of restriction until a capability can be locked down, instead of offering open APIs that let code do things more flexibly.

Swings and roundabouts. You can hack Android to do what you want. iOS is more restricted by design, so that Apple can try to keep user applications from doing malicious things.