Fazal Majid's low-intensity blog

Sporadic pontification


PSA: LinkedIn single-sign-on dangers

I have a work-issued computer that I keep rigorously separate from my personal stuff. It belongs to my employer, so I do not keep personal files on it, do not access personal email from it, and certainly do not save personal passwords on it. I even have it on a separate VLAN on my home network.

This is why I was horrified when I visited the LinkedIn website on my work computer (to look at a colleague’s posting) and it automatically started a single sign-on with my company’s Gmail (my work address is, of course, linked to my LinkedIn profile).

This means a company using Google Apps can potentially access its employees’ LinkedIn accounts without their permission. Considering LinkedIn’s past record of egregious security failures¹, it shouldn’t be too surprising, but still…

I couldn’t find any setting to disable SSO, and it seems the only way to prevent this is to turn on two-factor authentication (where the only options are the grossly insecure SMS text-message method or the equally phishable TOTP authenticator-app codes, not the actually secure WebAuthn/FIDO U2F USB security keys).


  1. A colleague had built a GPU mining rig for fun and profit, and ran the LinkedIn hashed-password dump through it using hashcat. He found Donald Trump’s was a variation on “You’re fired!”… ↩︎

Funding the vetting of the Software Supply-Chain

TL;DR: a way out of our software supply-chain security mess

As memorably illustrated by XKCD, the way most software is built today is by bolting together reusable software packages (dependencies) with a thin layer of app-specific integration code that glues it all together. Others have described more eloquently than I can the mess we are in, and the technical issues.

[XKCD: “Dependency”]

Crises like the Log4j fiasco or the SolarWinds debacle are forcing the community to wake up to something security experts have been warning about for decades: this culture of promiscuous and undiscriminating code reuse is unsustainable. On the other hand, for most software developers without the resources of a Google or Apple behind them, being able to rely on third parties for 80% of their code is too big an advantage to abandon.

This is fundamentally an economic problem:

  • To secure a software project to commercial standards (i.e. not the standards required for software that operates a nuclear power plant or the NSA’s classified systems, or that requires validation by formal methods like TLA+), some form of vetting and code reviews of each software dependency (and its own dependencies, and the transitive closure thereof) needs to happen.
  • Those code reviews are necessary, difficult, boring, labor-intensive, require expertise and somebody needs to pay for that hard work.
  • We cannot rely entirely on charitable contributions like Google’s Project Zero or volunteer efforts.
  • Each version of a dependency needs to be reviewed. Just because version 11 of foo is secure doesn’t mean a bug or backdoor wasn’t introduced in version 12. On the other hand, reviewing changes takes less effort than the initial review.
  • It makes no sense for every project that consumes a dependency to conduct its own duplicative independent code review.
  • Securing software is a public good, but there is a free-rider problem.
  • Because security is involved, there will be bad actors trying to actively subvert the system, and any solution needs to be robust to this.
  • This is too important to allow a private company to monopolize.
  • It is not just the Software Bill of Materials that needs to be vetted, but also the process. SolarWinds was probably breached because state-sponsored hackers compromised its Continuous Integration infrastructure, and there is Ken Thompson’s classic paper “Reflections on Trusting Trust” on the risks inherent in trusting your build toolchain (original ACM article as a PDF).
  • Trust depends on the consumer and the context. I may trust Google on security, but I certainly don’t on privacy.

I believe the solution will come out of insurance, because that is the way modern societies handle diffuse risks. Cybersecurity insurance suffers from the same adverse-selection risk that health insurance does, which is why premiums are rising and coverage shrinking.

If insurers require companies to provide evidence that their software is reasonably secure, that creates a market-based mechanism to fund the vetting. This is how product safety is handled in the real world, with independent organizations like Underwriters Laboratories or the German TÜVs emerging to provide testing services.

Governments can ditch their current hand-wavy and unfocused efforts and push for the emergence of these solutions, notably by long-overdue legislation on software liability, and at a minimum use their purchasing power to make them table stakes for government contracts (without penalizing open-source solutions, of course).

What we need is, at a minimum:

  • Standards that will allow organizations like UL or individuals like Tavis Ormandy to make attestations about specific versions of dependencies (a sketch of what one might contain follows this list).
  • These attestations need to have licensing terms associated with them, so the hard work is compensated. Possibly something like copyright or Creative Commons so open-source projects can use them for free but commercial enterprises have to pay.
  • Providers of trust metrics to assess review providers. Ideally this would be integrated with SBOM standards like CycloneDX, SPDX or SWID.
  • A marketplace that allows consumers of dependencies to request audits of a version that isn’t already covered.
  • A collusion-resistant way to ensure there are multiple independent reviews for critical components.
  • Automated tools to perform code reviews at lower cost, possibly using Machine Learning heuristics, even if the general problem is provably computationally intractable.
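
To make the attestation idea concrete, here is a minimal Swift sketch of what a machine-readable review attestation could carry. The Attestation type and its field names are hypothetical illustrations, not taken from CycloneDX, SPDX, SWID or any other existing standard:

import Foundation

// Hypothetical shape of a code-review attestation for one dependency version.
// Every field name here is illustrative, not part of any existing standard.
struct Attestation: Codable {
    let package: String      // e.g. "org.apache.logging.log4j:log4j-core"
    let version: String      // the exact version reviewed
    let sourceDigest: String // SHA-256 of the reviewed source archive
    let reviewer: String     // organization or individual making the claim
    let scope: String        // e.g. "full review" or "diff against 2.17.0"
    let verdict: String      // e.g. "no critical flaws or backdoors found"
    let reviewedAt: Date
    let licenseTerms: String // who may rely on this attestation, and at what price
    let signature: String    // detached signature over the fields above
}

Build tooling could then refuse to resolve any dependency version that lacks a valid, suitably licensed attestation from a reviewer the consumer trusts.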

The fetish for uptime

At one of my previous jobs, the engineers on my team had an informal competition as to who could rack up the longest uptime on their workstation (they all had Sun Solaris or Linux, of course). When the company moved to a new office, one crafty engineer managed to beat all the others by putting his Sun into the seldom-used hibernation mode to preserve his uptime when everyone else was forced to reboot.

I posit that long uptime is actually a bad thing. All software has bugs, and a regular maintenance schedule to apply patches, at the very least once a month, should be part of the plan and designed into the architecture. By that token, an uptime greater than 31 days is a “code smell” for infrastructure.
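
As an illustration of treating uptime as a code smell rather than a trophy, here is a minimal Swift sketch of such a check; the 31-day threshold simply encodes the monthly patch cadence argued above:

import Foundation

// Flag hosts whose uptime exceeds the monthly patch cadence.
let uptimeDays = ProcessInfo.processInfo.systemUptime / 86_400
if uptimeDays > 31 {
    print(String(format: "WARNING: up %.0f days, overdue for patching and a reboot", uptimeDays))
} else {
    print(String(format: "uptime %.1f days, within the monthly patch window", uptimeDays))
}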

PSA: iCloud Private Relay can make Safari on your iPad unusable

After upgrading my iPad to iPadOS 15.5, Safari became unusable. It would take forever to load the Reddit login page, and many others like Dilbert.com. Opening the same pages in Firefox Focus posed no issues.

Going into Settings / Safari / Privacy & Security / Hide IP Address and disabling it fixed this for me. Alternatively you can disable it only for specific networks (Settings / Wi-Fi / ⓘ / Limit IP Address Tracking / Off).

It seems Apple turned iCloud Private Relay on by default for Safari in iPadOS 15.5, and presumably iOS 15.5 as well. Macs are probably next.

I can only speculate why turning it off fixes the breakage, but:

  • The feature routes your traffic through Akamai, then Cloudflare, and for whatever reason Cloudflare does not seem to like my ISP: I often encounter its “prove you are human” challenges.
  • It may also be because Apple overrides your DNS settings for this feature to work, and if your network is locked down with something like Pi-hole to block trackers, those DNS requests may not be getting through. I don’t want IoT devices and the like to bypass my DNS server, which forwards queries over WireGuard to my cloud VPN server so that neither my ISP, nor Cloudflare, nor the UK police state can snoop on my DNS requests (a setup I believe is more secure and private than Apple’s). I haven’t blocked DNS-over-HTTPS servers yet, as this guy does, but it’s on my list. Any part of this DNS lockdown might be interfering with iCloud Private Relay.
  • It may also be sabotage, as Rui Carmo points out, or as John Oliver memorably calls it, “Cable Company F∗∗∗ery”.

Batch-converting HEIC images to JPEGs on the Mac

TL;DR: working around Apple’s proprietary brain damage

I use Lightroom 6 to manage my photo collection, although it is falling victim to bit rot (e.g. the face-recognition module no longer works, apparently due to a licensing logic time bomb in the code). Exploitative pay-forever software subscriptions are simply unacceptable, so I will not yield to Adobe’s Creative Cloud bondage, and since Lightroom will not work in newer versions of macOS, I am working on migrating to Darktable, albeit very slowly.

My wife does all her photography on her iPhone, and while the image quality is poor, she does take a great many photos and videos of our daughter. I decided to integrate them in my workflow.

To do so, I installed the free and excellent Photobackup app on her iPhone. It backs up her photos and videos over rsync to my ZFS backup server; from there I rsync them to my Mac, then use my linkonce tool to create a parallel file hierarchy that mirrors the backups, with one twist: when I delete a photo in Lightroom, it stays deleted. That way I can remove duds without having them pop back up in Lightroom every time I sync.
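
For illustration, here is a hedged Swift sketch of the “link once” idea (not the actual linkonce tool; the paths and manifest location are assumptions): every file is hard-linked into the mirror hierarchy exactly once and recorded in a manifest, so a file later deleted from the mirror is never linked again on subsequent syncs.

import Foundation

// Sketch of "link once": mirror src into dst by hard-linking each file
// exactly once. A manifest records every relative path ever linked, so
// files later deleted from dst are not resurrected by the next sync.
let fm = FileManager.default
let src = URL(fileURLWithPath: "/backups/iphone-photos")               // hypothetical
let dst = URL(fileURLWithPath: NSHomeDirectory() + "/Pictures/iphone") // hypothetical
let manifestURL = dst.appendingPathComponent(".linkonce-manifest")

// load the set of relative paths that have already been linked once
var seen = Set((try? String(contentsOf: manifestURL, encoding: .utf8))?
    .split(separator: "\n").map(String.init) ?? [])

guard let files = fm.enumerator(at: src, includingPropertiesForKeys: [.isRegularFileKey]) else {
    fatalError("cannot enumerate \(src.path)")
}
for case let fileURL as URL in files {
    guard (try? fileURL.resourceValues(forKeys: [.isRegularFileKey]))?.isRegularFile == true
    else { continue }
    let relative = String(fileURL.path.dropFirst(src.path.count + 1))
    if seen.contains(relative) { continue } // linked before; a deletion sticks
    let target = dst.appendingPathComponent(relative)
    try? fm.createDirectory(at: target.deletingLastPathComponent(),
                            withIntermediateDirectories: true)
    try? fm.linkItem(at: fileURL, to: target) // hard link: no extra disk space
    seen.insert(relative)
}
try? seen.sorted().joined(separator: "\n")
    .write(to: manifestURL, atomically: true, encoding: .utf8)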

I just realized I was missing a large number of images because they are in Apple’s obnoxious HEIF format, which they switched to around the time they introduced the mostly useless Live Photos misfeature. Lightroom 6 does not recognize the format. While you can batch-convert and export HEIC files to JPEG in Preview.app, it is still a manual process.

I investigated what command-line tools are available that could run from a cron job, and there are surprisingly few. GraphicsMagick sensibly refuses to support the format because of patent concerns. Most of the others require compiling an intimidating stack of dependencies first, and because HEIF is based on the H.265 HEVC video codec, an ostensibly open ISO format (open in name only) that is heavily encumbered with patents, HEIF is encumbered as well, and it is probably illegal to use tools like heic2jpeg.

I opted instead to write my own heic2jpeg (no relation to the previous tool). It is a very basic conversion utility that uses Apple’s CoreImage framework to piggyback on Apple’s patent licenses; as a side benefit, it preserves the image metadata, including geolocation. The flip side is that the tool can only run on a Mac, not on Linux or Illumos, but I can live with that.

It is also my first-ever Swift project. Swift is a nice, expressive language in the vein of Python or Go (except with Apple’s grotesquely long API names), but I do not expect to use it much, as I have grown disillusioned with Apple’s policies and software quality, and have no intention of indenturing myself as a sharecropper on Tim Cook’s plantation any more than on Adobe’s.

The code is in heic2jpeg.swift, listed below.

To build it, assuming Swift or Xcode is installed on your Mac, just run:

swiftc -O -o heic2jpeg heic2jpeg.swift

My sync script (part of my backup script) then runs something like:

find "$HOME/Pictures" -name '*.HEIC' -print0 | xargs -0 -P 12 -t -n 10 heic2jpeg --delete

This runs 12 heic2jpeg processes in parallel, each consuming 10 files per invocation, until all HEIC files are converted (files already converted are left alone). I find the optimal -P setting to be 150% to 200% of the number of actual cores on your system (not counting Intel’s fake hyperthreading cores).

import Foundation
import CoreImage
import ImageIO // for kCGImageDestinationLossyCompressionQuality

// heic2jpeg: convert the HEIC files given as arguments to JPEGs alongside
// the originals, preserving metadata. Pass -delete/--delete to remove the
// originals after a successful conversion.
let jpegQuality = 0.90 // 0.0 = smallest file, 1.0 = best quality
let context = CIContext(options: nil)
let options: [CIImageRepresentationOption: Any] = [
    CIImageRepresentationOption(
        rawValue: kCGImageDestinationLossyCompressionQuality as String
    ): jpegQuality
]

var delete = false
for filename in CommandLine.arguments.dropFirst() {
    // the flag may appear anywhere in the argument list (xargs puts it first)
    if filename == "-delete" || filename == "--delete" {
        delete = true
        continue
    }
    let srcURL = URL(fileURLWithPath: filename)
    let destURL = srcURL.deletingPathExtension().appendingPathExtension("jpg")
    // skip files already converted on a previous run
    if (try? destURL.checkResourceIsReachable()) == true {
        print("skipping \(filename)")
        continue
    }
    print("converting \(filename)")
    guard let image = CIImage(contentsOf: srcURL),
          let colorSpace = image.colorSpace else {
        print("cannot read \(filename)")
        continue
    }
    do {
        try context.writeJPEGRepresentation(
            of: image,
            to: destURL,
            colorSpace: colorSpace,
            options: options
        )
        if delete {
            print("deleting \(filename)")
            try FileManager.default.removeItem(at: srcURL)
        }
    } catch {
        // do not kill the whole batch over one bad file
        print("error converting \(filename): \(error)")
    }
}