My unlawyered opinion on why AI will legally survive in the US

In the past few months, there has been a surge of AI projects that can generate images and text:

  • Stable Diffusion
  • ChatGPT
  • GPT-3 and earlier
  • DALL-E 2 and previous
  • Midjourney
  • GitHub Copilot

These AI programs are amazing – but they were also trained on publicly available material. The owners of that material almost certainly did not opt in to having it used for AI training, and some have managed to get these services to output things very close to their copyrighted work. This is almost certainly going to come up in future legal cases.

Putting the current US concept of fair use aside, I think that at this point, AI companies have a vested interest in doing everything they can to get these algorithms entrenched as an industry, because that may actually ensure their legal survival.

Consider a broader view of the US and technology:

  • VCRs upset movie studios tremendously, but were declared legal even though some people would abuse them to copy tapes. Format-shifting became officially legal with that decision, whereas before it had been legally grey, much like AI is now. However, there’s another side to the story: according to the New York Times, approximately 1.2 million VCRs were sold in 1983 alone, while the decision came down in January 1984. (Basically) outlaw the industry? Nah.
  • Photoshop came out, and allowed for the manipulation of images in ways that were unprecedented. Users could also abuse Photoshop to make very… interesting… images of celebrities. Nonetheless, Photoshop was never sued for being liable for anything their users did.
  • CD Drives allowed copying CDs which did not have DRM, and made it easy to share the ripped discs online. This did not ultimately make CD drives, CD ripping, Online File Sharing, BitTorrent, The Internet, or any of the technologies involved illegal despite all of them being abused for copyright infringement. It also didn’t legalize internet censorship of DNS and packets to prevent copyright infringement despite the MPAA’s lawsuits and failed laws (SOPA/PIPA).

If there is a pattern here, I would summarize it as this:

US courts do not enjoy clamping down on any new technology, even if that technology can be, and is being, used in copyright-infringing ways.

Now, one could argue that none of these really have much to do with AI, or with AI’s propensity to sometimes regurgitate information it learned from. I think, however, that this is a “hindsight is 20/20” moment. It’s obvious now, but it wasn’t obvious then. If CD ripping had been declared illegal, or the VCR decision had gone the other way, or SOPA/PIPA had been enforced, our precedent for new technologies and copyright infringement would be very, very different.

Thus, in a weird way, it seems to my unlawyered mind that the more AI can entrench itself (become accepted, widespread, diverse in function), the stronger the legal case will become. If it were just GitHub Copilot, it might be banned. But will courts be interested in hurting Copilot, Midjourney, DALL-E, GPT-3, and the rest? If previous technology/copyright conflicts are anything to go by, I think they would punt the question to Congress before they would dare disturb the status quo or declare that it isn’t “fair use.”

Remote attestation is coming back. How much freedom will it take?

Remote attestation has been around for decades now. Richard Stallman railed against the freedom it would take away in 2005, a senator presented a bill asking for the required chips to become mandatory, and Microsoft prepared Palladium to improve “security” and bring remote attestation (among other things) to the masses. Then it all fell apart: Palladium was canceled, the senator retired, and TPM chips have been in our PCs for years but have generally been considered benign.

For those who do not know what remote attestation is:

  • Remote attestation lets an external system validate, cryptographically, certain properties about a device.
  • For example, proving to a remote system that Secure Boot is enabled on your Windows PC, with no ability to forge that proof. And by extension, potentially loading a kernel driver that can prove certain installed applications have not been tampered with.
  • TPM chips, introduced around 2004, were widely feared because they enabled this capability, but until now they have primarily been used in corporate networks and for BitLocker drive encryption.
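As a concrete (and heavily simplified) sketch of the idea, the flow below models a TPM-style “quote” in Python. A real TPM signs its PCR values with an asymmetric attestation key whose certificate chains back to the manufacturer; here an HMAC with a shared key stands in for that signature, and every name is invented, so this only illustrates the measure-sign-verify flow, not any real protocol.

```python
import hashlib
import hmac
import os

DEVICE_KEY = b"burned-in-attestation-key"  # stand-in for the TPM's attestation key

def measure_boot(components):
    """Extend a PCR-style hash chain over each boot component in order."""
    pcr = b"\x00" * 32
    for c in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(c).digest()).digest()
    return pcr

def quote(pcr, nonce):
    """Device side: 'sign' (PCR, nonce) so the proof cannot be replayed."""
    return hmac.new(DEVICE_KEY, pcr + nonce, hashlib.sha256).digest()

def verify(pcr, nonce, sig, expected_pcr):
    """Verifier side: accept only a fresh, correctly signed, known-good measurement."""
    good = hmac.new(DEVICE_KEY, pcr + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(sig, good) and pcr == expected_pcr

# The verifier knows what an untampered boot should measure to.
expected = measure_boot([b"firmware-v2", b"bootloader", b"signed-kernel"])

nonce = os.urandom(16)  # fresh challenge from the verifier
good_pcr = measure_boot([b"firmware-v2", b"bootloader", b"signed-kernel"])
print(verify(good_pcr, nonce, quote(good_pcr, nonce), expected))   # True

bad_pcr = measure_boot([b"firmware-v2", b"bootloader", b"patched-kernel"])
print(verify(bad_pcr, nonce, quote(bad_pcr, nonce), expected))     # False
```

The key property is that the device cannot produce a valid quote over a measurement it didn’t actually boot with, which is exactly what makes the proof unforgeable from the verifier’s point of view.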

Remote attestation was feared by Linux users and by Richard Stallman from the start, especially after Secure Boot was rolled out. Could a network require that users run up-to-date Windows with Secure Boot on, and thus completely lock out Linux, or anyone running Windows in a way Microsoft does not intend? With remote attestation, absolutely.

In practice though, only corporate networks adopted remote attestation as a condition for joining, and only on their business PCs through the TPM chip (no BYOD here). TPMs involve a ludicrous number of certificates that need to be trusted, many in different formats and using different algorithms (1,681 right now, to be exact), and almost everything that isn’t a PC doesn’t have a TPM. Because of that, building a remote attestation setup that supports a broad variety of devices was, and is, very difficult: easy for a business with a predictable fleet on one platform, almost impossibly complicated for a random assortment of general devices. And so the threat of the TPM, and of remote attestation in general, was dismissed as fearmongering from two decades ago that never became reality.

If only it had stayed that way. Remote attestation is coming back and is, in my opinion, a legitimate threat to user freedom once more, and almost nobody has noticed. Not even on Hacker News or in Linux circles like Phoronix, where many such new technologies and changes are discussed.

Consider in the past few years:

  • Why is Microsoft building their own chip, the Pluton, into new Intel, AMD, and Qualcomm processors? Why does it matter so much to add a unified root of trust to the Windows PC?
  • Why does Windows 11 require a TPM 2.0 module?
  • Why has every PC since 2016 been mandated to have TPM 2.0 installed and enabled?
  • Why do so many apps on Android, from banking apps to McDonalds, now require SafetyNet checks to ensure your device hasn’t been rooted?
  • What’s with some new video games requiring TPM and Secure Boot on Windows 11?

Remember that remote attestation has been possible for decades, but was overly complicated, unsupported on many devices, and just not practical outside of corporate networks. But in the last few years, things have changed.

  • The fraction of PCs with TPMs is now approaching 100% because of the 2016 requirement change and the Windows 11 mandate. In roughly five more years, almost all consumer PCs will have a TPM installed.
  • macOS and iOS already added attestation with the DeviceCheck framework in iOS 11 / macOS 10.15. They don’t use a TPM; instead they use the Secure Enclave in the T2 or M-series chips.
  • Google has had SafetyNet for a while, powered by ARM TrustZone, but is tightening the locks. Rooting your device invalidates SafetyNet, requiring complex workarounds that are gradually disappearing.

For the first time, remote attestation will no longer be a niche feature present on some devices and not others. Within a few years, the share of devices supporting remote attestation in some form will quickly approach 100%, allowing it to jump for the first time from corporate networks into public ones. Remote attestation is a technology that doesn’t make sense when only 70%, 80%, or 90% of devices have it; only when adoption exceeds ~99% does it make sense to deploy, and only then do its effects start to be felt.

We’re already seeing the first signs of remote attestation in our everyday lives.

  • macOS 13 and iOS 16 will use remote attestation to prove that you are a legitimate user, allowing you to bypass Cloudflare CAPTCHAs. How? By using remote attestation to cryptographically prove you are running iOS/macOS, without a jailbreak, on a valid device, with a digitally signed web browser.
  • Some video games are already requiring Secure Boot and TPM on Windows 11. According to public reports, they have not fully locked out users without these features, as they still allow virtualized TPMs, Windows 10, and so forth. However, they absolutely do not have to, and can disable virtualized (untrusted) TPMs and loading without Secure Boot as soon as adoption of Windows 11 and TPM is great enough. Once they shut the door, Windows 11 + Secure Boot + Unaltered Kernel Driver will be the only way to connect to online multiplayer, and it will be about as cryptographically secure against cheating as your PlayStation.
  • Cisco Meraki powers an insane number of corporate networks. Even in my own life, it was my school’s WiFi, my library’s WiFi, the McDonalds WiFi, even my grandparents’ assisted living WiFi. Cisco is also a member of the Trusted Computing Group that developed the original TPM and remote attestation to begin with. All they have to do, once adoption becomes great enough, is update their pre-existing “AnyConnect” app to check TPM/Pluton on Windows, DeviceCheck on iOS/macOS, and SafetyNet on Android/ChromeOS before you join the network. Anyone with an unlocked or rooted device need not apply.

I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.

What is more concerning is how many freedoms it could take in a more terrifying direction. For example, when I was in college, we had to jump through many, many hoops to connect to school WiFi: WPA2 Enterprise, a special private key, a custom client connection app. It wasn’t fun, and even for me it was almost impossible without the IT desk. If remote attestation had been ready back then, they would absolutely have deployed it. Cloudflare has already shown it is possible for websites to use it to verify the humanity of a user and skip CAPTCHAs on macOS. What happens when Windows gains that ability? Linux users will be left out in the cold completely, as it is simply not practical to digitally approve every Linux distribution and kernel version, distribute a kernel module for them all, and then use that module to verify that the browser, in all its variations, is signed the same way, without leaving any holes.

Thus, for Linux users, it will start with having to complete CAPTCHAs that their Windows- and Mac-using friends will not. But will it progress beyond that? Will websites mandate it more? On an extremely paranoid note, will our government or a large corporation require a driver’s license for the internet, with a digital attestation binding a device to your digital ID in an unfalsifiable way? Microsoft is already requiring a Microsoft Account for Windows 11, including the Pro version. Will a grand cyberattack accelerate deployment of this technology everywhere, and lock out Linux and rooted/jailbroken/Secure-Boot-disabled devices from most of the internet? Will you be able to use a de-Googled phone without being swarmed with CAPTCHAs and having countless apps deny access?

This is a major change of philosophy from the copy protection and DRM systems of yesteryear. Old copy protection systems tried to control what your PC could do, and were always defeated. Remote attestation by itself permits your PC to do almost anything you want, but ensures your PC can’t talk to any services requiring attestation if they don’t like what your PC is doing or not doing. This wouldn’t have hurt nearly as much back in 2003 as it does now. What if Disney+ decides you can’t watch movies without Secure Boot on? With remote attestation, they could.

I think I’ll end with a reference to Palladium again, Microsoft’s failed first attempt at a security chip from ~2003, cancelled from backlash. It had an architecture that looked like this:

Now compare that diagram with Microsoft’s own FASR (Firmware Attack Surface Reduction). FASR is a “Secured Core” PC technology that is not mandatory yet and not necessarily part of Pluton, but very likely will be required in the future.

All they did was flip the sides around, use a hypervisor instead of separate hardware abstraction layers, and rename NEXUS to “Secure Kernel.” Otherwise it is almost exactly the same diagram as the one from 2003 that was cancelled amid backlash. They just waited ~20 years to try again and updated the terminology. (Also of note is the use of the word “Trustlet,” plagiarized from ARM TrustZone, which powers Android’s SafetyNet remote attestation system.)

Some things never change.

The dangers of Microsoft Pluton (updated)

In upcoming Intel, Qualcomm, and AMD processors, there is going to be a new chip called the Pluton, built into the CPU/SoC silicon die and co-developed by Microsoft and AMD. Originally developed for the Xbox One as well as the Azure Sphere, Pluton is a new security (cynical reader: DRM) chip that will soon be included in all new Windows PCs, and is already shipping in mobile Ryzen 6000 chips.

This new chip was announced by Microsoft in 2020; however, details of what it is actually capable of, and what it actually means for the Windows ecosystem, were kept frustratingly vague. Now, with Pluton rolling out in some AMD chips, it is possible to put together a cohesive story of what Pluton can do from several disparate sources.

Because Microsoft’s details are sparse, this article will attempt to summarize all that we now know about Pluton. It may contain inaccuracies or speculation, but any potential inaccuracy or speculation will be called out where possible. If those inaccuracies lead to more and better information being found, so be it.

What’s inside Pluton?

Pluton encompasses several functions. I’ll be throwing out the acronyms first and some of their meanings and effects later in the article:

  • A full implementation of the TPM 2.0 specification, which was developed by the Trusted Computing Group (TCG)
  • SHACK (Secure Hardware Cryptography Key) implementation
  • DICE (Device Identifier Composition Engine) implementation, also designed by TCG
  • Robust Internet of Things (RIoT) specification compliance, a specification developed by Microsoft and announced with almost no fanfare all the way back in 2016

However, besides these functions, Pluton implements the full breadth of security improvements that Microsoft used to only have on the Windows 10 Secured-Core PC systems. A Pluton system is a superset of the Secured-Core PC specification which was previously only on select systems. A Secured-Core PC requires the following additional technology measures that were not previously required for a standard PC:

  • Dynamic Root of Trust for Measurement (DRTM)
  • System Management Mode (SMM) (edit: with Device Guard, regular computers have long had SMM)
  • Memory Access Protection (Kernel DMA Protection, which protects against DMA attacks such as Thunderspy)
  • Hypervisor Code Integrity (HVCI)

Edit: See update at the bottom of post, the overlap of Secure Core and Pluton is currently somewhat unclear.

Also, starting this year, new Secured-Core PCs (and Pluton PCs by extension?) will also be required to drop support for the Microsoft 3rd-party UEFI Secure Boot Certificate Authority by default. This means that the shim bootloader used for booting some Linux OSs will no longer be trusted without flipping a switch in the UEFI firmware, which may partly be why Lenovo and Dell have announced they are keeping Pluton disabled by default. However, this might not do much in the long run, as will be explored below.

It is important to note that Pluton is very much like the Secure Enclave or TrustZone systems on macOS/iOS/Android systems, with a full (secure) CPU core, its own small onboard RAM, ROM, RNG, fuse bank, and so forth. For (obvious) security reasons, Pluton only boots officially-signed Microsoft firmware and carries anti-downgrade protections inherited from the Xbox. On non-Windows systems like Linux, Pluton quietly degrades into only a generic TPM 2.0 implementation.

A lot of acronyms, but what is the big picture?

In a nutshell, Microsoft believes they need to exercise more control over PC Security than previously. This came up with Windows 11, which infamously required 8th Gen or newer CPUs, TPM 2.0, and Secure Boot capability. At the time, there was (and still is) much concern regarding the almost arbitrary nature of the requirements.

However, while Microsoft was terrible at defining why certain CPUs made the cut and others didn’t (like why no Zen 1?), Ars Technica noticed a pattern:

Windows 11 (and also Windows 10!) uses virtualization-based security, or VBS, to isolate parts of system memory from the rest of the system. VBS includes an optional feature called “memory integrity.” That’s the more user-friendly name for something called Hypervisor-protected code integrity, or HVCI. HVCI can be enabled on any Windows 10 PC that doesn’t have driver incompatibility issues, but older computers will incur a significant performance penalty because their processors don’t support mode-based execution control, or MBEC.

And that acronym seems to be at the root of Windows 11’s CPU support list. If it supports MBEC, generally, it’s in. If it doesn’t, it’s out. MBEC support is only included in relatively new processors, starting with the Kaby Lake and Skylake-X architectures on Intel’s side, and the Zen 2 architecture on AMD’s side—this matches pretty closely, albeit not exactly, with the Windows 11 processor support lists.

Windows 11, by (almost) requiring MBEC, TPM 2.0, Secure Boot capability, and so forth, is in every way trying to get people used to a Pluton-lite experience, as “Pluton-y” as possible without actually having Pluton yet. Windows 11 is the “stepping stone” to Pluton, with security requirements to match. With Windows rumored to return to the 3-year version cycle with Windows 12 in ~2024, and with Microsoft clearly being less afraid of cutting off large swaths of old PCs, it would not shock me if Windows 12 makes Pluton a system requirement. Windows 10 for old systems, Windows 11 for systems not quite there, Windows 12 for the endgame.

Anything more I should know about what Pluton aims to do?

I’ve thrown out the acronyms for those interested in further reading, but what are the design goals behind those acronyms?

Microsoft originally developed Pluton for the Xbox, but also for the Azure Sphere. When they developed the Azure Sphere chip for secure IoT Devices, they designed it to be in compliance with their “Seven Properties of Highly Secure Devices“:

Microsoft also shares in that document how Pluton was integrated with this MediaTek IoT chip, which is probably pretty similar to how it is being integrated into Intel/AMD/Qualcomm chips:

It is not possible to perfectly implement Azure Sphere levels of security in a Windows PC. On Azure Sphere, a device becomes permanently locked to one manufacturer’s account and only runs one app, with absolutely no ability to boot alternative apps or operating systems. Microsoft will no doubt need to compromise Pluton for general-purpose computing, but by how much…

All put together, what are the effects on me when Pluton arrives?

  1. You will no longer be able to install Linux with Pluton enabled unless the Microsoft 3rd-party UEFI Certificate is enabled in your UEFI Firmware. See Microsoft’s 7 Principles #5. (Also see update below at bottom of post – this status is ambiguous as Pluton and Secured Core kind of overlap.)
  2. Pluton will integrate with Windows Update at least for system firmware, potentially allowing for some forms of drivers to be updated as well as potentially having downgrade prevention. This may partly be why Windows 11 has new driver requirements (DCH compliance). See Microsoft’s 7 Principles #3, #6, and possibly #7.
  3. With SHACK, Secret Keys will be able to be stored in hardware and be able to encrypt and decrypt material without the key ever being exposed to firmware or software. This allows for a potentially stronger BitLocker… or just plain old DRM. See Microsoft’s 7 Principles #2.
  4. DICE+RIoT is where the rubber really hits the road. In Microsoft’s documentation of what RIoT can do:

Meanwhile, DICE appears to share a lot of the same goals as RIoT:

DICE and RIoT appear to be different parts of the same solution: Providing a device-specific key and assertion capabilities. What’s more, consider when it is combined with SMM and DRTM:

Why is a DICE+RIoT+SMM+DRTM sandwich so potentially extremely dangerous? Imagine if you are a game developer who wants to prevent cheating. According to IEEE’s interpretation, it could be possible to use Pluton to irrefutably verify (aka “assert”) that:

  1. The device is running Windows,
  2. The device is up-to-date or recently updated,
  3. The device has not had Secure Boot disabled or tampered with

Part #3 is most important. When you combine #3, the ability to have the Pluton security processor assert that the device has booted with Secure Boot in accordance with Microsoft’s 7 Principles #5 using the sandwich, and a potential custom kernel module for anti-cheat, you have successfully proven cryptographically:

  1. The device has securely booted,
  2. Your kernel module has loaded,
  3. Your kernel module, and Windows itself, have absolutely not been tampered with in any way,
  4. Windows is up-to-date with all or most security features enabled,
  5. By having Hypervisor-powered Code Integrity through HVCI/MBEC, injecting code will be extremely difficult even if the code is flawed or contains exploits
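Once a report like that exists, the server-side gate is trivial to write. Here is a hypothetical sketch of the admission policy a game could enforce; every field name is invented for illustration, since no real Pluton attestation API is public:

```python
# Hypothetical server-side admission policy for a game lobby, assuming a
# Pluton-style attestation report arrives as a dict of verified claims.
# All field names are invented; no real Pluton API is public.

REQUIRED_CLAIMS = {
    "os": "Windows 11",
    "secure_boot": True,
    "kernel_tampered": False,
    "anticheat_driver_loaded": True,
    "hvci_enabled": True,
}

def admit(report):
    """Admit a client only if every attested claim matches the policy exactly."""
    return all(report.get(key) == value for key, value in REQUIRED_CLAIMS.items())

clean = {
    "os": "Windows 11",
    "secure_boot": True,
    "kernel_tampered": False,
    "anticheat_driver_loaded": True,
    "hvci_enabled": True,
}
print(admit(clean))                            # True
print(admit({**clean, "secure_boot": False}))  # False
print(admit({**clean, "os": "Linux"}))         # False
```

The hard part has never been this policy check; it has been making the claims in the report unforgeable, which is exactly what the hardware attestation chain provides.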

If you were a DRM designer, you would probably be drooling. No more workarounds or hacks – it’s an on-silicon, Xbox-proven solution, now on Windows. Microsoft, of course, doesn’t want to talk about that use case and says it will primarily be used by its Azure Attestation Service and to help businesses keep devices up-to-date and secure, but the road to Hell is paved with good intentions. As IEEE has noted, citing personal conversations with Microsoft engineers, any attestation service could use Pluton for its own ends, as DICE and RIoT are both open standards for this kind of thing. In fact, Microsoft’s use of open standards for assertions might make this more dangerous, in my opinion.

(Note: Please see later in post for edits regarding TPM capabilities)

Doomsday and “Fearmongering” Speculations Below – Objectively verified knowledge ends here

What is to prevent school WiFi from one day requiring a Pluton assertion that your Windows PC hasn’t been tampered with before you can join the network? As far as I can tell from the above specifications, nothing, assuming the school can provide a connection client app to run before connecting.

Microsoft’s other use of DICE+RIoT, in their own words, is to enable “Zero Trust Computing.” By giving every device the ability to have secret keys completely out of reach of the main processor (see 7 Security Principles #2 and #5), it is theoretically possible to create documents, messages, and other content that is completely unreadable except by a specific device using a key that cannot be extracted from that device.

Imagine thus, a different scenario from the game developer. Imagine a (maybe corrupt) government agency or business. In the not-too-distant future, the following could be possible with Pluton (with some custom app development to streamline everything together):

  • All devices in the network have Pluton and are enrolled in Azure.
  • Every time a document is created and added to the network, it is added with a Pluton certificate verifying who created the document. Anonymous documents are kept off the network.
  • Every user in the organization is in Azure through Active Directory, and has specific devices attached to their User. Their User is enrolled in specific groups, such as Accounting or Legal.
  • Documents are encrypted through Azure to be only readable on specific client devices using the device-specific public key.
  • Thus, employees can read approved documents, but only on authorized systems.

To put this together, imagine this hypothetical scenario. A user in Legal creates a document. When the user uploads it, Azure verifies it against Pluton both to verify the document as likely clean and to firmly establish who created it. When another user wants to download the document, Azure only provides a version encrypted with that user’s device-specific Pluton public key, and only if the user belongs to the right department; the document is thus readable only on that user’s authorized device.

  • These authorized systems could contain MDM (Mobile Device Management) measures that, thanks to Pluton with Secure Boot and physical attack protection, cannot be disabled. Code also cannot easily be injected, nor bypasses installed, due to the hypervisor. Pluton would also, in this situation, likely enforce BitLocker with an unknown unlock key.
  • The system is tamper-resistant and constantly updated, meaning that should a strict MDM policy be in place, extracting documents from a system without authorization could be potentially extraordinarily difficult to impossible.
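The “only readable on that device, in that state” property comes from sealing: deriving the document key from a hardware secret plus the boot measurement, so the key silently changes if the boot state changes. A toy Python sketch of that idea follows; the XOR “cipher” and all names are stand-ins to make the flow runnable, not a real implementation of anything Pluton does:

```python
import hashlib

def seal_key(device_secret, boot_measurement):
    """Derive the document key from a hardware secret plus the boot state."""
    return hashlib.sha256(device_secret + boot_measurement).digest()

def xor_crypt(key, data):
    """Throwaway XOR keystream 'cipher' so the flow is runnable; not real crypto."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

device_secret = b"fused-into-silicon"   # in real hardware, this never leaves the chip
good_boot = b"secure-boot-ok"

document = b"internal legal memo"
ciphertext = xor_crypt(seal_key(device_secret, good_boot), document)

# Same device, same boot state: the key re-derives and the document decrypts.
print(xor_crypt(seal_key(device_secret, good_boot), ciphertext) == document)
# Same device after tampering with boot: the derived key changes, decryption fails.
print(xor_crypt(seal_key(device_secret, b"secure-boot-off"), ciphertext) == document)
```

Because the secret never leaves the hardware, there is no key for an attacker (or the device’s owner) to extract, which is what makes the MDM lock-in described above so hard to escape.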

Now, Microsoft might look at the above and laugh it off as fearmongering, as it goes much further than what Pluton is being pitched as right now: a firmware security device to prevent malware. But how far away is it actually? If Pluton can verify that a system booted securely without tampering, encrypt and decrypt material with an unextractable key, and connect to Azure for firmware updates, how much harder is it to add a secure mode to Microsoft Office that pulls it all together? You can’t hack Microsoft Office’s read-only or other protection modes if your MDM blocks external apps, the hypervisor prevents code injection, you can’t touch Secure Boot without your keys being invalidated, and your document is encrypted with a key that only your device has and that cannot be extracted.

The road to Hell is paved with good intentions. It looks as though Microsoft’s Next-Gen Secure Computing Base / Palladium project never really died.

Updates

A Linux Kernel engineer on Hacker News (@mjg59) claims that “Secured Core” PCs and Pluton are not synonymous. This would mean that certain features mentioned in the “Secured Core” list would not be present on all systems with Pluton.

I am currently unable to verify this claim (or my original view that the two were inseparable), due to the lack of Pluton hardware in the wild, and because it appears that all (?) hardware with Pluton also implements Secured Core right now, though perhaps Pluton-without-Secured-Core systems will emerge. I am open to being wrong here, as the lack of information about Pluton means I am stumbling in the dark.

I also do not fully buy his argument yet, because he also argues that most of what Pluton can do could be implemented with just a TPM 2.0 chip. This may be true – however, it also leaves the actual purpose of Pluton unclear and possibly very redundant, and it doesn’t address what Microsoft means in its blog post by “chip-to-cloud” security that wasn’t possible before. If it wasn’t possible before and is now, what changed? Is Microsoft making a huge fuss over what amounts to a remotely updatable TPM 3.0?

The kernel engineer responded that Pluton is a more secure TPM because it is built into the silicon, and because Microsoft wanted a more secure, easily updatable security chip than the Intel ME / AMD PSP, which have had issues previously and are harder to update. This doesn’t make much sense to me – is it really hard to implement a separate TPM outside of the ME/PSP that just does TPM things and receives Windows Updates? Is it really easier and more trustworthy to add an entirely new security processor to the CPU die, and adjust that design for various process nodes, because Intel and AMD can’t implement secure interfaces of their own? I don’t buy it. Intel would probably have vastly preferred the former; it could have been marketed as a new vPro feature.

@mjg59 takes the view that Pluton is not (currently) a threat to user freedom, responding to my doomsday scenario:

(Edit: it’s been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you’re running. There’s various reasons I don’t think this is realistic – one is that there’s just way too much variability in measurements for it to be practical to write a policy that’s strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)

I do not agree with him on that (Hacker News considers it naive, and to me it screams “it could – but it won’t!”), but there is no reason he couldn’t be right. Read it as an optimistic scenario. It’s hard to be optimistic, though, when SafetyNet exists.

On that note, however, there are some accurate things he has stated that weren’t in the original story. This is not malice on my part (why would I have thought to consult his views?) but rather a lack of information:

  • The original version of this post assumed that Pluton is a de facto Secured-Core PC implementation. This is corrected: we don’t know whether this is the case, and currently have no proof either way. I personally doubt that Microsoft intends “Secured Core” to remain a separate tier forever rather than slowly mandating parts of it.
  • I misread the checkbox in the Microsoft documentation regarding SMM. SMM has been around forever, but Secured-Core PCs require SMM with Device Guard, whereas standard Windows does not – and Microsoft’s Secured-Core documentation simply leaves “SMM (with Device Guard)” unchecked in the requirements, which I misread as no SMM at all. Oops.
  • Apparently DICE and extensions are a… simpler way of doing the exact same things as a TPM? I can’t quite follow why anyone would want that. He did not have much explanation for what Pluton needed RIoT for and (it appears) initially did not believe RIoT was in Pluton, but then he said it is probably just for IoT scenarios. I’m a skeptic. Could PCs one day be managed like IoT?
  • I think the engineer’s main criticism is that I got some details about the TPM wrong (the TPM is more capable than I thought), and that much of what is stated above can already be accomplished with just a TPM but hasn’t been (and, in his view, probably never will be, though I disagree and believe Pluton makes it easier in the future). In which case my “fear and despair” section may actually have been underselling what Pluton specifically can do and what it means long-term.
  • Because of that, I think my new criticism of Pluton would be the following (which he has not responded to despite responding to others later multiple times):

At this point, even if a TPM can recreate much of Pluton’s functionality, I believe some fear of Pluton remains necessary and healthy, although I do not dispute that it may be useful for some purposes – after all, why was my speculation explicitly labeled “Doomsday and ‘Fearmongering’ Speculations”? Microsoft can still screw people over with a TPM, but Pluton is different from a TPM and should be regarded with caution where possible, and with more caution than a standard TPM.

This is mainly because, at this point,

A. A TPM’s level of access to a system and its capabilities are well known at this point. With Pluton, we do not know with certainty what all of its capabilities are.

B. Microsoft has explicitly stated that Pluton will have functionality added to it in the future through software updates – functionality not present yet, and most likely not downgradable. It’s not that Pluton might have things added later; Microsoft has said things will be added later. What these upgrades entail, or are capable of, is also unknown.

C. Because of the above, Pluton requires a previously-unknown level of trust in Microsoft, because Pluton almost certainly has anti-downgrade protections. Microsoft could, potentially, send out an update simply blocking Linux, and if Pluton received the update, it would be irreversible. Maybe this isn’t within Pluton’s abilities, but we just don’t know. The mere fact that Microsoft (or someone who hacks Microsoft – I’m more concerned about a rogue employee than Microsoft itself at the moment) could have permanent effects on the security of a system is worth paying attention to.

D. Because of the reasons above, Pluton should be regarded with extra skepticism: it is a magical black box, with unknown capabilities, and it is not clear whether it can actually be disabled. (A user on my blog describes how Pluton briefly boots and then disables itself if the UEFI says it should be disabled – it’s not that it never starts – so theoretically a Pluton update could ignore its own disable switch.) I don’t have verification of that, but until we know more… the TPM is known, a TPM can screw people over, and Pluton has the potential to screw people over far worse. While many of my doomsday speculations can actually be recreated with just a TPM if TPMs are widely adopted, perhaps they could be enhanced with more Pluton-specific ones. Perhaps my doomsday predictions actually didn’t go far enough.

Thus, your point that Pluton doesn’t add too much might be completely valid right now. That doesn’t mean Pluton isn’t also a potential Trojan horse that Microsoft updates as they please with new things that we didn’t expect or ask for with no ability to undo them.

Edit: Removed a previous edit, and adding that, to complement the above notes, it does not help instill confidence that Microsoft isn’t saying what Pluton can and cannot do at a hardware level. They’ve said a few things it can do right now, and said more is coming in the future, but they won’t talk about where its limits are. So… trust the black box, no questions asked. To be fair, this isn’t the first time (Intel ME, AMD PSP?), but it is unsettling to have another one.

A Beginner’s Guide to Blu-ray Player Firmware

The world is full of IoT (Internet of Things) devices, and they all run their own firmware – software that isn’t meant to be updated often, if ever. It’s often Linux-based, often insecure, and often a quickly-hacked-together mess with the goal to get it to work and then immediately ship, regardless of how maintainable or well-written the code behind it is.

I picked up some Blu-ray players from Goodwill, manufactured between 2010 and 2013 by Sony and LG, and was curious to see, a little bit, how they worked…

What is running behind the scenes here?

Now, I’m not going to attempt to truly “reverse-engineer” the firmware. I’m basically clueless at understanding disassembled ARM (let alone 32-bit ARM EABI 5). Also, there is going to be a point where the protections massively increase – after all, this is a Blu-ray player and keeping the decryption and copy-protection implementations secret is a high priority for the designers, at least in theory.

Blu-ray Copy Protection is not going to be explored much here. For a quick recap, there are two main technologies used for protecting Blu-ray Discs: AACS and BD+. BD+ is used on relatively few discs, while AACS is mandated on all pressed discs (and costs a 4 cent license fee per disc). AACS and BD+ together were expected to be resilient for about 10 years according to their designers when they launched in 2006, but in practice, the scheme was quite broken by 2008-2009. There was also the massive 09 F9 controversy in 2007, which goes to show that (in my opinion) DMCA Section 1201 is just flat-out unconstitutional and unworkable.

Constitutional or not, 1201 has been a disaster, encouraging the installation of DRM schemes everywhere while not succeeding in preventing the cracking of DRM, ultimately annoying the living daylights out of legitimate buyers while only slightly inconveniencing pirates. (Also, fun fact, BD+ is a big reason why movie studios supported Blu-ray, as both HD-DVD and Blu-ray had AACS. They backed Blu-ray for what ultimately turned out to be a disappointing protection measure that didn’t last long. I wish HD-DVD had won, just because it was a better, more self-explanatory name.)

I’m not going to specify the exact model of player whose firmware I downloaded, but I went and got a copy of some firmware for an LG Blu-ray player:

That… doesn’t tell us much. It’s just a giant “.ROM” file, what on earth could be inside?

Well, the answers will come from a tool called binwalk. It’s open-source, freely-available, and you can get it from Homebrew on macOS. It’s also a great entry-point for any firmware, as long as it is not encrypted or weirdly formatted. binwalk is excellent at breaking apart how a file is constructed, and if we run binwalk against the firmware, we see:

This actually tells us quite a bit about the system. At the beginning of the firmware, we see two entries for a Mediatek Bootloader. Mediatek is a Taiwanese chip design company that offers several chips designed exclusively for Blu-ray players, and is very popular with cheaper Android devices and, well, multiple Blu-ray manufacturers.
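This kind of identification works by scanning the blob for known magic bytes. Here is a toy Python sketch of what binwalk does under the hood – only a handful of sample signatures, nothing specific to this firmware, and no substitute for binwalk’s hundreds of real signatures:

```python
# Toy illustration of binwalk-style scanning: find known magic bytes in a blob.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\x1f\x8b\x08": "gzip compressed data",
    b"hsqs": "SquashFS filesystem (little-endian)",
    b"sqsh": "SquashFS filesystem (big-endian)",
}

def scan(blob: bytes):
    """Yield (offset, description) for every signature hit in the blob."""
    for magic, desc in SIGNATURES.items():
        start = 0
        while (off := blob.find(magic, start)) != -1:
            yield off, desc
            start = off + 1

# Synthetic "firmware": padding, a PNG header, more padding, a SquashFS magic.
fw = b"\x00" * 16 + b"\x89PNG\r\n\x1a\n" + b"\x00" * 8 + b"hsqs"
for offset, desc in sorted(scan(fw)):
    print(f"0x{offset:06X}  {desc}")
```

Real firmware scanning also has to cope with false positives (4 random bytes will occasionally match a short magic), which is why binwalk validates hits against fuller header structures.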

Next are two certificates in DER format – which is a little unfortunate. It means something is digitally signed. It’s not immediately clear what, and while there are ways to work around digital signatures, it is not easy. It is easier on these older systems, which have less-advanced hardware root-of-trust systems than, say, a modern iPhone, which is currently all but impregnable – but it does show there is some sort of protection against running arbitrary code at system startup.

Next, we see some CRC32s. These are checksums, likely to verify that certain parts of the image are not corrupt, maybe even by the software updater.
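A CRC32 is cheap to compute and verify, which is why it shows up in firmware images. Python’s zlib shows the idea – this is generic CRC32 usage, not necessarily the exact region layout this firmware uses:

```python
import zlib

# A CRC32 is a 32-bit checksum: an updater can recompute it over a region
# of the image and compare against the stored value to detect corruption.
region = b"pretend this is a firmware partition"
stored_crc = zlib.crc32(region)          # what the image would embed

# Later, after download/flash, verify the region is intact:
assert zlib.crc32(region) == stored_crc

# A single changed byte gives a different checksum:
corrupted = b"Pretend this is a firmware partition"
print(zlib.crc32(corrupted) != stored_crc)  # → True
```

Note that a CRC32 only detects accidental corruption; unlike the DER-signed sections, it offers no protection against deliberate modification, since anyone can recompute it.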

Below that is where things get actually interesting. Combined, we see a Linux 2.6.35 Operating System image, 2 file systems (one for recovery, one for playback?), 2 encrypted areas with an unknown algorithm (though binwalk could be misunderstanding them), and a PNG image.

The PNG image is, surprise… the boot screen.

Seems a little unnecessarily low-res at 720×480 for a Full HD 1080p Blu-ray player, but whatever.

Now, if we run binwalk again with an -e flag (and have certain other utilities for uncompressing SquashFS installed), it will actually extract what it can out of the firmware into a nice folder structure:

squashfs-root-0 is the much-smaller partition that, I believe, is used for only recovery or some factory setup, while squashfs-root is the interesting one.

But there’s more to the story than that. When you run the extraction:

When you look at the logs, there are actually a bunch of symbolic links to an encrypted mount point at /mnt/rootfs_enc_it which, as far as binwalk can tell, doesn’t exist, so it replaces them with links to /dev/null to avoid a security risk.

This is very interesting and is some of that copy-protection I mentioned earlier. If you look at the files that were replaced with /dev/null links, look at their names:

  • libaacs.so
  • libbdplus.so
  • ca-bundle.crt

The first two are obviously the libraries that implement the AACS and BD+ copy-protection schemes. CA-Bundle might be for a web browsing component, or it could maybe contain the device-specific key used for decrypting Blu-rays, which is a big deal to keep locked down and secret.

These files were symbolic links to a partition that doesn’t exist. Remember there are two encrypted (likely) file systems in the firmware with mcrypt, so it is likely the code for AACS and BD+ is in one of those encrypted blocks, and then is decrypted on boot and mounted into Linux so that they can be securely used without being transparent on a firmware dump.
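Dangling links like these are easy to enumerate yourself in an extracted tree. A small Python sketch (the directory name at the bottom is just the extraction folder from earlier):

```python
import os

def dangling_symlinks(root: str):
    """Walk a directory tree and list symlinks whose target doesn't exist."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # os.path.islink() is True for the link itself; os.path.exists()
            # follows the link, so it's False when the target is missing.
            if os.path.islink(path) and not os.path.exists(path):
                found.append((path, os.readlink(path)))
    return found

# Usage against a binwalk extraction directory:
for link, target in dangling_symlinks("squashfs-root"):
    print(f"{link} -> {target}")
```

(This only finds links binwalk left intact; links it already rewrote to /dev/null will resolve fine, so checking the extraction log as well is worthwhile.)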

If we observe those files in Finder, they are indeed links to nowhere:

Now, you might be wondering if the key to unlock the mcrypt areas containing those decryption files can be found in the firmware download, and then these files could be read. I doubt that because, let’s say I run a search for that /mnt/rootfs_enc_it folder:

Code referring to rootfs_enc_it occurs in three other files. If we look at them in a hex editor, they generally look like this:

It appears to be a map of what the internal partition structure will look like – all three files list the other partitions as well, not just that one partition or code to mount it.
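Hex-editor spelunking like this can also be scripted. A small Python sketch that prints a window of surrounding bytes around every occurrence of a marker string – the file name in the commented usage is hypothetical:

```python
def find_with_context(blob: bytes, needle: bytes, window: int = 16):
    """Yield (offset, surrounding bytes) for each occurrence of needle."""
    start = 0
    while (off := blob.find(needle, start)) != -1:
        lo = max(0, off - window)
        hi = off + len(needle) + window
        yield off, blob[lo:hi]
        start = off + 1

# Hypothetical usage against one of the three files:
# with open("boot_partition.bin", "rb") as f:
#     for off, ctx in find_with_context(f.read(), b"rootfs_enc_it"):
#         print(f"0x{off:08X}: {ctx!r}")

# Tiny demo on synthetic data:
demo = b"\x00\x01/mnt/rootfs_enc_it\x00\x02/mnt/other\x00"
for off, ctx in find_with_context(demo, b"rootfs_enc_it"):
    print(f"0x{off:08X}: {ctx!r}")
```

`grep -a -b` plus a hex dump does the same job; the point is just that the surrounding bytes, not the string itself, are what tell you how the marker is used.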

I suspect that this is code for the Mediatek Bootloader and boot system, which runs before the system starts Linux, though I could be wrong on that. It appears to contain instructions for where to put things when the Linux image starts (at least, that’s how it looks to me), and it mentions that there is an encrypted mount point there. Maybe the key is blended into the surrounding hex code, but I doubt the designers would have been that stupid.

Instead, I suspect that the key for unlocking the mcrypt areas containing the copy-protection and decryption code is a device key hidden inside the chip itself, possibly programmed in during manufacturing. With the key in its own silicon, the chip can decrypt that firmware area and mount it into Linux. It’s what I would do if I were building a copy-protection system – I wouldn’t make the key this easy to retrieve.

On the other hand, this does leave a fairly significant weakness: a system running Linux 2.6.35, with networking abilities, with that likely hardware-encrypted mount point mounted and unlocked. If one were to find a root vulnerability, I would assume it very possible to dump those protected files for disassembly.

I’m not going to go that far, at least not in this article. However, I would expect cracking a Linux 2.6.35 system to be fairly easy, considering the wide attack surface and over a decade of new exploits since.

Looking at what else is in the dump though, we’ve surprisingly got all our basic utilities:

A little surprising that BusyBox isn’t used, but this isn’t a low-memory system, so maybe it was easier this way. However, there is something suspicious about how many of them just so happen to be exactly 585 KB in size.

Running a file:

If I wanted to build a cross-compiler, that’s pretty important information.
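The information `file` reports lives in the first few bytes of the ELF header, and you can read it directly. A minimal Python sketch covering only the fields relevant here (class, endianness, machine):

```python
import struct

# Machine codes from the ELF specification (a small subset).
E_MACHINE = {0x03: "x86", 0x28: "ARM", 0x3E: "x86-64", 0xB7: "AArch64"}

def elf_summary(header: bytes) -> str:
    """Decode class, endianness, and machine from raw ELF header bytes."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bits = {1: "32-bit", 2: "64-bit"}[header[4]]      # EI_CLASS
    endian = {1: "LSB", 2: "MSB"}[header[5]]          # EI_DATA
    fmt = "<H" if header[5] == 1 else ">H"
    machine, = struct.unpack_from(fmt, header, 18)    # e_machine field
    return f"ELF {bits} {endian}, {E_MACHINE.get(machine, hex(machine))}"

# Synthetic 32-bit little-endian ARM header (first 20 bytes are enough here):
hdr = b"\x7fELF\x01\x01\x01\x00" + b"\x00" * 8 + b"\x02\x00\x28\x00"
print(elf_summary(hdr))  # → ELF 32-bit LSB, ARM
```

The EABI version that `file` prints comes from additional fields (e_flags and the .ARM.attributes section), which this sketch skips.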

Continuing a probe around the firmware:

A folder full of unencrypted wifi-related shell scripts. Written in English too. Fun.

This is more telling:

/lib is full of what appear to be standard Linux libraries and shims. directfb is mentioned at the top (and elsewhere in files not shown), indicating that this CPU does not actually have a GPU. Everything is software-rendered, except for the video stream, which is decoded using the embedded H.264 decode block.

This is pretty common – not licensing a GPU makes the device simpler, cheaper, and easier to produce. It also explains why there are so few animations in the user interfaces of most Blu-ray players, and why the few animations there are seem to run at 8 FPS.

Qt is also mentioned, and lower down it’s got a ton of libraries:

Qt and WebKit are a pretty predictable choice for something like this, but it’s cool to see.

In a /res folder (the only non-Linux standard root folder), there’s what appears to be images or binaries of images:

There’s also a folder with some Pulse-Code-Modulated Audio (think CD format) files for something called “fanfare”:

However, raw PCM is a difficult format to play if you don’t know the exact sample rate, mono/stereo, start position, and all those other factors. Trying it in Audacity produced nothing but static, but 100_fanfare.pcm is 2.5 MB in size and likely playable with the right settings.
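Guessing raw PCM parameters gets easier if you wrap each candidate combination in a WAV header and listen to the results. Python’s stdlib wave module can do the wrapping – the sample rates and channel counts below are guesses for illustration, not known values for this file:

```python
import io
import wave

def pcm_to_wav(pcm: bytes, rate: int, channels: int, sampwidth: int = 2) -> bytes:
    """Wrap raw PCM bytes in a WAV container with the given guessed parameters."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sampwidth)   # bytes per sample (2 = 16-bit)
        w.setframerate(rate)
        w.writeframes(pcm)
    return buf.getvalue()

# Hypothetical usage: emit one WAV per parameter guess, then listen for the
# one that isn't static.
# pcm = open("100_fanfare.pcm", "rb").read()
# for rate in (22050, 44100, 48000):
#     for ch in (1, 2):
#         with open(f"fanfare_{rate}_{ch}ch.wav", "wb") as out:
#             out.write(pcm_to_wav(pcm, rate, ch))

demo = pcm_to_wav(b"\x00\x00" * 100, rate=44100, channels=1)
print(demo[:4], demo[8:12])  # → b'RIFF' b'WAVE'
```

One more unknown this doesn’t solve: byte order. The wave module assumes little-endian 16-bit samples, so big-endian PCM would still come out as noise and needs a byte swap first.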

/usr/local/bin is where things get fun:

Meet bdpprog. It’s a massive, 18.6MB executable for everything. There’s no shell like bash or sh here (at least, not easily accessible on startup) – it just boots into bdpprog for everything as far as I can tell.

bdpprog is massive, most likely responsible for everything, and also appears to be derived from a Mediatek-written original. bdpprog also appears on Samsung, Oppo, Panasonic, and Sony Blu-ray players. It’s also what crashed and caused boot-looping when Samsung sent out a malformed XML file to some of their players a while back. As described there, on Samsung firmware (even though the player I am looking at is an LG device):

“After the crash, the main program, bdpprog, is terminated by the kernel,” said Gray. “Since bdpprog is the main program, its termination results in a reboot by init. Even less fortunately for Samsung, the code for parsing the logging policy XML file is hard-coded to run at every boot. The result is that the player is stuck in a permanent boot loop as has recently been experienced by thousands of users worldwide.”

Still though, if you are a Blu-ray player manufacturer, Mediatek has it all down for you. They’ve got this custom chip, extra security for the libraries that handle the copy protection with the encrypted folder, and a mostly-written Blu-ray player boilerplate you can apparently just tweak for your branding and features and ship out.

While this would seem ingenious… that is also why a lot of Blu-ray players (not just this one, all three of my Goodwill players as well) are stuck on Linux 2.6.35, and are likely vulnerable to the exact same vulnerabilities as discovered on other brands.

Scrolling down on the window, you can see this interesting bit:

Some code for the Vudu client, and for some reason a script to launch the client. (Why not launch it directly from bdpprog? 🤷🏻‍♂️). Note the commented-out #LD_PRELOAD=/lib/libSegFault.so. Here it’s been commented out on the latest software version from 2015, with good reason. In 2014, a security researcher took a look at some Blu-ray players, and found a very similar line in a similar file called browser.sh was not commented out and instead read:

export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so

Note the /mnt/sda1 there, and you’ll realize the stupidity of the mistake. /mnt/sda1 on this system is not the root filesystem – it’s the mount point for external USB Flash devices. So, just make a fake libSegFault.so, launch “Browser” (which was used for Vudu in earlier versions) and you’d have an easy root exploit. Whoops. Too bad he didn’t dump the decrypted /mnt/rootfs_enc_it, whatever those files said.

Not that it would be that hard, depending on how deep into this reverse-engineering I go (and on what’s legal, of course). This thing has network access with a stack that’s super old – probably a bunch of bugs there. These players have less-advanced hardware roots of trust, and region-free mod kits require flashing custom firmware, so there is doubtless a way to fool one into doing something stupid. Maybe there’s another USB exploit, or a bug in the media stack.

For now, this looks interesting:

Looks almost like a way to load apps from a USB stick. Another curiosity, I’m not sure what LG’s intentions with this code were:

Another possible stupid entry point: let’s connect to a non-HTTPS server for the NetCast App Store.

It appears, at least for now, the built-in NetCast App Store is the most obvious way in, with it appearing to allow loading Apps from a USB stick and downloading apps over what appears to be an unencrypted connection without a pinned certificate. All without any actual decompilation or reverse-engineering, just in plain English scripts.

That’s how far I’ll go for now. I’m not sure what the legalities are for going deeper beyond, well, just reorganizing information without decrypting or decompiling anything (which this is). It also is probably far more work with little benefit… but who knows.

This weekend, I had some fun and fixed a Blu-ray Player

Recently, I’ve come across an interesting conundrum: I’ve been paying for Disney+ which is $7.99/mo. and we’ve still been renting some other movies, but I’ve been wondering how long this would go on. We watch a lot of the same movies a lot, so if I just bought all the movies we watch… that would pay off on investment in like 2-3 years considering the cheap prices on eBay.

So I began acquiring movies several months ago, and I’ve been acquiring them only on Blu-ray where possible. DVD really looks 1996 on even a moderately-sized 4K TV. (Fun fact, we wouldn’t even get flat-screens until almost a decade after DVD was released. DVD was designed for the fat-thick-TV era.) We don’t have a 4K TV yet, but the next TV will almost certainly be one because that’s what they sell nowadays and I prefer my movies to look crispy, not muddy. A Blu-ray is 6 times sharper than a DVD, and on a 4K display, even though it is only 1080p instead of 4K, it still looks way better than the old 480p DVD.

So I’ve been acquiring Blu-rays, but we have some family friends who aren’t up to date on technology. They still have the 32″ screen in the basement, and they used to have a bigger screen upstairs until it shut down one day and began releasing toxic smoke (they will be replacing that one soon). They only have DVD players. I introduced them to streaming a few months ago by buying a Roku stick, and it was a revelation – up to that point, all their movie watching was DVDs they bought or borrowed from the library, and that was it.

My younger sister visits this family often, and they love sharing movies between us. But there’s a problem: All the movies I’ve bought are Blu-rays. They still have DVD players. So they can share movies with us, but half the movies we have (the ones I bought) can’t be shared with them. The only way out of this conundrum is to get them some Blu-ray players.

Except… that’s not easily solved. This family is… on the poorer side, where buying a cheap TV takes a month of budget planning. Splurging $100 on 2 Blu-ray players so we can share movies is really out of the question. And now my sister is “angry” (half irritation, half Gabriel what did you do?) that she can’t share the movies I bought with her friends. (She’s not actually angry; it’s just an annoying but slightly funny situation.) Though I guess the annoyance is greater considering I gave her a movie she liked on Blu-ray for Christmas that she can’t share, shame on me.

With this in mind, I began plotting how to fix this Blu-ray problem. The family would need 2 players to be comfortable, but saying that I spent $100 on Blu-ray players would strike them as an overly generous and strange Christmas gift – and that’s not a solution anyway when Christmas has passed and it’s February. I needed something cheaper. Something so cheap it wouldn’t even scream gift, as long as I told them how little money I spent.

There’s only one place for that: Thrift Shops. The machines that get donated there are often broken, but, well, maybe they could be fixed? If they could be fixed, then I could save money and raise my reputation in the family for being able to fix them (I say this for laughs, I really don’t care about reputation, I wouldn’t slouch if I did). It’s a win-win if I can find something fixable. I ran down to Goodwill and managed to get these 2 machines for $12 pre-tax:

2 machines, $12, no guarantees they work, no remotes, and no power cables. Just the bare machines, as-is, take it or leave it. I figured if just one of the machines actually worked, I’d call it a $12 expense and that would be that. More like $20 after buying a Universal remote from Walmart to control them. That’s now regular gift territory instead of being awkward even on Christmas.

There’s only one problem, and that’s that they are old

I knew they were old at first glance when I saw the colorful analog jacks on the back. Manufacturers stopped including them on Blu-ray players around 2013, partly because many people didn’t know they should use HDMI and ended up on the much lower-quality color jacks. This meant both players were at least nine years old when I bought them.

I went to work on the Sony one first (the other was LG). Booting it up brought a very, surprisingly, PS3/PS4-like UI:

I tried playing a DVD. Worked like a charm! Then a Blu-ray, total failure, couldn’t recognize the disc. I then, well, ripped apart the drive, removed the laser safety cover (don’t look at the laser or go anywhere near it if you do this!), disc shield, and got the board down to this:

Just looking at this, I knew it was an old player, because this is an extremely… overengineered design compared to modern Blu-ray players. That heatsink is almost as wide as my hand, and I have large hands. No modern Blu-ray player looks like this on the inside. It’s just… wow. I was after something in particular:

This is the read head, and you’ll notice two little eyes, one bigger and one smaller. The bigger one is a combination lens for the infrared laser (CDs) and red laser (DVDs). The smaller one is for the blue laser (Blu-ray). I grabbed a Q-Tip and a bunch of this stuff that smells funny:

I was working at a desk when I accidentally poured too much onto the Q-Tip and spilled a giant splash of IPA on my pants. Thankfully it is not harmful to humans and evaporates quickly (it was gone in like 5 minutes), but my pants might be super-flammable until I wash them. After reminding myself to take chemical safety more seriously, I rubbed the Q-Tip over both lenses. This removes dust from the lens, which can massively help discs play. (It’s depressing to think about how many disc players have been thrown out over the decades when they just needed a quick de-dusting.)

After that, some discs played… but rarely. I observed the laser on the disc (with eye protection, it’s a laser, not a toy!)…

The laser was constantly re-focusing, failing over and over to read the discs I put in, though once in a while it would lock on and start playing. I was disappointed because, having worked on Blu-ray players previously, the laser looked a bit dim to me. Thirteen years of age (it was built in 2009) had slowly worn the laser down to the point where, even with a perfectly clean lens and disc, it just wasn’t bright enough to consistently read discs. It’s also unclear where this player gets its laser voltage from, and I don’t want to be the person who overclocks a laser.

As a last-ditch effort, I put the drive in the top-secret Service Mode, and dug through the menus. Sadly nothing in Service mode was that useful, but this was interesting:

At 560 hours of Blu-ray playback and 5,828 hours of DVD playback (what a champion Red laser for the DVDs!), this thing is quite ready to be retired and is almost certainly not worth salvaging.

I might revisit that player, but in the meantime, I turned to the other one. It’s an LG; I opened it up and got this photo:

Despite being just a year newer, the design has simplified quite a bit. The control board is way smaller… and booting it up shows a simpler, duller, LG design:

So, I begin testing it. Same issue: DVDs work, Blu-ray is dead. This player is from 2010, so it’s 12 years old (1 year newer). I removed the laser shield and found this laser inside:

So, I grabbed the IPA, doused the laser lenses, and tried discs. They (mostly) worked! I called it a night on Saturday. Then Sunday came, and almost none of my discs would work. What gives? I thought.

Well… they worked, but not well. A disc would always play eventually, but I had to forcibly restart the machine, change the disc’s position in the drive, and fiddle with and beg it for 5-10 minutes before it would start.

Then, I noticed something… look closer at the image:

Extremely, extremely tiny adjustment dials labeled “D,” “C,” and “B” – clearly “D” for DVD (red laser), “C” for CD (infrared laser), and “B” for Blu-ray (blue laser). I didn’t find any documentation for this player online, but these are likely voltage adjustment dials to compensate for differences in laser power after manufacturing. Thus, if I turned the “B” screw just a little, I could increase the power going to the laser. (The Sony player didn’t have any screws like this; I went back and looked.)

I carefully went an eighth-turn to the left. No discs would play, and on careful observation, the laser appeared visibly dimmer. I put the screw back, then went an eighth-turn to the right. The laser was brighter and played any disc I threw at it – any. It was happy. I tweaked it a little more to be just powerful enough to play but no more (I don’t want to burn out the laser quicker), and sealed everything shut again.

All in all, this was a two-day project. The Sony player is almost certainly dead. The LG one, with lens cleaning, de-dusting, lots of tweaking, and laser power adjustment, now plays Blu-rays and DVDs like a charm despite being 12 years old. And now I have a working Blu-ray player for… $12, if you ignore that the family in question already has a universal remote. $12 and 2 days of time. Now I need to go to Goodwill again and find another fixable Blu-ray player…

The Lockdown Browser is not very good at locking down

I’m taking my second semester of classes at Inver Hills, and in my Chemistry class, we have this awful piece of software called the “Respondus Lockdown Browser.” Its job is to lock down the computer so you can’t use other programs, prevent copy-paste, and in theory prevent cheating.

I understand the motivation. Cheating is a scourge upon faculty and faithful students. But the methods Respondus uses (specifically the Monitor add-on) were, in my view, unacceptably invasive. Scan my Student ID or Driver’s License, even though a similar company had 440,000 students’ records breached? Freak out at me if I dare to stretch, move my head, or bury my face in my palms? The ways browsers like this are unfair, discriminatory, and invasive have been well documented in The New York Times, The Verge, and a particularly scathing article from the MIT Technology Review.

Software that monitors students during tests perpetuates inequality and violates their privacy.

– MIT Technology Review.

Even though I took the first exam with Respondus, I got more and more angry about it. In a particular moment of frustration, upon realizing Chem Exam II would use the same browser, I wrote a fairly angry email to my professor. My professor had earlier admitted the browser was “draconian,” but said it was needed to prevent cheating. In my view, I did not pay $1000+ for this class to potentially have my Student ID stolen, to have my face recordings potentially kept for up to 5 years and resold to third parties for AI training, or to support privacy-invasive technology. In my mind, I cheered for the 1,200 students of the University of Massachusetts who successfully protested to have Lockdown Browser banned.

My professor didn’t receive my email well, but I managed to get a deal through where he would monitor me over Zoom with some other students. I have yet to take that exam, but that’s much better than the risks this software has and my ethical qualms about using it.

Now, with a safe academically-honest way to take the test, I wondered how secure Lockdown Browser actually is. I’m a Computer Programmer by trade (90th percentile on AngelList!), but the Respondus TOS states that I’m not allowed to reverse-engineer, disassemble, modify, blah blah blah boilerplate EULA. So, without using any computer programming skill, I wondered how Lockdown Browser might be defeated.

The answer: in short, it’s bad. In less than 5 minutes, without using Google, I thought of a potential bypass, and in 5 more minutes I got it working. I can’t say they didn’t try, but if a 19-year-old can think of a way to beat your software in 5 minutes without using Google, that’s really, really bad.

That’s Google Chrome and Microsoft Word, open in a fully locked-down browser mode (just without a test loaded, but with all of the system lockdown functionality in effect). I’m not going to explain how I did it at all (partly because people have been sued for finding bugs in similar programs and reporting them), but for the computer programmers out there, this image gives a big hint as to what the flaw is. I’m also not going to explain how I took a screenshot when Respondus blocks screenshots. Also, Respondus doesn’t do screen recording, so this bypass is completely undetectable except for the student’s facial expressions, glare on glasses, or the sound of typing.

So, to anyone on my campus (or in the broader world) who thinks that Respondus is virtually cheat-proof on a system level but has problems with student privacy, that’s not the case. The software can absolutely be defeated, even without programming skill. What’s frustrating too is that, when I looked at my experience shown above, I could think of multiple different methods the programmers could have used to block what I just did. They just didn’t put together what I did as being possible.

So… if a 19-year-old manages to defeat a major corporation’s anti-cheat software in 10 minutes with a unique flaw, why are we giving them student IDs again?

I think this is enough to prove my point. For anyone out there who thinks that I cheated or am posting this in bad faith, I can only say that I took the exam completely honestly – and that I am posting this publicly because it removes the temptation to keep the problem secret and use it on all of my Chemistry exams, as awesome as my grades would be. 😉

Update: I passed that test just fine with my instructor watching over a Zoom call. I also held a meeting with my college about the security problems, which they acknowledged, but they claimed that because Respondus is under a state-wide contract, they couldn’t stop using it, and that it would probably be “secure enough” for most students. I can’t help but wonder, though, how many wealthier students’ parents would be interested in purchasing my methods… am I really the only one who has figured out bypasses?

Trying out AngelList’s new Assessments

I’m a 19-year-old, entirely self-taught programmer, and I’ve documented my experiences with Triplebyte’s quizzes to get a good estimate of my skill. On Triplebyte, I scored 60th-80th percentile as a Generalist Engineer and 80th-100th percentile among Entry-Level Generalist Engineers. Pretty good for having absolutely no academic training in Computer Science or programming!

Some time after I took the Triplebyte exam, I heard that AngelList had a new assessments system of their own. Upon creating an AngelList account, I found out AngelList had 5 different available assessments:

  • Frontend
  • Backend
  • Full-stack
  • Android
  • iOS

I decided to start with the Backend Quiz, because that’s what I feel I am strongest in. Unlike Triplebyte, using a search engine is allowed and encouraged. In AngelList’s view, it doesn’t matter as much if you don’t know a subject if you are good at Googling it and understanding what you found.

The results speak for themselves:

24/30. OK… except that the average is only 15/30, so I’m 90th-100th percentile from AngelList’s perspective for Backend. Awesome.

Next up, because I was encouraged by this result… Full-Stack. Full-Stack is a mashup of Backend (which I’m very good at) and Frontend (which I wasn’t sure how good I was at yet). The results:

18/30 is weaker… but it’s still higher than the 13 average, and is 80th-90th percentile. From AngelList’s perspective, I have every right to call myself a “Full-Stack developer.” How about Front-end? How good am I at that, from AngelList’s perspective?

Also 18/30, but the average is 15/30 instead of 13/30. Because of that, I’m 70th-80th percentile.

Overall, the AngelList assessments show I’m better at software engineering than I thought – or AngelList’s quizzes are easier than they should be. Either way, considering the percentiles, it’s an amazing feeling, and I’d heartily recommend that other software engineers try out the AngelList Assessments. The more engineers who take them, the more accurate the percentiles get.

Another Certification: Arduino Fundamentals

I was bored this Wednesday when I remembered that Arduino had launched their first certification, the “Arduino Fundamentals Certificate.” With my recent bid to become Co-President of the Inver Hills Engineering Club and my experience using Arduino since Christmas 2011, I didn’t think I would have much difficulty passing. Also, it was only $30, and perhaps my Engineering teacher would be interested. If not, at least I’d have another certificate on my wall.

I passed: 91/100. It wasn’t terribly difficult, but there were definitely a few things in there that forced me to think. In particular, what does digitalWrite(13, LOW) actually do? And of course, some circuit diagrams to interpret, which were pretty fun. Overall, it won’t get me a job, but it was fun to take, and the certificate never expires.

The Parent Portal project: Part One

First, some backstory: this summer, I tried to get a certain Triplebyte Externship, but it fell through, and I was left with either working for my dad’s masonry restoration and fireplace business or finding my own job.

It started with my dad deciding to sell his scaffolding, and lots of it, because of his decision to move wholly into the fireplace business. With over 60 frames, over 100 planks, and a whole set of tube and clamp, there was plenty to sell – and my dad agreed to give me a commission on the stuff I managed to sell.

With some cash in pocket (and frankly, it is amazing how many people are on Craigslist), I contacted Homeschool Connections to see if they had any work for me. I’ve done multiple freelance projects for them in the past, most of them not on this blog yet, and was curious to see what they wanted.

They did have work for me, and it was pretty simple: a Parent Portal. A place where parents could sign in and easily check their students’ grades by pulling data from the Moodle LMS… and that’s it. I made them a deal for 1 week of work, flat-rate, to build this portal.

A week later…

They were amazed. It had everything they wanted and more: Parents could sign in with just their email address, no passwords to remember. They could link unlimited students and click one button to see their scores in every course.

It was so good, that I managed to convince them to extend the contract. Four weeks, for a full-blown Course Registration system to replace GoSignMeUp. Automatic enrollment of purchased courses, a simplified UI, better search… a whole wishlist.

To put it simply, I worked on it for those four weeks, and then renewed it for another week and a half to add a few more features that weren’t part of the four week agreement.

It’s Monday of the last week, and here’s a taste of what I’ve got so far.

For technology, I’m using… surprise… PHP 7 with the Laravel 7 framework. The UI is built with Laravel Blade, except for the components which are made using Laravel Livewire. Livewire gives all of the UI components AJAX-style reloading, so the UI feels almost as fast as a React or Node application despite using a purely PHP backend.

I have to give credit where it is due: Livewire is astounding. I can do insanely simple stuff like this:

<a href="#" wire:click="myPHPFunction()">{{ $name }}</a>

And on the server:

// Livewire binds public properties straight into the Blade view
public $name = 'My Link';

public function myPHPFunction() {
    $this->name = "Something else";
}

And just like that, the name of my link will change on a click. If you are building a Laravel application, Livewire is by far the fastest way to get the user experience of AJAX without writing a single line of JS. Super cool.

I will delve into much more of the technical details in Part Two, coming soon.

Just passed the CompTIA A+ exams

A few days ago, I got an email from CompTIA offering a 20% off coupon on their A+ certification. I had been mulling it over for a while: the book has been on my shelf for, like, 3 years, but I just never thought it was worth the money to take the test. $226/exam, with 2 exams, means I would be in for $452 assuming I didn’t fail either exam. If I failed one, I would have to pay $678+tax, which I simply couldn’t justify.

I didn’t accept the email deal, but it woke me up to something: Through my college, I can purchase vouchers at a massively discounted rate. More than 54% off, actually. The $226 exam was only $103 with my college discount, and this reduced my total price to $206 assuming I didn’t fail either exam, which is less than the standard MSRP for just one voucher.

With a sudden influx of cash from my summer freelancing (a lot of money for an 18-year-old) and the discovery of the discount, I spontaneously decided to take the exams… 2 days later. I figured I wouldn’t need much practice, because I had been working with computers in my free time for years. I went into the 220-1001 exam after only an hour of review from an older 220-901 book by Mike Meyers and 4 of the 20 ExamCompass practice quizzes. Spontaneous and stupid? A little…

I entered the Core 1 exam and was shocked by the number of questions about printers. It felt like a third of the questions were printer-related, and I had not spent much time reading about printers. Walking into a test expecting questions about Windows 10 and instead being asked what causes faint colors in printouts, I began to fear I would fail.

Unexpectedly, despite the sheer number of printer questions, I passed: 683/900, with 675 required. Only 8 points above the minimum. Even though that hurts, a pass is a pass. Somehow encouraged, I immediately purchased another $103 voucher for 220-1002 (Core 2) and scheduled it for just 90 minutes after I finished Core 1.

So, after my barely-passed 12:00 online exam, I took Core 2 at 2:30. I passed that one by a much wider margin, and the questions were much closer to what I was expecting: 788/900, with a 700 minimum.

I’m posting this at 4:00 PM, and I haven’t received my certificate yet. But that’s OK – I will get it in a few days and will be really happy to add it to my website and profile.

I just don’t know what else to say. Taking a proctored exam online was nerve-racking at first, but I quickly got used to it and didn’t have any problems. If you have to take it online, well, that works perfectly well and is more convenient than driving to a testing center.

Next up: the Network+, which has had a year-old book sitting on my shelf…