Well, besides the obvious awful consequences for basically everything in every industry, I can think of some extremely low-cost, easily preventable technical consequences that would make rebuilding unnecessarily difficult:
How many people would have maps?
How many people would have survival information?
We had PCs before we had the internet. What happens when you can’t set up a PC without the internet?
Many platforms don’t support offline updates. What happens when you have a Switch game card for your desperate kids, but don’t have the update for the Switch?
How would education continue, if so many books and resources that went digital no longer exist – and the physical material that remains is now in great danger of theft?
Now… I will admit, what is the likelihood of such a scenario? Not very high… but here is what's more remarkable: we have successfully digitized so much knowledge, we now have the capacity to distribute that knowledge widely and make ourselves more resilient to outages, and we don't.
Consider my following proposals (very early, not set in stone, probably full of loopholes or other issues – they are just sketches, hopefully somewhat common-sense):
Every internet-connected device should be capable of being set up, and updated, without an internet connection, from stored offline files.
Devices should be capable of exporting their own newer firmware to an offline image, to update other devices on older firmware offline. If my PlayStation is on v37, and my friend is on v32, and my game requires v34, I should be able to help my friend update to v37 and play, especially because we’re going to need it during those difficult times.
App developers on closed ecosystems, such as the Apple App Store, should have the option to allow their apps to be installed offline. Apple can still certify the app to their standards, but if I’m the developer of an open-source application, I should have the option to let my users export my app to a signed file, stash it on a flash drive somewhere, and install it on random people’s iPhones in case of emergency. (I’m not making a point against the App Store here – the application would still have been signed by Apple at some point, and it could be double-checked if internet is available.)
Right now, people can self-certify up to $300 of charitable giving to the IRS without receipts. Why can't the government grant, say, a $20 tax credit for self-certifying that you are storing a full set of Project Gutenberg? Or for storing a database of emergency survival information with images? Or for storing full copies of OpenStreetMap for your state? Or for storing an offline copy of Wikipedia (~120GB)? If even 10 million people claimed all four credits ($80 each), it would cost up to $800 million – a pittance by government budget standards (and next to our $700+ billion national defense budget), but having the knowledge so widely distributed could make a ludicrous and disproportionate difference to the outcome. If people widely cheated and 100 million claimed to be doing it… is even $8 billion, with some fraud here and there, that big of a deal compared to our national defense budget and the benefits provided?
Emergency situations are unpredictable – that’s why every phone is legally required to support 911, even without a carrier plan. But we have smartphones now, so why aren’t we raising the bar? Would it really kill us to store a database of just written information on how to survive various situations on every phone? Why can’t I ask Siri, without an internet connection, how to do CPR? It would probably take 10MB at most… and save many lives.
Many films and TV shows are becoming streaming-exclusive, and as many fans are finding out, this is very dangerous for archival purposes. Just ask fans of "Final Space," who had the series completely erased from all platforms – even for people who purchased it – for accounting reasons. I wonder if the relationship between creators and fans should be reconsidered slightly. If you are a major corporation, and you get fans invested in a series, do you perhaps have a moral obligation to provide a copy of your content on physical media for those interested, so as to prevent a widespread loss of culture? (Also because all it takes is a few Amazon data centers blowing up for a ton of streaming-exclusive movies to no longer exist…) Perhaps this should be called a Cultural Security issue.
Recently, I had to set up a Windows 10 computer for one specific application in a semi-embedded use case. Anything else that Windows does or comes with is unnecessary for this. While there are plenty of internet scripts and apps for de-bloating Windows, I have found that the easiest (and little-known) way to de-bloat Windows without running any internet scripts is as follows:
Open PowerShell. (NOTE: I strongly recommend using a fresh Windows install, and trying this in a VM first to see if the method works for your use case.)
Type Get-AppxPackage | Remove-AppxPackage. (See note about Windows 11 below – this is for 10 only.)
Ignore any error messages about packages that can’t be removed, it’s fine.
This is my Start Menu, after installing my CAD software:
After running the command, you will have just the Windows folders, Microsoft Edge, and Settings. And that's literally it – no Microsoft Store, no apps, just Windows and a web browser. Also, even though the command sounds extreme, almost nothing in Windows actually breaks after you run it (Windows Search, Timeline, and Action Center all work fine). If you want to try it yourself, I'd advise giving it a go in a virtual machine first; it works shockingly well for my use case.
After that, if I want to further de-bloat a PC for an embedded use case, I use Edit Group Policy on Windows 10 Pro. It's a mess to navigate, but almost everything can be found there. Don't want Windows Search to use the internet? Or something niche, like disabling Windows Error Reporting? It's almost certainly in there.
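Every Group Policy toggle is also just a registry value under the hood, so it can be scripted. A minimal sketch of the web-search example, in PowerShell – the key path and DisableWebSearch value come from Microsoft's Windows Search policy documentation, but verify them against your Windows build before relying on this:

    # Policy equivalent of "Do not allow web search" in Group Policy
    $key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Windows Search"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name DisableWebSearch -Value 1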
Will this work for everyone? No, of course not, but it’s a great one-line, easily memorable tool for cleaning up a PC quickly for an industrial use case without any security risks caused by online scripts.
FAQs from Hacker News discussion:
Q. What about Windows 11?
A. Windows 11 is far, far more dependent on AppX than Windows 10, and will most likely grow even more dependent on it in the future. Windows 10, at this point, is unlikely to change in this regard. Running these instructions on Windows 11 is far more likely to leave you in a bag of hurt down the road than on Windows 10.
Q. What about .NET Frameworks, VCLibs, and some other important-sounding packages?
A. This will remove them, but despite their important-sounding names, they aren't as important as you may think. The .NET packages (in AppX form – not to be confused with the unpackaged "classic" .NET Frameworks) and VCLibs are, in my experience, primarily for Microsoft Store applications and Desktop Bridge applications (Win32 apps in a Store package), which, if you don't have the Store, probably won't affect you. (This may sound optimistic; I say "probably" because I can't try every application, but if Steam, FreeCAD, and Fusion 360 run without issue, you'll probably be fine.) Try it in a virtual machine or on an old computer first if this concerns you.
Q. Can I undo this?
A. Technically yes – reinstalling Windows is easier, but according to Microsoft's documentation you can restore everything with this command in an Administrator PowerShell window:

    Get-AppxPackage -allusers | foreach {Add-AppxPackage -register "$($_.InstallLocation)\appxmanifest.xml" -DisableDevelopmentMode}

I still recommend using a VM first just in case, and only on a fresh install. After running this reinstall command, get updates through the Microsoft Store and restart. This should work – in my testing it did, though the Weather app complained about Edge WebView2 being missing (but provided download links).
Q. But it might rip out XYZ which I need (e.g. Microsoft Store).
A. I recommend, in that case, using a VM first or an old computer to see if you actually need it.
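If it turns out you do need one specific package, a gentler middle ground is to filter the pipeline instead of wiping everything. A minimal sketch, assuming the Store's package names are Microsoft.WindowsStore and Microsoft.StorePurchaseApp – confirm with the first command before trusting my guess:

    # List package names first, so you know exactly what would be removed
    Get-AppxPackage | Select-Object Name

    # Remove everything except the Microsoft Store and its purchase app
    Get-AppxPackage |
        Where-Object { $_.Name -notmatch "WindowsStore|StorePurchaseApp" } |
        Remove-AppxPackage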
Q. Security risks?
A. Most likely not – if anything, the attack surface is lower than if you hadn't de-bloated. You will lose many libraries used primarily for running Windows Store apps (and the apps themselves), but Windows Update and Windows Defender are not affected by the command in any way I can discern. YMMV though.
Q. But de-bloating might damage Windows. (Also in this category, “this is stupid and could destroy your PC!”)
A. That's the risk we all take whenever de-bloating Windows in any way Microsoft doesn't sanction – it comes with the territory. But if you are still interested in de-bloating, I think it's good to have an option that doesn't need downloads. There may be downloadable options that are better, and any criticism (even valid criticism) of de-bloating applies just as much to those other programs and scripts as to this command. It can't be worse than businesses that go and run Windows 10 Ameliorated.
Also, use case should be considered. Consider mine: CNC and CAD. CNC software is stuck in the 90s for some machines, and if literally anything goes wrong, you can lose hundreds of dollars of material from a botched cutting job. Is it really so dumb to trade a little stability for the greater stability of less bloat, on a PC that will rarely if ever touch the internet (and that cost me $150, with all data storage on a separate dedicated NAS)? I think it's a fair trade. The last thing I need is the (normally non-removable) Windows Game Bar popping up over Mach3 CNC control software and blocking the Emergency Stop button. Your situation is almost certainly different.
Q. But what about the Chris Titus Tech debloater, or O&O AppBuster?
A. They're probably great solutions. The main appeal of this one is that it is memorable, can be used immediately, and requires no downloads. If you are OK with downloading scripts from the internet (I am, but not everyone is), there are great, more granular options out there. Because they require a download, I don't see them as directly comparable to this command (different use cases).
Q. But Windows 10 clearly wasn’t made to work this way!
A. Well… there's always Windows 10 LTSC, which is awfully close to this: very few AppX packages and no Microsoft Store. It's only sold to enterprise customers, though. You could call this command the closest thing to a poor man's LTSC-ifier for standard Windows 10.
I saw the report from Bloomberg (a questionable source) that Apple is getting ready to comply with the European Digital Markets Act, at last, by allowing sideloading among other things. However, this quote caught my eye:
If similar laws are passed in additional countries, Apple’s project could lay the groundwork for other regions, according to the people, who asked not to be identified because the work is private. But the company’s changes are designed initially to just go into effect in Europe.
I have one question: How?
This might seem like a dumb question, but consider the following:
GDPR applies to European citizens. Companies like Apple are bound by GDPR even if said citizen is currently physically located in the United States or another country (making it an extraterritorial law). If the DMA is similar in this way (which I currently cannot find a certain answer on), Apple would be required to allow sideloading outside the European Union if the user is an EU citizen (for example, if they flew to the US for a week). But how do you tell, without ID, whether a user is European? And vice versa: how do you tell that a US user didn't just fly to Europe for a week?
The DMA appears to be a retroactive law, applying to all iPhones that currently exist as part of the "platform" (i.e. anything currently supported). If so, there are no doubt phones in Europe that were purchased in the US. What happens to them? Let's say 5% are not what Apple would call European-sold phones. Is updating 95% of phones to comply, and not 100%, legally kosher? Or could Apple be sued for stepping on people's rights by not getting everyone?
The first point suggests a geolocation-based block would be ineffective and potentially illegal. The second point would seem to make a unique serial-number-based (or other point-of-sale-based) check equally illegal and ineffective. iPhones don't require an Apple ID, and the DMA has no exceptions for one, so the country on an Apple ID would not be usable either. It doesn't seem, to me, like Apple has many options for fully restricting sideloading to Europe without technically knowledgeable users being able to join in on the fun.
Thus my open question: Any thoughts how they’ll do it? Comment below.
Right to Repair: almost everyone supports it, and it will make our devices more repairable – but look closely, and the definition of what Right to Repair actually is and entails changes constantly depending on whom you talk to.
Note: This table is an oversimplification of their definitions of R2R and does not include all necessary nuances to make a point. I apologize for any errors. It is also possible I'm just splitting hairs, though I think some real differences do appear near the end of the article.
| | OEM parts for sale | Schematics, diagrams publicly available | Board-level parts | Repairable design requirements* | Aftermarket or 3rd-party parts OK |
|---|---|---|---|---|---|
| MKBHD | ✓ | ✗ Not addressed in definition of R2R. | ✓ Considered Rossmann's opinion below a "good take." | ✗ Considered Rossmann's opinion below a "good take." | ✗ Not addressed in definition of R2R. |
| Louis Rossmann | ✓ | ✓ Schematics and diagrams for board-level repair should be publicly available. | ✓ "Don't tell the company that made this part they can't sell it." [paraphrased] | ✗ "I don't want right to repair to push my personal preference for design on consumers." | ✓ If it's cheaper, and the customer chooses it, it should be OK. |
| iFixit | ✓ | ✓ "repair information like software tools and schematics should be available" | ✗ Not addressed. Primary focus is OEM parts – as evidenced by the iPhone 14 obtaining 7/10 repairability. | ✓ Repair score penalizes companies that make difficult-to-repair items: "companies block repair in all kinds of other sneaky ways. Sometimes they glue in batteries with industrial-strength adhesives…" | ✓ "legalize modifying your own property to suit your own purposes." (Also tests and resells batteries.) Obviously against counterfeiting, but explicitly against locking out 3rd-party parts. |
| Hugh Jeffreys | ✓ | ✗ Not addressed. | ✗ Not addressed. | ✓ Against digital locks (like the others); treats physical repairability requirements as an R2R issue. | ✗ "has become a big market for scammers." Not addressed in R2R definition. |
| Linus Tech Tips | ✓ | ✗ Calls for parts and "components," never mentions schematics. Seems to almost explicitly say this is not Right to Repair, as one should not be able to replicate a patented product. | ✓ "Right to access manufacturer components and resources to repair their devices when required." | ✗ Defined in one video as being "Beyond Right to Repair." | ✗ "So it sure is a good thing that no-one is calling for that either!" [False?] |
✓ = included in the public definition they give; ✗ = not mentioned in their definition (they may not be opposed – they just didn't mention it as an R2R issue)
Note that the table above only considers how they defined Right-to-Repair, even though all of them would almost certainly support the following in other capacities even if it wasn’t in their R2R definition:
“OEM Parts Publicly for Sale” means if the manufacturer sells batteries, screens, sensors – those should be available for sale to anyone.
“Schematics Publicly Available” means that the manufacturer should provide all circuit information about how a device is assembled.
Board-level parts refers to the sale of proprietary chips and other unique parts, down to the level at which they are indivisible (i.e. a specific power management chip, not a whole logic board).
Design Requirements refers to a manufacturer being forced to make easier-to-repair products, not just products that have parts available. Some call this Right to Repairable Design. (Neither camp considers digital locks acceptable, so an ✗ does not mean they are ignoring digital locks as a design requirement.)
Aftermarket Parts OK refers to whether Right to Repair should explicitly prevent companies from blocking aftermarket batteries, screens, and other 3rd-party parts – or whether such blocking is tolerable.
Now, the above table lacks nuance and is not perfect, I know, and I'm probably going to get corrections (that's fine – check back later for some; I'm not perfect at this). However, if I can boil the schisms down, they appear to be these:
R2R activists aren't sure whether manufacturers should simply be required to sell the OEM parts they themselves stock, or also be required to sell all proprietary individual components.
R2R activists don't agree on whether companies like Apple should be prevented from blocking 3rd-party batteries and the like, though they are unanimously against preventing the swapping or installation of OEM parts.
R2R activists don’t agree on whether (outside of digital locks) a manufacturer should be forced to make certain design decisions to make repairability easier.
Until we can unanimously define what Right to Repair actually entails, success is going to be hampered by confusion. Personally, I would argue the following:
Right to Repair should be the ability to obtain OEM parts and manuals, and to not have digital locks preventing repair without the manufacturer’s consent.
Right to Repairable Design should cover restrictions on part serialization (beyond what's genuinely needed to prevent counterfeiting), the use of Phillips or other common screws, restrictions on excessive adhesive, and so on.
Right to Advanced Repair should be the right to obtain schematics, proprietary information, and individual components for repairs the OEM's own repair policies would not even attempt (i.e. most OEMs don't do board-level repair) – basically, Right to Repair beyond what the OEM would do itself.
Splitting these issues up helps clarify exactly what each term means, and we should arguably fight for them all regardless, but without overlap. But that’s just my opinion on how to make things clearer.
In the past few months, there has been a surge of AI projects that can generate images and text:
Stable Diffusion
ChatGPT
GPT-3 and earlier
DALL-E 2 and previous
Midjourney
GitHub Copilot
These AI programs are amazing – but they were also trained on publicly available material, and the owners of that material almost certainly did not opt in to having it used for AI training; moreover, users have occasionally managed to get these services to output things very close to copyrighted material. This is almost certainly going to come up in future legal cases.
Putting the current US concept of fair use aside, I think that at this point, AI companies have a vested interest in doing everything they can to get these algorithms entrenched as an industry, because that may actually ensure their legal survival.
Consider a broader view of the US and technology:
VCRs upset movie studios tremendously, but were declared legal even though some people would abuse them to copy tapes. Time-shifting was declared officially legal by that decision, whereas before it had been legally grey – much like AI is now. However, there's another side to the story: according to the New York Times, approximately 1.2 million VCRs were sold in 1983 alone, and the decision came down in January 1984. (Basically) outlaw the industry? Nah.
Photoshop came out and allowed the manipulation of images in unprecedented ways. Users could also abuse Photoshop to make very… interesting… images of celebrities. Nonetheless, Adobe was never held liable for anything its users did with Photoshop.
CD drives allowed copying CDs, which had no DRM, and made it easy to share the ripped discs online. This did not ultimately make CD drives, CD ripping, online file sharing, BitTorrent, the internet, or any of the technologies involved illegal, despite all of them being abused for copyright infringement. Nor did it legalize internet censorship of DNS and packets to prevent copyright infringement, despite the MPAA's lawsuits and failed laws (SOPA/PIPA).
If there is a pattern here, I would summarize it as this:
US courts do not enjoy clamping down on any new technology, even if that technology can be, and is being, used in copyright-infringing ways.
Now, one could argue that none of these have much to do with AI, or with AI's occasional propensity to regurgitate material it was trained on. I think, however, that this is a "hindsight is 20/20" moment: it's obvious now, but it wasn't obvious then. Had CD ripping been declared illegal, had VCR recording lost its legality, or had SOPA/PIPA been enforced, our precedent for new technologies and copyright infringement would be very, very different.
Thus, in a weird way, it seems to my unlawyered mind that the more AI can entrench itself – becoming accepted, widespread, and diverse in function – the stronger its legal case becomes. If it were just GitHub Copilot, it might be banned. But will courts be interested in hurting Copilot, Midjourney, DALL-E, GPT-3, and the rest all at once? If previous technology/copyright conflicts are anything to go by, I think they would punt the question to Congress before they would dare change the status quo or declare that it isn't "fair use."
For those who do not know what remote attestation is:
Remote attestation lets an external system validate, cryptographically, certain properties about a device.
For example, proving to a remote system that Secure Boot is enabled on your Windows PC, with no ability to forge that proof. And by extension, potentially loading a kernel driver that can prove certain installed applications have not been tampered with.
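To make the moving parts concrete, here is a minimal sketch of the challenge–response flow in Python. Every helper here (read_pcrs, tpm_quote, tpm_ak_certificate, verify_cert_chain) is hypothetical – this is the shape of the protocol, not any vendor's real API:

    import os

    KNOWN_GOOD_PCRS = set()  # boot measurements the verifier will accept

    def server_challenge() -> bytes:
        # A fresh random nonce prevents replaying an old, valid quote.
        return os.urandom(32)

    def client_attest(nonce: bytes):
        # The device's TPM signs its boot measurements (PCRs: hashes of
        # firmware, bootloader, kernel) plus the nonce, using an
        # attestation key that never leaves the hardware.
        pcrs = read_pcrs()                  # hypothetical helper
        quote = tpm_quote(pcrs, nonce)      # hypothetical helper
        return quote, tpm_ak_certificate()  # hypothetical helper

    def server_verify(nonce: bytes, quote, ak_cert) -> bool:
        if not verify_cert_chain(ak_cert):  # hypothetical: key must chain
            return False                    # to a manufacturer root
        if quote.nonce != nonce:            # quote must be fresh
            return False
        # Measurements must match a known-good config, e.g. Secure Boot on
        return tuple(quote.pcrs) in KNOWN_GOOD_PCRS

The forgery resistance comes entirely from the last two properties: the signing key lives in silicon, and the measurements are taken before the OS has a chance to lie about them.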
TPM chips, introduced around 2004, were widely feared precisely because they enabled this capability, but until now they have been used primarily on corporate networks and for BitLocker drive encryption. Linux users and Richard Stallman feared them from the start, especially after Secure Boot was rolled out. Could a network require that users run up-to-date Windows, with Secure Boot on, and thus completely lock out Linux or anyone running Windows in a way Microsoft does not intend? With remote attestation, absolutely.
In practice, though, only corporate networks adopted remote attestation as a condition of joining, and only on their business PCs through the TPM chip (no BYOD here). TPMs have a ludicrous number of certificates that need trusting, many in different formats and algorithms (1,681 right now, to be exact), and almost everything that isn't a PC doesn't have a TPM. Because of that, building a remote attestation setup supporting a broad variety of devices was, and is, very difficult: easy for a business with a predictable single-platform fleet, almost impossibly complicated for a random assortment of general devices. And so the threat of the TPM, and of remote attestation generally, was dismissed as two-decade-old fearmongering that never became reality.
If only it had stayed that way. Remote attestation is coming back and is, in my opinion, a legitimate threat to user freedom once more – and almost nobody has noticed. Not even on Hacker News or in Linux circles like Phoronix, where such new technologies and changes are usually discussed.
Consider in the past few years:
Why is Microsoft building their own chip, the Pluton, into new Intel, AMD, and Qualcomm processors? Why does it matter so much to add a unified root of trust to the Windows PC?
Why does Windows 11 require a TPM 2.0 module?
Why has every PC since 2016 been mandated to have TPM 2.0 installed and enabled?
Why do so many apps on Android, from banking apps to McDonalds, now require SafetyNet checks to ensure your device hasn’t been rooted?
What’s with some new video games requiring TPM and Secure Boot on Windows 11?
Remember that remote attestation has been possible for decades, but was overly complicated, unsupported on many devices, and just not practical outside of corporate networks. But in the last few years, things have changed.
What was once a fraction of PCs with TPMs is now approaching 100%, because of the 2016 requirement change and the Windows 11 mandate. In ~5 more years, almost all consumer PCs will have a TPM installed.
macOS and iOS added attestation already with the DeviceCheck framework in iOS 11 / macOS 10.15. They don’t use a TPM but instead use the Secure Enclave from the T2 or M-series.
Google has had SafetyNet around for a while powered by ARM TrustZone, but is tightening the locks. Rooting your device invalidates SafetyNet, requiring complex workarounds that are gradually disappearing.
For the first time, remote attestation will no longer be a niche feature, present on some devices and not others. Within a few years, the number of devices supporting remote attestation in some form will quickly approach 100%, allowing it to jump for the first time from corporate networks into public ones. Remote attestation is a technology that doesn't make sense when only 70%, 80%, or 90% of devices have it – only when adoption passes 99% does deploying it make sense, and only then do its effects start to be felt.
We’re already seeing the first signs of remote attestation in our everyday lives.
macOS 13 and iOS 16 will use remote attestation to prove that you are a legitimate user, allowing you to bypass Cloudflare CAPTCHAs. How? By using remote attestation to cryptographically prove you are running iOS/macOS, without a jailbreak, on a valid device, with a digitally signed web browser.
Some video games are already requiring Secure Boot and TPM on Windows 11. According to public reports, they have not fully locked out users without these features: they still allow virtualized TPMs, Windows 10, and so forth. But they absolutely do not have to, and they can disable virtualized (untrusted) TPMs and booting without Secure Boot as soon as adoption of Windows 11 and TPMs is great enough. Once they shut that door, Windows 11 + Secure Boot + an unaltered kernel driver will be the only way to connect to online multiplayer, and it will be about as cryptographically secure against cheating as your PlayStation.
Cisco Meraki powers an insane number of corporate networks. Even in my own life, it was my school's WiFi, my library's WiFi, the McDonald's WiFi, even my grandparents' assisted-living WiFi. Cisco is also a member of the Trusted Computing Group that developed the original TPM and remote attestation to begin with. All they have to do, once adoption is great enough, is update their pre-existing "AnyConnect" app to require TPM/Pluton on Windows, DeviceCheck on iOS/macOS, and SafetyNet on Android/ChromeOS before you join the network. Anyone with an unlocked or rooted device need not apply.
I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.
What is more concerning is how many freedoms it will take in a more terrifying and less appreciated direction. For example, when I was in college, we had to jump through many, many hoops to connect to school WiFi: WPA2 Enterprise, a special private key, a custom client connection app. It wasn't fun, and even for me it was almost impossible without the IT desk. If remote attestation had been ready back then, they would absolutely have deployed it. Cloudflare has already shown that websites can use it to verify the humanity of a user and skip CAPTCHAs on macOS. What happens when Windows gains that ability? Linux users will be left out in the cold completely, as it is simply not practical to digitally approve every Linux distribution and kernel version, distribute a kernel module for them all, and then use that module to verify that the browser is signed, in all its variations, without leaving any holes.
Thus, for Linux users, it will start with having to complete CAPTCHAs that their Windows and Mac-using friends will not. But will it progress beyond that? Will websites mandate it more? On an extremely paranoid note, will our government or a large corporation require a driver’s license for the internet, with a digital attestation binding a device to your digital ID in an unfalsifiable way? Microsoft is already requiring a Microsoft Account for Windows 11, including the Pro version. Will a grand cyberattack send deployment of this technology everywhere, and lock out Linux and rooted/jailbroken/Secure-Boot-disabled devices from most of the internet? Will you be able to use a de-Googled phone without being swarmed with CAPTCHAs and having countless apps deny access?
This is a major change of philosophy from the copy protection and DRM systems of yesteryear. Old copy protection systems tried to control what your PC could do, and were always defeated. Remote attestation by itself permits your PC to do almost anything you want, but ensures your PC can’t talk to any services requiring attestation if they don’t like what your PC is doing or not doing. This wouldn’t have hurt nearly as much back in 2003 as it does now. What if Disney+ decides you can’t watch movies without Secure Boot on? With remote attestation, they could.
I think I’ll end with a reference to Palladium again, Microsoft’s failed first attempt at a security chip from ~2003, cancelled from backlash. It had an architecture that looked like this:
Now compare that diagram with Microsoft’s own FASR (Firmware Attack Surface Reduction). FASR is a “Secured Core” PC technology that is not mandatory yet and not necessarily part of Pluton, but very likely will be required in the future.
All they did was flip the sides around, use a hypervisor instead of separate hardware abstraction layers, and rename NEXUS to "Secure Kernel." Otherwise it is almost exactly the same diagram as the one from 2003 that was cancelled after backlash. They just waited ~20 years to try again and updated the terminology. (Also of note is the word "Trustlet," plagiarized from ARM TrustZone, which powers Android's SafetyNet remote attestation system.)
In upcoming Intel, Qualcomm, and AMD processors, there is going to be a new chip, built into the CPU/SoC silicon die and co-developed by Microsoft and AMD, called Pluton. Originally developed for the Xbox One and the Azure Sphere, Pluton is a new security (cynical reader: DRM) chip that will soon be included in all new Windows PCs, and is already shipping in mobile Ryzen 6000 chips.
This new chip was announced by Microsoft in 2020; however, details of what it is actually capable of, and what it actually means for the Windows ecosystem, were kept frustratingly vague. Now, with Pluton rolling out in some AMD chips, it is possible to put together a cohesive story of what Pluton can do from several disparate sources.
Because Microsoft’s details are sparse, this article will attempt to summarize all that we now know regarding Pluton. It may contain inaccuracies or speculation, but any potential inaccuracy or speculation will be called out where possible. If there are inaccuracies that result in more, better information being found, so be it.
What’s inside Pluton?
Pluton encompasses several functions. I'll throw out the acronyms first, and explain some of their meanings and effects later in the article:
A full TPM 2.0 implementation (a specification developed by the Trusted Computing Group, TCG)
DICE (Device Identifier Composition Engine) implementation, also designed by TCG
Robust Internet of Things (RIoT) specification compliance, a specification developed by Microsoft and announced with almost no fanfare all the way back in 2016
However, beyond these functions, Pluton implements the full breadth of security improvements that Microsoft previously reserved for Windows 10 "Secured-Core" PCs. A Pluton system is a superset of the Secured-Core PC specification, which previously applied only to select systems. A Secured-Core PC requires the following additional measures that were not previously required of a standard PC:
Dynamic Root of Trust for Measurement (DRTM)
System Management Mode (SMM) protections with Device Guard (edit: regular computers have long had SMM itself)
Memory Access Protection (Kernel DMA Protection, which guards against DMA attacks like Thunderspy)
Hypervisor Code Integrity (HVCI)
Edit: See update at the bottom of post, the overlap of Secure Core and Pluton is currently somewhat unclear.
It is important to note that Pluton is very much like the Secure Enclave or TrustZone systems on macOS/iOS/Android systems, with a full (secure) CPU core, its own small onboard RAM, ROM, RNG, fuse bank, and so forth. For (obvious) security reasons, Pluton only boots officially-signed Microsoft firmware and carries anti-downgrade protections inherited from the Xbox. On non-Windows systems like Linux, Pluton quietly degrades into only a generic TPM 2.0 implementation.
A lot of acronyms, but what is the big picture?
In a nutshell, Microsoft believes it needs to exercise more control over PC security than before. This came to a head with Windows 11, which infamously required 8th-gen or newer CPUs, TPM 2.0, and Secure Boot capability. At the time, there was (and still is) much concern about the almost arbitrary nature of those requirements.
However, while Microsoft was terrible at explaining why certain CPUs made the cut and others didn't (why no Zen 1?), Ars Technica noticed a pattern:
Windows 11 (and also Windows 10!) uses virtualization-based security, or VBS, to isolate parts of system memory from the rest of the system. VBS includes an optional feature called “memory integrity.” That’s the more user-friendly name for something called Hypervisor-protected code integrity, or HVCI. HVCI can be enabled on any Windows 10 PC that doesn’t have driver incompatibility issues, but older computers will incur a significant performance penalty because their processors don’t support mode-based execution control, or MBEC.
And that acronym seems to be at the root of Windows 11’s CPU support list. If it supports MBEC, generally, it’s in. If it doesn’t, it’s out. MBEC support is only included in relatively new processors, starting with the Kaby Lake and Skylake-X architectures on Intel’s side, and the Zen 2 architecture on AMD’s side—this matches pretty closely, albeit not exactly, with the Windows 11 processor support lists.
Windows 11, by (almost) requiring MBEC, TPM 2.0, Secure Boot capability, and so forth, is in every way trying to get people used to a Pluton-lite experience – as "Pluton-y" as possible without actually having Pluton yet. Windows 11 is the stepping stone to Pluton, with security requirements to match. With Windows rumored to return to a 3-year release cycle with Windows 12 in ~2024, and with Microsoft clearly less afraid of cutting off large swaths of old PCs, it would not shock me if Windows 12 made Pluton a system requirement. Windows 10 for old systems, Windows 11 for systems not quite there, Windows 12 for the endgame.
Anything more I should know about what Pluton aims to do?
I’ve thrown out the acronyms for those interested in further reading, but what are the design goals behind those acronyms?
Microsoft originally developed Pluton for the Xbox, but also for the Azure Sphere. When they developed the Azure Sphere chip for secure IoT devices, they designed it to comply with their "Seven Properties of Highly Secure Devices":
Microsoft also shares in that document how Pluton was integrated into this MediaTek IoT chip, which probably looks pretty similar to how it is being integrated into Intel/AMD/Qualcomm silicon:
It is not entirely possible to implement Azure Sphere levels of security in a Windows PC. On Azure Sphere, a device becomes permanently locked to one manufacturer's account and runs only one app, with absolutely no ability to boot alternative apps or operating systems. Microsoft will no doubt need to compromise Pluton for general-purpose computing – but by how much…
All put together, what are the effects on me when Pluton arrives?
You will no longer be able to install Linux with Pluton enabled unless the Microsoft 3rd-party UEFI Certificate is enabled in your UEFI Firmware. See Microsoft’s 7 Principles #5. (Also see update below at bottom of post – this status is ambiguous as Pluton and Secured Core kind of overlap.)
Pluton will integrate with Windows Update, at least for system firmware, potentially allowing some forms of drivers to be updated as well, along with downgrade prevention. This may be part of why Windows 11 has new driver requirements (DCH compliance). See Microsoft's 7 Principles #3, #6, and possibly #7.
With SHACK, secret keys can be stored in hardware and used to encrypt and decrypt material without the key ever being exposed to firmware or software. This allows for a potentially stronger BitLocker… or just plain old DRM. See Microsoft's 7 Principles #2.
DICE and RIoT appear to be different parts of the same solution: Providing a device-specific key and assertion capabilities. What’s more, consider when it is combined with SMM and DRTM:
Why is a DICE+RIoT+SMM+DRTM sandwich potentially so dangerous? Imagine you are a game developer who wants to prevent cheating. According to IEEE's interpretation, it could be possible to use Pluton to irrefutably verify (aka "assert") that:
The device is running Windows,
The device is up-to-date or recently updated,
The device has not had Secure Boot disabled or tampered with
Point #3 is the most important. Combine #3 – the ability to have the Pluton security processor assert, via the sandwich above, that the device booted with Secure Boot in accordance with Microsoft's 7 Principles #5 – with a custom anti-cheat kernel module, and you have cryptographically proven that:
The device has securely booted,
Your kernel module has loaded,
Your kernel module, and Windows itself, have absolutely not been tampered with in any way,
Windows is up-to-date with all or most security features enabled,
By having Hypervisor-powered Code Integrity through HVCI/MBEC, injecting code will be extremely difficult even if the code is flawed or contains exploits
If you were a DRM designer, you would probably be drooling. No more workarounds or hacks – it's an on-silicon, Xbox-proven solution, now on Windows. Microsoft, of course, doesn't want to talk about that use case, and says Pluton will primarily be used by its Azure Attestation Service and to help businesses keep devices up to date and secure – but the road to Hell is paved with good intentions. As IEEE has noted, citing personal conversations with Microsoft engineers, any attestation service could use Pluton for its own ends, as DICE and RIoT are both open standards for exactly this kind of thing. In fact, Microsoft's use of open standards for assertions might make this more dangerous, in my opinion.
(Note: Please see later in post for edits regarding TPM capabilities)
Doomsday and “Fearmongering” Speculations Below – Objectively verified knowledge ends here
What is to prevent school WiFi from one day requiring a Pluton assertion that your Windows PC hasn't been tampered with before you can join the network? As far as I can tell from the specifications above, nothing – assuming the school can distribute a connection client app before you connect.
Microsoft’s other use of DICE+RIoT, in their own words, is to enable “Zero Trust Computing.” By giving every device the ability to have secret keys completely out of reach of the main processor (see 7 Security Principles #2 and #5), it is theoretically possible to create documents, messages, and other content that is completely unreadable except by a specific device using a key that cannot be extracted from that device.
Imagine thus, a different scenario from the game developer. Imagine a (maybe corrupt) government agency or business. In the not-too-distant future, the following could be possible with Pluton (with some custom app development to streamline everything together):
All devices in the network have Pluton and are enrolled in Azure.
Every time a document is created and added to the network, it is added with a Pluton certificate verifying who created the document. Anonymous documents are kept off the network.
Every user in the organization is in Azure through Active Directory, and has specific devices attached to their User. Their User is enrolled in specific groups, such as Accounting or Legal.
Documents are encrypted through Azure to be only readable on specific client devices using the device-specific public key.
Thus, employees can read approved documents, but only on authorized systems.
To put this together, imagine a hypothetical scenario. A user in Legal creates a document. When the user uploads it, Azure checks it against Pluton, both to verify the document as likely clean and to firmly establish who created it. When another user wants to download the document, Azure provides a version encrypted with that user's Pluton public key – and only if the user belongs to the right department – making it readable only on that user's device.
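A minimal sketch of the hybrid-encryption pattern underneath that scenario, in Python with the cryptography package. The RSA device key here is purely illustrative – Pluton's actual key types and APIs are not public:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_for_device(document: bytes, device_public_key):
        # Bulk-encrypt with a fresh AES key, then wrap that key with the
        # device's public key. Only the hardware holding the private key
        # (which it never releases) can unwrap and read the document.
        aes_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, document, None)
        wrapped_key = device_public_key.encrypt(
            aes_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

The point of the attestation layer is what surrounds this sketch: the server only hands out the encrypted blob after the device proves, cryptographically, that its MDM and boot state are intact.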
These authorized systems could contain MDM (Mobile Device Management) measures that, thanks to Pluton with Secure Boot and physical attack protection, cannot be disabled. It also cannot easily have code injected or bypasses installed due to the Hypervisor. Pluton will also, in this situation, likely enforce BitLocker with an unknown unlock key.
The system is tamper-resistant and constantly updated, meaning that should a strict MDM policy be in place, extracting documents from a system without authorization could be potentially extraordinarily difficult to impossible.
Now, Microsoft might look at the above and laugh it off as fearmongering, since it goes much further than what Pluton is being pitched as right now: a firmware security device to prevent malware. But how far away is it, actually? If Pluton can verify a system as securely booted without tampering, encrypt and decrypt material with an unextractable key, and connect to Azure for firmware updates, how much harder is it to add a secure mode to Microsoft Office that pulls it all together? You can't hack Microsoft Office's read-only or other protection modes if your MDM blocks external apps, the hypervisor prevents code injection, you can't touch Secure Boot without your keys getting invalidated, and your document is encrypted with a key only your device has and cannot give up.
A Linux Kernel engineer on Hacker News (@mjg59) claims that “Secured Core” PCs and Pluton are not synonymous. This would mean that certain features mentioned in the “Secured Core” list would not be present on all systems with Pluton.
I am currently unable to verify this claim (or my original view that the two were inseparable), due to the lack of hardware with Pluton out there, and also because it appears that (all?) hardware with Pluton also implements Secured Core right now, but perhaps Pluton w/o Secured Core systems will emerge. I am open to being incorrect on this though – as the lack of information regarding Pluton means I am stumbling in the dark for information.
I also do not fully buy his argument yet, because he also argues that most of what Pluton can do could be implemented with just a TPM 2.0 chip. This may be true – but it leaves the actual purpose of Pluton unclear and possibly very redundant, and it doesn't address what Microsoft means in its blog post by "chip-to-cloud" security that wasn't possible before. If it wasn't possible before, and is now, what changed if nothing changed? Is this Microsoft making a huge fuss over what amounts to a remotely updatable TPM 3.0?
The kernel engineer responded that Pluton is a more secure TPM, because it is built into the silicon, and because Microsoft wanted a more secure, more easily updatable security chip than the Intel ME / AMD PSP, which have had issues previously and are harder to update. This doesn't make much sense to me – is it really hard to implement a separate TPM outside of the ME/PSP that just does TPM things and gets Windows Updates? Is it really easier and more trustworthy to add an entire new security processor to the CPU die and adapt that design to various node sizes, just because Intel and AMD can't implement secure interfaces of their own? I don't buy it. Intel would probably have vastly preferred the simpler option; it could have been marketed as a new vPro feature.
(Edit: it’s been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you’re running. There’s various reasons I don’t think this is realistic – one is that there’s just way too much variability in measurements for it to be practical to write a policy that’s strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)
I do not agree with him on that – Hacker News considers it naive, and to me it screams "it could – but it won't!" – but there is no reason he couldn't be right. Read it as an optimistic scenario. It's hard to be optimistic, though, when SafetyNet exists.
On that note, however, he has stated some accurate things that weren't in the original story. This is not malice on my part (if it were, why would I update this blog to refer to his views?) but rather a lack of information:
The original version of this post assumed that Pluton is a de facto Secured-Core PC implementation. Correction: we don't know whether this is the case, and we currently have no proof either way. I personally doubt that Microsoft intends "Secured Core" to remain a separate tier forever without slowly mandating parts of it.
I misread a checkbox in Microsoft's documentation regarding SMM. SMM has been around forever, but Secured-Core PCs require SMM with Device Guard whereas standard Windows does not – Microsoft's Secured-Core documentation shows SMM (with Device Guard) unchecked for standard PCs, which I misread as no SMM at all. Oops.
Apparently DICE and its extensions are a… simpler way of doing the exact same things as a TPM? I can't quite follow why anyone would want that. He did not have much explanation for what Pluton needs RIoT for, and (it appears) initially did not believe RIoT was in Pluton, but then said it is probably just for IoT scenarios. I'm a skeptic. Could PCs one day be managed like IoT devices?
I think the engineer's main criticism is that I got some details regarding the TPM wrong (the TPM is more capable than I thought), and that much of what is stated above can already be accomplished with just a TPM but hasn't been (and, in his view, probably never will be – though I disagree and believe Pluton makes it easier in the future). In which case my "fear and despair" section may actually have been underselling what Pluton specifically can do and what it means long-term.
Because of that, my updated criticism of Pluton is the following (which he has not responded to, despite responding to others later multiple times):
At this point, even if a TPM can recreate much of Pluton's functionality, I still believe some fear regarding Pluton is necessary and healthy – though I do not dispute that it may be useful for some purposes; after all, my fearmongering section was explicitly labeled "Doomsday and Fearmongering Speculations." Microsoft can still screw people over, and Pluton is different from a TPM: it should be regarded with caution where possible, and with more caution than a standard TPM.
This is mainly because, at this point,
A. A TPM's level of access to a system, and its capabilities, are well known at this point. With Pluton, we do not know with certainty what all of its capabilities are.
B. Microsoft has explicitly stated that Pluton will have functionality added to it in the future through software updates, most likely updates that cannot be downgraded. It's not that Pluton might have things added later – Microsoft has said things will be added later. What those upgrades entail or are capable of is also unknown.
C. Because of the above, Pluton requires a previously unknown level of trust in Microsoft, since Pluton almost certainly has anti-downgrade protections. Microsoft could, potentially, push an update that simply blocks Linux, and once Pluton received it, it would be irreversible. Maybe this isn't within Pluton's abilities – we just don't know. The mere fact that Microsoft (or someone who hacks Microsoft – right now I'm more concerned about a rogue employee than about Microsoft itself) could have permanent effects on the security of a system is worth paying attention to.
D. For the reasons above, Pluton should be regarded with extra skepticism: it is a magical black box, with unknown capabilities, and it is not clear whether it can actually be disabled. (A commenter on my blog already describes how Pluton briefly boots and then disables itself if the UEFI says it should be disabled – it's not that it never starts – so a Pluton update could theoretically ignore its own disable switch.) I don't have verification of that, but until we know more: a TPM is known, a TPM can screw people, and Pluton has the potential to screw people over far more. While many of my doomsday speculations could be recreated with just a TPM if TPMs were widely adopted, Pluton could enhance them with more Pluton-specific ones. Perhaps my doomsday predictions actually didn't go far enough.
Thus, your point that Pluton doesn't add too much might be completely valid right now. That doesn't mean Pluton isn't also a potential Trojan horse that Microsoft can update as it pleases, with new capabilities we didn't expect or ask for and no ability to undo them.
Edit: Removed a previous edit, and adding, to complement the notes above: it does not help instill confidence that Microsoft isn't saying what Pluton can and cannot do at a hardware level. They've described a few things it can do right now and said more will come later, but they won't talk about where its limits are. So… trust the black box, no questions asked. To be fair, this isn't the first such box (Intel ME, AMD PSP?), but it is unsettling to have another one.
The world is full of IoT (Internet of Things) devices, and they all run their own firmware – software that isn’t meant to be updated often, if ever. It’s often Linux-based, often insecure, and often a quickly-hacked-together mess with the goal to get it to work and then immediately ship, regardless of how maintainable or well-written the code behind it is.
I picked up some Blu-ray players at Goodwill, manufactured between 2010 and 2013 by Sony and LG, and was curious to see, a little bit, how they worked…
Now, I’m not going to attempt to truly “reverse-engineer” the firmware. I’m basically clueless at understanding disassembled ARM (let alone 32-bit ARM EABI 5). Also, there is going to be a point where the protections massively increase – after all, this is a Blu-ray player and keeping the decryption and copy-protection implementations secret is a high priority for the designers, at least in theory.
Blu-ray copy protection is not going to be explored much here. For a quick recap: there are two main technologies for protecting Blu-ray Discs, AACS and BD+. BD+ is used on relatively few discs, while AACS is mandated on all pressed discs (and costs a 4-cent license fee per disc). Together, AACS and BD+ were expected by their designers to stay resilient for about 10 years when they launched in 2006; in practice, the scheme was quite broken by 2008-2009. There was also the massive 09 F9 controversy in 2007, which goes to show that (in my opinion) DMCA Section 1201 is just flat-out unconstitutional and unworkable.
Constitutional or not, Section 1201 has been a disaster, encouraging the installation of DRM schemes everywhere while not actually preventing DRM from being cracked – annoying the living daylights out of legitimate buyers while only slightly inconveniencing pirates. (Also, fun fact: BD+ is a big reason why movie studios supported Blu-ray, as both HD-DVD and Blu-ray had AACS. They backed Blu-ray for what ultimately turned out to be a disappointing protection measure that didn't last long. I wish HD-DVD had won, just because it was a better, more self-explanatory name.)
I'm not going to specify the exact model, but I went and got a copy of some firmware for an LG Blu-ray player:
That… doesn’t tell us much. It’s just a giant “.ROM” file, what on earth could be inside?
Well, the answers come from a tool called binwalk. It's open-source, freely available, and you can get it from Homebrew on macOS. It's also a great entry point for any firmware, as long as the image isn't encrypted or weirdly formatted. binwalk excels at breaking apart how a file is constructed, and if we run it against the firmware, we see:
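For reference, the invocation is trivial (firmware.ROM is a placeholder name, since I'm not identifying the exact model):

    brew install binwalk     # macOS via Homebrew; most Linux distros package it too
    binwalk firmware.ROM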
This actually tells us quite a bit about the system. At the beginning of the firmware, we see two entries for a Mediatek Bootloader. Mediatek is a Taiwanese chip design company that offers several chips designed exclusively for Blu-ray players, and is very popular with cheaper Android devices and, well, multiple Blu-ray manufacturers.
Next come two certificates in DER format – which is a little unfortunate: it means something is digitally signed. It's not immediately clear what, and while there are ways to work around digital signatures, it is not easy. It is easier on these older systems, which have less advanced hardware roots of trust than, say, a modern iPhone (currently impregnable), but it does show there is some protection against running arbitrary code at system startup.
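(If you're curious about the certificates themselves, you can carve them out at the offsets binwalk reports and let OpenSSL pretty-print them – OFFSET and SIZE below are placeholders for whatever binwalk printed:)

    dd if=firmware.ROM of=cert.der bs=1 skip=OFFSET count=SIZE
    openssl x509 -inform der -in cert.der -noout -text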
Next, we see some CRC32s. These are checksums, likely to verify that certain parts of the image are not corrupt, maybe even by the software updater.
Below that is where things get actually interesting. Combined, we see a Linux 2.6.35 operating system image, two file systems (one for recovery, one for playback?), two encrypted areas using an unknown algorithm (though binwalk could be misreading them), and a PNG image.
The PNG image is, surprise… the boot screen.
Seems a little unnecessarily low-res at 720×480 for a Full HD 1080p Blu-ray player, but whatever.
Now, if we run binwalk again with the -e flag (and have certain other utilities for uncompressing SquashFS installed), it will actually extract what it can out of the firmware into a nice folder structure:
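Concretely, with the placeholder filename again:

    binwalk -e firmware.ROM        # SquashFS extraction needs unsquashfs or sasquatch on the PATH
    ls _firmware.ROM.extracted     # binwalk's default output directory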
squashfs-root-0 is the much-smaller partition that, I believe, is used for only recovery or some factory setup, while squashfs-root is the interesting one.
But there’s more to the story than that. When you run the extraction:
When you look at the logs, there are actually a bunch of symbolic links to an encrypted mount point at /mnt/rootfs_enc_it which, as far as binwalk can tell, doesn't exist – so binwalk replaces them with links to /dev/null to avoid a security risk.
This is very interesting, and it is some of that copy protection I mentioned earlier. Look at the names of the files that were replaced with /dev/null links:
libaacs.so
libbdplus.so
ca-bundle.crt
The first two are obviously the libraries that implement the AACS and BD+ copy-protection schemes. ca-bundle.crt might be for a web component, or it might relate to the device-specific key used for decrypting Blu-rays – which would be a big deal to keep locked down and secret.
These files were symbolic links to a partition that doesn't exist in the dump. Remember, there are two (likely) mcrypt-encrypted file systems in the firmware, so the AACS and BD+ code is probably in one of those encrypted blocks, decrypted on boot and mounted into Linux – securely usable without being visible in a firmware dump.
If we observe those files in Finder, they are indeed links to nowhere:
Now, you might be wondering whether the key to unlock the mcrypt areas containing those decryption files can be found in the firmware download, which would let us read them. I doubt it – but let's say I run a search for that /mnt/rootfs_enc_it path:
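(A recursive grep over the extraction directory is enough to find every file mentioning it:)

    grep -rl "rootfs_enc_it" _firmware.ROM.extracted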
Code referring to rootfs_enc_it occurs in three other files. If we look at them in a hex editor, they generally look like this:
It appears to be a map of the internal flash layout: all three files list the other partitions as well, not just this one partition or any code to mount it in particular.
I suspect this is data for the Mediatek bootloader and boot system, used before the system starts Linux, though I could be wrong about that. It appears (at least, to me) to hold instructions for where to put things when the Linux image starts, and it mentions that there is an encrypted mount point there. Maybe the key is blended into the surrounding hex, but I doubt the designers of this would have been that stupid.
Instead, I suspect that the key for unlocking the mcrypt areas containing the copy-protection and decryption code is itself locked with a device key hidden inside the chip, possibly programmed in during manufacturing. The key most likely lives in the chip’s own silicon, letting it decrypt that firmware area and hand it off to Linux. It’s what I would do if I were building a copy-protection system – I wouldn’t make the key this easy to retrieve.
On the other hand, this does leave a fairly significant weakness: a system running Linux 2.6.35, with networking, and with that likely hardware-decrypted mount point mounted and unlocked at runtime. If one were to find a root vulnerability, it should be very possible to dump those protected files for disassembly.
I’m not going to go that far, at least not in this article. However, I would expect cracking a Linux 2.6.35 system to be fairly easy, considering the wide attack surface and the decade-plus of exploits published since it was released.
Looking at what else is in the dump though, we’ve surprisingly got all our basic utilities:
It’s a little surprising that BusyBox isn’t used, but this isn’t a low-memory system, so maybe separate binaries were easier. However, there is something suspicious about how many of them just so happen to be exactly 585 KB in size – that usually means they’re copies of (or links to) the same binary, or all statically linked the same way.
Running file on one of them:
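(For example – which utility you pick doesn’t much matter, given how many are identical in size:)

# Identify the CPU architecture, endianness, and linkage of a binary:
file _firmware.bin.extracted/squashfs-root/bin/cp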
If I wanted to build a cross-compiler, that’s pretty important information.
Continuing to probe around the firmware:
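(Readable shell scripts are the low-hanging fruit here, so that’s the first thing worth hunting for – again assuming binwalk’s default output folder:)

# Look for plain-text shell scripts in the extracted filesystem:
find _firmware.bin.extracted/squashfs-root -name "*.sh"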
A folder full of unencrypted wifi-related shell scripts. Written in English too. Fun.
This is more telling:
/lib is full of what appear to be the usual Linux libraries and shims. directfb is mentioned at the top (and elsewhere in files not shown) – and since DirectFB exists for drawing graphics without GPU acceleration, this indicates that the chip does not actually have a GPU. Everything is software-rendered, except for the video stream, which is decoded by the embedded H.264 decode block.
This is pretty common – skipping a GPU license makes the chip cheaper and development simpler. It also explains why there are so few animations in the user interfaces of most Blu-ray players, and why the few animations that do exist seem to run at 8 FPS.
Qt is also mentioned, and lower down it’s got a ton of libraries:
Qt and WebKit are a pretty predictable choice for something like this, but it’s cool to see.
In a /res folder (the only non-standard root folder for Linux), there’s what appear to be images, or binaries containing images:
There’s also a folder with some Pulse-Code-Modulated (PCM) audio files – think raw CD-format audio – for something called “fanfare”:
However, raw PCM is a difficult format to play if you don’t know the exact sample rate, bit depth, channel count, byte order, and all those other factors. Trying it in Audacity produced nothing but static, but 100_fanfare.pcm is 2.5 MB in size and likely playable with the right settings.
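(ffplay makes it quick to brute-force those settings – every parameter below is a guess on my part, since nothing in the firmware documents the format:)

# First guess: 16-bit little-endian stereo at 48 kHz, common for disc players:
ffplay -f s16le -ar 48000 -ac 2 100_fanfare.pcm
# Still static? Cycle through the other likely combinations:
ffplay -f s16be -ar 44100 -ac 2 100_fanfare.pcm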
/usr/local/bin is where things get fun:
Meet bdpprog. It’s a massive, 18.6 MB executable for everything. There’s no shell like bash or sh here (at least, not easily accessible at startup) – as far as I can tell, the system just boots into bdpprog and that’s that.
bdpprog is most likely responsible for everything, and it also appears to be derived from an original version written by Mediatek. bdpprog also appears on Samsung, Oppo, Panasonic, and Sony Blu-ray players. It’s what crashed and caused boot-looping when Samsung sent out a malformed XML file to some of their players a while back. As described in that write-up of the Samsung firmware (even though the player I am looking at is an LG device):
“After the crash, the main program, bdpprog, is terminated by the kernel,” said Gray. “Since bdpprog is the main program, its termination results in a reboot by init. Even less fortunately for Samsung, the code for parsing the logging policy XML file is hard-coded to run at every boot. The result is that the player is stuck in a permanent boot loop as has recently been experienced by thousands of users worldwide.”
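(For the curious, the classic embedded-Linux pattern that produces exactly this failure mode looks something like the sketch below – illustrative only, not pulled from this firmware, since I never found this player’s actual init configuration:)

# A typical single-app inittab entry on an embedded device might read:
#   ::respawn:/usr/local/bin/bdpprog
# "respawn" relaunches the program every time it exits; other devices
# (apparently including these players) simply reboot when the main
# process dies. Either way, if the program crashes at the same point on
# every launch – say, while parsing a bad XML file at boot – one crash
# becomes a permanent boot loop.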
Still though, if you are a Blu-ray player manufacturer, Mediatek has it all ready for you: a custom chip, extra security for the copy-protection libraries via the encrypted partition, and a mostly-written Blu-ray player boilerplate you can apparently just tweak with your branding and features and ship.
While this seems ingenious… it is also why a lot of Blu-ray players (not just this one – all three of my Goodwill players as well) are stuck on Linux 2.6.35, and are likely vulnerable to the exact same holes discovered on other brands.
Scrolling down on the window, you can see this interesting bit:
Some code for the Vudu client, and for some reason a script to launch the client. (Why not launch it directly from bdpprog? 🤷🏻♂️) Note the commented-out #LD_PRELOAD=/lib/libSegFault.so. Here, it’s been commented out in the latest software version from 2015, with good reason. In 2014, a security researcher took a look at some Blu-ray players and found that a very similar line in a similar file, called browser.sh, was not commented out, and instead read:
export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so
Note the /mnt/sda1 there, and you’ll realize the stupidity of the mistake. /mnt/sda1 on this system is not the root filesystem – it’s the mount point for external USB flash drives. So: make a fake libSegFault.so, launch “Browser” (which was used for Vudu in earlier versions), and you’d have an easy root exploit. Whoops. Too bad he didn’t dump the decrypted /mnt/rootfs_enc_it, whatever those files said.
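(For anyone unfamiliar with why that one line is game over: LD_PRELOAD makes the dynamic linker load your library into the process before anything else, constructors and all. A minimal sketch of the idea – the cross-compiler name and the payload are both hypothetical, and you’d need a toolchain matching whatever architecture file reported earlier:)

# fake_segfault.c -- a stand-in for libSegFault.so that runs our code
# the moment the dynamic linker loads it:
cat > fake_segfault.c <<'EOF'
#include <stdlib.h>
/* Constructor: executed automatically at library load time, before main().
   The browser script runs as root on these players, so this does too. */
__attribute__((constructor))
static void pwn(void) {
    system("/bin/sh -c 'id > /tmp/proof'");  /* hypothetical harmless payload */
}
EOF
# Hypothetical cross-compiler name -- substitute your player's architecture:
mipsel-linux-gnu-gcc -shared -fPIC -o libSegFault.so fake_segfault.c
# Then copy libSegFault.so to /bbb/ on a USB stick and launch "Browser".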
Not that it would be that hard, depending on how deep into this reverse-engineering I go (and depending on what’s legal, of course). This thing has network access with a stack that’s super old – probably a bunch of bugs there. These players have a less-advanced hardware root of trust, and region-free mod kits already require flashing custom firmware, so there is doubtlessly a way to fool it into doing something stupid. Maybe there’s another USB exploit, or a bug in the media stack.
For now, this looks interesting:
It looks almost like a way to load apps from a USB stick. Another curiosity – I’m not sure what LG’s intentions with this code were:
Another possibly stupid entry point: it connects to a non-HTTPS server for the NetCast App Store…
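(Spotting that took nothing more than searching the extracted filesystem for plain-HTTP endpoints – the path below is illustrative:)

# Pull every plain-HTTP URL out of the extracted tree, deduplicated:
grep -raho "http://[A-Za-z0-9./_-]*" _firmware.bin.extracted/squashfs-root | sort -u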
It appears, at least for now, that the built-in NetCast App Store is the most obvious way in: it seems to allow loading apps from a USB stick, and it downloads apps over what looks like an unencrypted connection without a pinned certificate. All of this without any actual decompilation or reverse-engineering – just plain-English scripts.
That’s how far I’ll go for now. I’m not sure what the legalities are of going deeper beyond, well, just reorganizing information without decrypting or decompiling anything (which is all this is). It’s also probably far more work for little benefit… but who knows.
Recently, I’ve come across an interesting conundrum. I’ve been paying $7.99/mo. for Disney+, we’ve still been renting some other movies, and I’ve been wondering how long this should go on. We rewatch a lot of the same movies, so if I just bought all the movies we watch… the purchases would pay for themselves in like 2-3 years, considering the cheap prices on eBay.
So I began acquiring movies several months ago, and I’ve been buying them on Blu-ray where possible. DVD really looks 1996 on even a moderately sized 4K TV. (Fun fact: we wouldn’t even get flat-screens until almost a decade after DVD was released – DVD was designed for the fat-TV era.) We don’t have a 4K TV yet, but the next TV almost certainly will be one, because that’s what they sell nowadays, and I prefer my movies crispy, not muddy. A Blu-ray is six times sharper than a DVD – 1920×1080 is about 2.07 million pixels versus 720×480’s roughly 0.35 million – and even though 1080p isn’t 4K, it still looks way better on a 4K display than old 480p DVD.
So I’ve been acquiring Blu-rays, but we have some family friends who aren’t up on technology. They still have the 32″ screen in the basement, and they had a bigger screen upstairs until it shut down one day and began releasing toxic smoke (they will be replacing that one soon). They only have DVD players. I introduced them to streaming a few months ago by buying them a Roku stick, and it was a revelation – to that point, all their movie watching was DVDs they bought or DVDs from the library, and that was it.
My younger sister visits this family often, and we love sharing movies between our households. But there’s a problem: all the movies I’ve bought are Blu-rays, and they still have DVD players. So they can share movies with us, but half the movies we have (the ones I bought) can’t be shared with them. The only way out of this conundrum is to get them some Blu-ray players.
Except… that’s not easily solved. This family is on the poorer side – buying a cheap TV takes a month of budget planning. Splurging $100 on two Blu-ray players so we can share movies is really out of the question. And now my sister is “angry” (half irritation, half Gabriel, what did you do?) that she can’t share the movies I bought with her friends. (She’s not actually angry; it’s just an annoying but slightly funny situation.) Though I suppose the annoyance is greater considering I gave her a movie she liked on Blu-ray for Christmas that she can’t share. Shame on me.
With this in mind, I began plotting how to fix this Blu-ray problem. The family would need two players to be comfortable, but admitting I’d spent $100 on Blu-ray players would strike them as an overly generous and strange gift, and that’s no solution anyway when Christmas has passed and it’s February. I needed something cheaper – something so cheap it wouldn’t even scream “gift” as long as I told them how little I spent.
There’s only one place for that: thrift shops. The machines that get donated there are often broken but, well, maybe they could be fixed? If they could, I’d save money and raise my reputation in the family for being able to fix them. (I say this for laughs – I really don’t care about reputation; I wouldn’t slouch if I did.) It’s a win-win if I can find something fixable. I ran down to Goodwill and managed to get these two machines for $12 pre-tax:
Two machines, $12, no guarantees they work, no remotes, and no power cables. Just the bare machines, as-is, take it or leave it. I figured if just one of the machines actually worked, I’d call it a $12 expense and that would be that – more like $20 after buying a universal remote from Walmart to control them. That’s regular gift territory, instead of being awkward even at Christmas.
There’s only one problem: they are old…
I knew they were old at first glance when I saw the colorful analog jacks on the back. Manufacturers stopped including them on Blu-ray players in 2013, partly because the AACS license’s “analog sunset” phased analog video outputs out, and partly because many people didn’t know they should be using HDMI even though the color jacks are much lower in video quality. This meant both players were at least nine years old when I bought them.
I went to work on the Sony one first (the other was an LG). Booting it up brought up a surprisingly PS3/PS4-like UI:
I tried playing a DVD – worked like a charm! Then a Blu-ray – total failure; it couldn’t recognize the disc. So I, well, ripped apart the drive, removed the laser safety cover (don’t look at the laser or go anywhere near it if you ever do this!) and the disc shield, and got the board down to this:
Just looking at this, I knew it was an old player, because this is an extremely overengineered design compared to modern Blu-ray players. That heatsink is almost as wide as my hand, and I have large hands. No modern Blu-ray player looks like this on the inside – it’s just… wow. But I was after something in particular:
This is the read head, and you’ll notice two little eyes, one bigger and one smaller. The bigger one is a combination lens for the infrared laser (CDs) and the red laser (DVDs). The small one is for the blue laser (Blu-rays). I grabbed a Q-tip and a bunch of this stuff that smells funny:
I was working at a desk when I accidentally poured too much onto the Q-tip and spilled a giant splash of isopropyl alcohol (IPA) on my pants. Thankfully it’s not harmful to humans and evaporates quickly (it was gone in about five minutes), but my pants might have been extra flammable until I washed them. After reminding myself to take chemical safety more seriously, I rubbed the Q-tip over both lenses. This removes dust from the lens, which can massively help discs play. (It’s depressing to think how many disc players have been thrown out over the decades when all they needed was a quick de-dusting.)
After that, some discs played… but rarely. I observed the laser on the disc (with eye protection – it’s a laser, not a toy!)…
The laser was constantly refocusing and failing, over and over, to read the discs I put in, though once in a while it would lock on and start playing. I was disappointed because, having worked on Blu-ray players before, the laser looked a bit dim to me. Thirteen years of age (it was built in 2009) had slowly worn the laser down to the point where, even with a perfectly clean lens and disc, it just wasn’t bright enough to read discs consistently. It’s also unclear where this player sets its laser drive voltage, and I don’t want to be the person who overclocks a laser.
As a last-ditch effort, I put the drive in the top-secret Service Mode and dug through the menus. Sadly, nothing in Service Mode was that useful, but this was interesting:
At 560 hours of Blu-ray playback and 5,828 hours of DVD playback (what a champion that red DVD laser is!), this thing is quite ready to retire and is almost certainly not worth salvaging.
I might revisit that player, but in the meantime I turned to the other one, the LG. I opened it up and got this photo:
Despite being just a year newer, the design has been simplified quite a bit. The control board is way smaller… and booting it up shows a simpler, duller LG design:
So, I began testing it. Same issue: DVDs work, Blu-ray is dead. This player is from 2010, so it’s 12 years old (one year newer than the Sony). I removed the laser shield and found this laser inside:
So, I grabbed the IPA, doused the laser lenses, and tried some discs. They (mostly) worked! I called it a night on Saturday. Then Sunday came, and almost none of my discs would work. What gives? I thought.
Well… they worked, but not well. Any disc would eventually play, but only after I forcibly restarted the machine, changed the disc’s position in the drive, and fiddled with it for five to ten minutes of begging.
Then, I noticed something… look closer at the image:
There are extremely, extremely tiny adjustment dials labeled “D,” “C,” and “B” – clearly “D” for DVD (red laser), “C” for CD (infrared laser), and “B” for Blu-ray (blue laser). I couldn’t find any documentation for this player online, but these are most likely trim potentiometers for adjusting laser power to compensate for manufacturing variation. So, if I turned the “B” screw just a little, I could increase the power going to the blue laser. (The Sony player didn’t have any screws like this – I went back and looked.)
I carefully went an eighth-turn to the left. No discs would play, and on careful observation the laser appeared visibly dimmer. I put the screw back, then went an eighth-turn to the right. The laser was brighter and played any disc I threw at it – any. It was happy. I tweaked it a little more, to be just powerful enough to play but no more (I don’t want to burn out the laser quicker), and sealed everything shut again.
All in all, this was a two-day project. The Sony player is almost certainly dead. The LG one, with lens cleaning, de-dusting, lots of tweaking, and a laser power adjustment, now plays Blu-rays and DVDs like a charm despite being 12 years old. And now I have a working Blu-ray player for… $12, since the family in question already has a universal remote. $12 and two days of time. Now I need to go back to Goodwill and find another fixable Blu-ray player…
I’m taking my second semester of classes at Inver Hills, and in my Chemistry class we have this awful piece of software called the “Respondus Lockdown Browser.” Its job is to lock down the computer so you can’t use other programs, prevent copy-paste, and, in theory, prevent cheating.
I understand the motivation – cheating is a scourge upon faculty and honest students. But the methods Respondus uses (specifically the Monitor add-on) are, in my view, unacceptably invasive. Scan my student ID or driver’s license, even though a similar company suffered a breach affecting 440,000 students? Freak out at me if I dare to stretch, move my head, or bury my face in my palms? The ways browsers like this are unfair, discriminatory, and invasive have been well documented in The New York Times, The Verge, and a particularly scathing article from the MIT Technology Review:
Software that monitors students during tests perpetuates inequality and violates their privacy.
– MIT Technology Review.
Even though I took the first exam with Respondus, I got more and more angry about it. In a particular moment of frustration, upon realizing Chem Exam II would use the same browser, I wrote a fairly angry email to my professor. My professor had earlier admitted the browser was “draconian,” but defended it as necessary to prevent cheating. In my view, I did not pay $1000+ for this class to potentially have my student ID stolen, to have my face recordings potentially kept for up to five years and resold to third parties for AI training, or to support privacy-invasive technology in general. In my mind, I cheered for the 1,200 students of the University of Massachusetts who successfully protested to have Lockdown Browser banned.
My professor didn’t receive my email well, but I managed to work out a deal where he would monitor me over Zoom along with some other students. I have yet to take that exam, but that’s much better than this software’s risks and my ethical qualms about using it.
Now that I had a safe, academically honest way to take the test, I wondered how secure Lockdown Browser actually is. I’m a computer programmer by trade (90th percentile on AngelList!), but the Respondus TOS states that I’m not allowed to reverse-engineer, disassemble, modify, blah blah blah, boilerplate EULA. So I wondered: without using any computer programming skill at all, how might Lockdown Browser be defeated?
The answer, in short: it’s bad. In less than 5 minutes, without using Google, I thought of a potential bypass, and in 5 minutes more I had it working. I can’t say they didn’t try, but if a 19-year-old can think up a way to beat your software in 5 minutes without using Google, that’s really, really bad.
That’s Google Chrome and Microsoft Word, open in a fully locked-down browser session (no test loaded, but all of the system lockdown functionality is in effect). I’m not going to explain how I did it (partly because people have been sued for finding and reporting bugs in similar programs), but for the computer programmers out there, this image gives a big hint as to what the flaw is. I’m also not going to explain how I took a screenshot when Respondus blocks screenshots. And since Respondus doesn’t record the screen, this bypass is completely undetectable except by the student’s facial expressions, the glare on their glasses, or the sound of typing.
So, to anyone on my campus (or in the broader world) who thinks that Respondus has problems with student privacy but is at least virtually cheat-proof at the system level: that’s not the case. The software can absolutely be defeated, even without programming skill. What’s frustrating, too, is that when I looked at my bypass, I could think of multiple methods the programmers could have used to block exactly what I did. They just never put together that what I did was possible.
So… if a 19-year-old can defeat a major corporation’s anti-cheat software in 10 minutes with a novel flaw, why are we handing that corporation our student IDs again?
I think this is enough to prove my point. For anyone out there who thinks I cheated or am posting this in bad faith, I can only say that I took the exam completely honestly – and that I’m posting this publicly because it removes the temptation to keep the problem secret and use it on all of my Chemistry exams, as awesome as my grades would be. 😉
Update: I passed that test just fine with my instructor watching over a Zoom call. I also held a meeting with my college about the security problems, which they acknowledged, but they claimed that because Respondus is under a state-wide contract, they couldn’t stop using it, and that it would probably be “secure enough” for most students. I can’t help but wonder, though, how many wealthier students’ parents would be interested in purchasing my methods… am I really the only one who has figured out bypasses?