Sorry Purism, I’m not investing. It’s (possibly) not even legal. (updated once)

Update 1: Purism states on their website in a March blog post that there is one production run that covers the entirety of late 2019 – mid 2021. So, it is actually possible to accomplish. https://puri.sm/posts/where-is-my-librem-5-part-3/ This also makes some of the production timing mentioned below much more innocent even if calling it “In Stock” within two months in a pitch to investors seems to be a stretch. If Purism was willing to keep the “52 week lead time” visible to customers out of uncertainty, investors shouldn’t receive much better-looking alternative statements. (This also does not address whether the original 2019, or 2021 sales, were more questionable than the current 2023 sale for what they would be used for.) However, it does not address that the requirement of Accredited Investor status is not a “good faith” requirement as far as I can find, and this still could be an illegal securities sale.

Original article below

I just got this email (and an email like it) for the second time in just over a week from Purism:

Purism Supporter [sic],

5% bonus on any investment into Purism [sic], helping advance our social purpose mission.

We are contacting you either because you have directly asked us about investing in Purism, are on our newsletter, or a customer whom we thought would be interested in hearing about our investment opportunity. If you are not interested and don’t want any more emails from us, please let us know and we will quickly remove you from this private mailing list.

For the next two months you can earn an additional 5% immediate bonus on any investment.

Products in stock with less than 10 day shipping time:

  • Librem 14 laptop
  • Librem 5 USA phone (with Made in USA Electronics)
  • Librem AweSIM cellular service
  • Librem Key security token
  • Librem Mini mini desktop computer
  • Librem Server a 1U rackable server


Products shipping through backorders and in stock in July, 2023:

  • Librem 5 phone


Products planned to arrive within the year:

  • Librem 16 laptop
  • Librem 11 tablet


With this investment opportunity we are accepting increments starting at $1000 and allow for easy cart checkout to invest. We invite you to get more information on this investment round including the immediate 5% bonus. Find out how to invest, where we will use the funds, and our current progress in this round at our private investment page at https://puri.sm/ir/convertible-note/.

Sincerely,

Todd Weaver  
CEO and Founder  
Purism, SPC

OK… first off. Going by what I’ve heard through the grapevine, claiming that the Librem 5 will be in stock by July seems extremely ambitious, if not impossible. Commenters on Hacker News report receiving orders placed in 2019 just weeks ago. Heck, the Purism website right now lists it as having a 52-week lead time. So why does the email to their investors say it will be in stock in just 8 weeks, when their own website says 52 weeks? It can’t be confusion with the USA model either; that one has a backlog, according to the website, of just 10 days.

So, putting that potentially illegally misleading statement to potential investors aside, look at this next bit from their investment page:

Has there been previous investment?

Purism has grown mostly from revenue, however, Purism announced closing $2.5m in notes in December 2019. Purism has raised over $10m in total all under convertible note terms.

That’s… incomplete. Purism also did this in 2021, which they disclose in the actual legal document, and earlier on the web page if you keep your eyes open, but not in the FAQ. And people were (anecdotally) complaining then about still not having received years-old orders. Why is this such a big deal? Look at what they are going to do with your investment:

What are the funds used for?

As stated above, we will use the investment funds for parts procurement in preparation for large production run of stock, as well as continuing development of all our freedom respecting revolutionary software stack, and for more convergent applications in PureOS for the Librem 5 phone.

Excuse me… if the anecdotes are true, doesn’t this almost look like some form of Ponzi scheme? Purism raised $2.1 million from Librem 5 orders. Then they sold this form of “stock” to get more cash in 2019, and 2021, and now 2023. They are openly saying right now that the cash raised will go to ordering parts for a large production run, which will complete orders from 2019, as a community shipping-date estimation thread on their own forum shows.

Now, I can’t go on anything more than a hunch. But my hunch is that Purism is using investor funds to subsidize orders, and selling “convertible notes” to do the job. Is that illegal? I am not a lawyer, and the terms are at least discoverable if you really go digging, so it probably is legal. But is it shady? Or at least unsustainable? Plus, if I were an investor… how would it feel, knowing your cash is most likely just going to dig them out of a money pit rather than actually grow the company? Isn’t that just a tiny bit misleading for a morally superior “Social Purpose Company”?

But then there’s one more problem. That email I got. Once again in the FAQ:

Am I an Accredited Investor?

For US Citizens, this is a good faith requirement, since there is no way for Purism to validate your accredited investor status, by investing you are stating you are an accredited investor that is defined as meeting any one of the following: earned income that exceeded $200,000 (or $300,000 together with a spouse or spousal equivalent) in each of the prior two years, and reasonably expects the same for the current year; OR has a net worth over $1 million, either alone or together with a spouse or spousal equivalent (excluding the value of the person’s primary residence); OR holds in good standing a Series 7, 65 or 82 license; OR any trust, with total assets in excess of $5 million, not formed specifically to purchase the subject securities, whose purchase is directed by a sophisticated person; OR certain entity with total investments in excess of $5 million, not formed to specifically purchase the subject securities; OR any entity in which all of the equity owners are accredited investors.

Purism, there’s no way for me to invest legally, even if I wanted to. So why are you emailing me soliciting investment? Why don’t your emails clearly say “US Citizens who make under $200K yearly cannot invest”? How does that even square with the SEC rules on solicitation, as this helpful Reddit thread points out?

It’s my understanding that if Purism offers/advertises the investment in a public manner (a “general solicitation” … and I think that this counts as a general solicitation https://puri.sm/ir/convertible-note/ ), they must satisfy Rule 506c or Rule 506b and must take reasonable steps to verify the “accredited investor” status ( https://www.sec.gov/smallbusiness/exemptofferings/rule506c ):

Some requirements of Rule 506c: 

all purchasers in the offering are accredited investors

the issuer takes reasonable steps to verify purchasers’ accredited investor status and

While Rule 506b doesn’t require everyone to be an accredited investor, they can only have up to 35 investors (in a calendar year), but Purism “must reasonably believe” the non-accredited investors have “such knowledge and experience in financial and business matters that he is capable of evaluating the merits and risks of the prospective investment”

(Here is the text for Rule 506:  https://www.law.cornell.edu/cfr/text/17/230.506 ).

My guess, though, is that they might be trying to fall under Rule 504 … where the disclosure and verification rules are more lax. However you can’t fall under Rule 504 if the offer is public … and I think that having the offer on a publicly accessible website is a violation. That said, I’m not 100% sure about what counts as a “general solicitation”. See: https://kkoslawyers.com/what-to-be-aware-of-in-the-friends-and-family-round-of-financing/ . The actual rule is https://www.law.cornell.edu/cfr/text/17/230.504

So, let’s say I could get past all of that. Let’s say I could get past all these red flags and questions, past the fact that the solicitation might be illegal, and past the lack of any actual check on who the buyers are, which may also be illegal. The elephant in the room:

What is my note worth?

The note is worth the amount you invested, it is debt owed to you. It also earns 3% annually, and upon conversion will earn an additional 8%, at which point you will be a shareholder of Purism, SPC.

There are far better ways to make 3% interest. My bank account with Discover gets 3.75%. I could do an 18-month CD with them and get 4.75%. I will get 8% when my note converts… in stock for the company, if I’m understanding it correctly; so my $1,000 investment in Purism would become $1,080 of stock at an unclear valuation. However, if all investments just go to filling a backlog, how much is the company actually worth? Is my $1,080 of stock going to be calculated based on how much other people invested, resulting in an (arguably) very inflated valuation?
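
To put numbers on that, here is a quick sketch of how I read the note’s terms. Two assumptions of mine that the FAQ does not spell out: interest is simple rather than compounding, and the 8% conversion bonus applies to the accrued balance.

```python
# Back-of-envelope value of a $1,000 Purism note, under my reading of
# the FAQ. Assumptions (not stated by Purism): simple annual interest,
# and the 8% conversion bonus applied once to the accrued balance.
PRINCIPAL = 1000.0
ANNUAL_RATE = 0.03       # 3% per year while the note is outstanding
CONVERSION_BONUS = 0.08  # extra 8%, applied at conversion

def note_value_at_conversion(years_held: float) -> float:
    accrued = PRINCIPAL * (1 + ANNUAL_RATE * years_held)
    return accrued * (1 + CONVERSION_BONUS)

# Converting immediately: $1,000 -> $1,080 of stock (the 8% figure above).
print(note_value_at_conversion(0))
# Even a four-year wait only reaches ~$1,210 under these assumptions.
print(note_value_at_conversion(4))
```

Under those assumptions, even a multi-year hold lags well behind the CD rates mentioned above.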

Consider this thought experiment (I’m not an economist or a lawyer; I’m just doing my best to understand). A company named X is normally worth $1 million. X mistakenly sells $5 million worth of product for $2 million. X then sells $3 million worth of stock to cover the gap, and convinces everyone that the company is now worth $4 million: the prior $1 million valuation plus the $3 million of stock sold, even though basically nothing about the company’s actual value changes once the orders are fulfilled. In a free market, that would quickly be discovered, the valuation would fall back much closer to $1 million, and the equity value would be shredded. Which might be why Purism really doesn’t want you selling your notes:
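
The arithmetic of that thought experiment, spelled out (all figures are the hypothetical ones from the example, not Purism’s actual books):

```python
# Company X's books, before and after the raise (hypothetical numbers).
real_value   = 1_000_000  # what X is actually worth
cost_to_ship = 5_000_000  # cost of fulfilling the mispriced orders
revenue      = 2_000_000  # what X actually charged for them

shortfall  = cost_to_ship - revenue  # the $3M hole
stock_sold = 3_000_000               # raised to plug the hole

claimed_valuation = real_value + stock_sold  # the story told to investors
cash_left_over    = stock_sold - shortfall   # what the raise adds to X

print(claimed_valuation)  # 4000000 -- "worth $4M" on paper
print(cash_left_over)     # 0 -- nothing left once the backlog ships
```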

Can I sell my note?

Not easily. The best way to look at convertible notes is to consider them long term investment in the future growth of a social purpose company you desire to see grow and reap the future benefits from its success. It is possible to transfer (e.g. sell) the note to other parties but that would be done separately and independently by you, notifying Purism of the legal transfer of ownership.

Now, is all of this combined illegal? I don’t know. But icky? Definitely feels like it. Directly approaching people to sell “convertible notes” with a misleading statement about your ludicrous backlog, with no notice in the solicitation that most US individuals cannot legally buy them, and with an open admission that they don’t verify their buyers’ accredited status despite what the SEC rules appear to require (almost begging people to ignore the notice if they read it), all looks as sketchy as heck to me.

And so, while this is not financial advice, and I know that saying I am not a financial advisor has very little legal merit, I would advise that anyone investing in Purism view it as the equivalent of a Moody’s C or an S&P D rating. View it as a donation, not an investment.

The “Location Off” switch on your phone is a lie.

If you’re going somewhere anonymously, or attending a politically unpopular protest, or visiting a sensitive client, you might want to turn Location Services to Off in your smartphone’s settings. Great – now you can go and do whatever it is without worrying.

Well, that would be the case if we lived in an ideal world, but that switch is more of a polite “please don’t” than an actual deterrent. There are many other ways of getting your location, some of which you may not have considered, but I’m going to focus on the biggest oversight I regularly see even privacy-focused people ignorant of. This will be nothing new for privacy experts, but… it’s your carrier.

Think about it. To join their network, you are literally logging in with your carrier account, which is (most likely) tied to your identity and has your payment method attached. Maybe you were clever and bought prepaid with cash, but that’s only one step. Consider what happens next: while you are communicating with the network, your phone and the cell tower quickly learn how long a message takes to travel between them. Say, a few microseconds. Because that time scales linearly with distance, it doesn’t take much math to establish a radius for how far away you are. Add in measurements from two or three weaker towers in the area (the ones your phone considers while looking for a better signal), and the carrier has a pretty good idea of where you are.
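
To see how little math this actually takes, here is a toy sketch. All the numbers are hypothetical, and real networks work from quantized “timing advance” values rather than raw nanosecond timestamps, but the principle is the same: round-trip times to three towers pin you down by basic trilateration.

```python
import math

C = 299_792_458.0  # speed of light in m/s (radio signals travel at ~c)

def radius_from_rtt(rtt_seconds: float) -> float:
    """Distance implied by a round trip: the signal goes out and back."""
    return C * rtt_seconds / 2.0

def trilaterate(towers, radii):
    """Intersect three distance circles by linearizing the circle
    equations (subtract the first from the other two, solve the 2x2)."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = radii
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Hypothetical scenario: three towers, phone secretly at (1200, 3400) m.
towers = [(0.0, 0.0), (5000.0, 0.0), (0.0, 5000.0)]
true_pos = (1200.0, 3400.0)
rtts = [2 * math.dist(true_pos, t) / C for t in towers]  # measured RTTs

radii = [radius_from_rtt(rtt) for rtt in rtts]
x, y = trilaterate(towers, radii)
print(round(x, 1), round(y, 1))  # recovers ~(1200.0, 3400.0)
```

Note that the round trips here are a handful of microseconds; the carrier does not need your GPS for any of this.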

Which is also why buying prepaid with cash is overrated. All they have to do is look at where you are between 9PM and 5AM most days, and they’ll have a pretty good idea of where you live. What’s the point of paying with cash if they can easily find your home address?

This is just one way the carrier could find your location, and there’s nothing you can do about it. If you are thinking that downloading GrapheneOS and only using the stock apps makes you immune… no, it doesn’t. You could handwrite every line of code yourself, but the moment your phone talks to a cell tower, there’s no privacy.

If you want to learn more about ways you may be identified, look into IMSI catchers. Also consider that even without a SIM card installed, your phone regularly talks to cell towers, including other carriers’ towers, to provide E911 support. No phone in the US needs a cellular plan to call 911, which means even a SIM-free phone is still talking to towers.

Better to leave the phone at home. Or, at least in a Faraday cage you can remove it from if you are desperate.

Nintendo Switch modding is illegal in the US, full stop.

Note: I am not a lawyer – but hear me out.

Considering the recent squabble between Pointcrow and Nintendo, almost everyone has heard of the “DMCA Takedown.” The DMCA is a huge (and arguably unconstitutional and 70% stupid) law that has a ton of sections, with Section 512 dealing with takedowns.

However, there’s another section in the DMCA many people don’t know: DMCA Section 1201. It deals with what it calls “Technological Protection Measures.” It’s basically a 90s term for what we would now call Digital Rights Management, or DRM, but a little more widely-applied. The Library of Congress summarizes the section:

The Digital Millennium Copyright Act (“DMCA”), codified in part in 17 U.S.C. § 1201, makes it unlawful to circumvent technological measures used to prevent unauthorized access to copyrighted works, including copyrighted books, movies, video games, and computer software. Section 1201, however, also directs the Librarian of Congress, upon the recommendation of the Register of Copyrights following a rulemaking proceeding, to determine whether the prohibition on circumvention is having, or is likely to have an adverse effect on users’ ability to make noninfringing uses of particular classes of copyrighted works. Upon such a determination, the Librarian may adopt limited temporary exemptions waiving the general prohibition against circumvention for such users for the ensuing three-year period.

So, there you have it, in short. Breaking any digital lock / TPM / DRM without an exemption being created during the triennial rulemaking is illegal, even for fair-use purposes like repairing a tractor or jailbreaking your smartphone. DMCA Section 1201 takes precedence over any “Fair Use” claim. This point cannot be overstated: even if everything you do is otherwise legal, and even protected by law as Fair Use, if you cross DMCA Section 1201, it’s illegal.

You might ask: wait a minute, jailbreaking my iPhone is illegal? Well, it actually used to be, but an exemption was created for jailbreaking smartphones and tablets. However, guess what doesn’t have an exemption yet: video game consoles. Well, they do have one exemption: you can break digital locks, but only to replace a broken optical drive, and you must restore the lock afterwards.

So, believe it or not, modding your Nintendo Switch in any capacity is actually illegal in the United States under DMCA Section 1201. There is historical precedent for Section 1201 enforcement as well, making this more than a theoretical issue. RealNetworks lost a lawsuit over Section 1201 violations for making DVD-ripping software, and Psystar went bankrupt partly from violating Section 1201 by bypassing Apple’s lockout to make macOS run on unapproved hardware. Guess which law (among others) Gary Bowser was convicted of violating, earning him a 40-month prison sentence, for selling Nintendo Switch modchips. He now owes Nintendo about $14.5 million in part for violating this law, and roughly 30% of his wages will be garnished until the debt is paid in full (which, almost certainly, will never happen).

This leaves Pointcrow and game modders like him in a legal quandary before even getting into copyright issues, or whether Nintendo’s Terms of Use (which forbid reverse engineering) are enforceable. How did he obtain a copy of the game to start modding? There are only two ways:

  • Piracy (Copyright infringement – illegal)
  • Jailbreaking his Switch to get a game dump (DMCA Section 1201 – illegal)

So, before even talking about whether he’s violated Nintendo’s copyrights, or violated Nintendo’s Terms of Service that came with the game, he could have committed a crime with up to five years in prison and $500,000 in criminal penalties. This is also why anyone saying, “but he was clearly Fair Use, Nintendo is just using illegal DMCA takedown notices!” doesn’t know what they are talking about – this exact thing is what DMCA takedowns were originally designed for!

You might also be thinking right now, “but wait a minute, what about where it all started, with NES and SNES modding?” Well, curiously enough, that’s not a Section 1201 violation, because the NES and SNES don’t have encrypted ROMs or qualifying TPMs / DRM. This is also why you can legally rip a CD with your computer (it has no encryption) but cannot legally rip a DVD in most cases, despite its encryption algorithm being breakable with just 7 lines of Perl.

Welcome to the United States, land of “freedom.”

Perhaps something was rotten in Skylake

Here’s yet another theory that could partially explain why Windows 11 doesn’t support anything below Intel 8th Gen: Something’s really borked in Skylake (Intel 6th Gen), and operating systems are eager to get away from it. And, when possible, the refresh (7th gen).

There are good alternative motives for each of these moves (Windows 11 wanting HVCI and MBEC, macOS trying to phase out Intel, Microsoft trying to heavily push Windows 10)… but when you add them all up, and factor in an ex-Intel engineer saying Skylake’s “abnormally bad” QA pushed Apple over the edge, it begins to look like Skylake is something everyone wants to drop as soon as possible. Or, at a minimum, claim they are not responsible for supporting.

Now, what is this bug? We know there’s a major hyperthreading bug, but the real culprit is most likely whatever requires the most invasive fixes… or maybe it’s the sheer abundance of tiny little paper cuts (Apple allegedly found more Skylake bugs than Intel themselves did) that became the issue. Microsoft commentators have also pointed to this as a potential source of the particularly buggy-at-launch Surface Book and Surface Pro 4 (aka “Surfacegate”).

Thoughts?

Tech’s over-reliance on the internet is a preventable national security issue

What would happen if the internet suffered a prolonged and serious outage, reason irrelevant (cyberattack, zero days, P = NP with a simple and fast algorithm, solar superstorms, major vendor compromise, AWS KMS shredded from attack or mistake, total BGP meltdown, take your pick), but we still had electricity, gas, mail, mostly functioning government, and basically everything we used to have in the ~80s, in most areas?

Well, besides the obvious awful consequences on basically everything in every industry, I can sure think of some extremely low-cost, easily preventable technical consequences which would make rebuilding unnecessarily difficult:

  • How many people would have maps?
  • How many people would have survival information?
  • We had PCs before we had the internet. What happens when you can’t set up a PC without the internet?
  • Many platforms don’t support offline updates. What happens when you have a Switch game card for your desperate kids, but don’t have the update for the Switch?
  • How would education continue, if so many books and resources that went digital no longer exist, and the physical material that remains is in great danger of theft?

Now… I will admit: what is the likelihood of such a scenario? Not very high… but the more amazing part is that we have successfully digitized so much knowledge, and we now have the capacity to distribute it widely and make ourselves more resilient to outages, yet we don’t.

Imagine my following proposals (very early, not set in stone, probably full of loopholes or other issues; they are just sketches, hopefully somewhat common-sense):

  • Every internet-connected device should be capable of being set up, and updated, without an internet connection, from stored offline files.
  • Devices should be capable of exporting their own newer firmware to an offline image, to update other devices on older firmware offline. If my PlayStation is on v37, and my friend is on v32, and my game requires v34, I should be able to help my friend update to v37 and play, especially because we’re going to need it during those difficult times.
  • App developers on closed ecosystems, such as the Apple App Store, should have the option to allow their apps to be installed offline. Apple can still certify the app to their standards, but if I’m the developer of an open-source application, I should have the option to let my users export my app to a signed file, stash it on a flash drive somewhere, and install it on random people’s iPhones in case of emergency. (I’m not making a point against the App Store here – the application would still have been signed by Apple at some point, and it could be double-checked if internet is available.)
  • Right now, people can self-certify up to $300 of charitable giving to the IRS without receipts. Why can’t the government grant, say, a $20 tax credit for self-certifying that you are storing a full set of Project Gutenberg? Or a database of emergency survival information with images? Or full copies of OpenStreetMap for your state? Or an offline copy of Wikipedia (~120GB)? If even 10 million people claimed all four, it would cost up to $800 million, a pittance by government budget standards (and next to our $700+ billion national defense budget), but it could make a disproportionate difference in outcomes to have that knowledge so widely distributed. If people widely cheated and 100 million claimed to be doing it… is even $8 billion, with some fraud here and there, that big a deal compared to our national defense budget and the benefits provided?
  • Emergency situations are unpredictable – that’s why every phone is legally required to support 911, even without a carrier plan. But we have smartphones now, so why aren’t we raising the bar? Would it really kill us to store a database of just written information on how to survive various situations on every phone? Why can’t I ask Siri, without an internet connection, how to do CPR? It would probably take 10MB at most… and save many lives.
  • Many films and TV Shows are becoming streaming-exclusive, and as many fans are finding out, this is very dangerous for archival purposes. Just ask fans of “Final Space,” who had the series completely erased from all platforms, even if they purchased it, for accounting reasons. I wonder if the relationship between creators and fans should be reconsidered slightly. If you are a major corporation, and you get fans invested in a series, do you perhaps have a moral obligation to provide a copy of your content on physical media for those interested, so as to prevent a widespread loss of culture? (Also because, all it takes is a few Amazon data centers to blow up and a ton of streaming-exclusive movies might no longer exist…) Perhaps this should be called a Cultural Security issue.
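
Running the numbers on that tax-credit idea (my assumption: each participant claims all four hypothetical $20 credits, i.e. $80 per person per year):

```python
# Cost of the hypothetical offline-knowledge tax credit.
CREDIT = 20                      # dollars per dataset stored
DATASETS = 4                     # Gutenberg, survival info, OSM, Wikipedia
per_person = CREDIT * DATASETS   # $80/year if you claim all four

honest = 10_000_000 * per_person   # 10M genuine participants
padded = 100_000_000 * per_person  # 100M claims, fraud included

print(f"${honest:,}")  # $800,000,000 -- ~0.1% of a $700B defense budget
print(f"${padded:,}")  # $8,000,000,000 -- still only ~1% of it
```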

Thoughts?

Debloating Windows 10 with one command and no scripts

Recently, I had to set up a Windows 10 computer for one specific application in a semi-embedded use case. Anything else that Windows does or comes with is unnecessary for this. While there are plenty of internet scripts and apps for de-bloating Windows, I have found the easiest (and little known) way to debloat Windows without running any internet scripts is as follows:

  1. Open Powershell. (NOTE: Strongly recommend using fresh Windows install, and trying in a VM first to see if this method works for your use-case.)
  2. Type Get-AppxPackage | Remove-AppxPackage. (See note about Windows 11 below – this is for 10 only.)
  3. Ignore any error messages about packages that can’t be removed; it’s fine.

(Screenshot: my Start Menu after running the command and installing my CAD software.)

After running the command, you will have just the Windows folders, Microsoft Edge, and Settings. And that’s literally it: no Microsoft Store, no apps, just Windows and a web browser. Also, even though the command sounds extreme, almost nothing in Windows actually breaks after you run it (Windows Search, Timeline, and Action Center all work fine). If you want to try it yourself, I’d advise starting in a virtual machine; it works shockingly well for my use case.

After that, if I want to further de-bloat a PC for an embedded use case, I use Edit Group Policy on Windows 10 Pro. It’s a mess to navigate, but almost everything can be found there. Don’t want Windows Search to use the internet? Or something niche, like disabling Windows Error reporting? It’s almost certainly there.

Will this work for everyone? No, of course not, but it’s a great one-line, easily memorable tool for cleaning up a PC quickly for an industrial use case without any security risks caused by online scripts.

FAQs from Hacker News discussion:

Q. What about Windows 11?

A. Windows 11 is far, far more dependent on AppX than Windows 10 and will continue to be even more dependent on it in the future, most likely. Windows 10, at this point, is unlikely to change in this regard. Running these instructions on Windows 11 is far more likely to leave you in a bag of hurt down the road than Windows 10.

Q. What about .NET Frameworks, VCLibs, and some other important-sounding packages?

A. This will remove them, but despite their important-sounding names, they aren’t as important as you may think. The .NET packages (in Appx, not to be confused with the unpackaged “classic” .NET Frameworks) and VCLibs in my experience are primarily for Microsoft Store applications and Desktop Converter Bridge applications (Win32 in Store package), which if you don’t have the Store, probably won’t affect you. (This may sound optimistic, I say probably because I can’t try every application, but if Steam, FreeCAD, and Fusion 360 can run without issue, you’ll probably not have issues.) Try in a Virtual Machine or old computer first if this is concerning.

Q. Can I undo this?

A. Yes, according to Microsoft’s documentation, with this command in an Administrator PowerShell window: Get-AppxPackage -allusers | foreach {Add-AppxPackage -register "$($_.InstallLocation)\appxmanifest.xml" -DisableDevelopmentMode}. That said, reinstalling Windows is simpler and more reliable, so plan accordingly. I still recommend using a VM first, and only on a fresh install. After running the reinstall command, get updates through the Microsoft Store and restart. In my testing this works, though the Weather app complained about Edge WebView2 being missing (it provided download links).

Q. But it might rip out XYZ which I need (e.g. Microsoft Store).

A. I recommend, in that case, using a VM first or an old computer to see if you actually need it.

Q. Security risks?

A. Most likely not, and actually, likely less than if you didn’t de-bloat (lower attack surface). You will lose many libraries used for primarily running Windows Store apps (and the apps themselves), but Windows Update and Windows Defender are not affected by the command in any way I can discern. YMMV though.

Q. But de-bloating might damage Windows. (Also in this category, “this is stupid and could destroy your PC!”)

A. It’s the risk we all take whenever attempting to de-bloat Windows in any way Microsoft doesn’t sanction (the risk comes with the territory). But if you are still interested in de-bloating, I think that it’s good to have an option that doesn’t need downloads. There might be downloadable options that are better. Any criticism (even valid) about de-bloating would almost certainly apply to other programs and scripts and not just mine. It can’t be worse than businesses who go and use Windows 10 Ameliorated.

Also, use case should be considered. Consider mine: CNC and CAD. CNC Software is stuck in the 90s for some machines, and if literally anything goes wrong, you could actually lose hundreds of dollars of material from a botched cutting job. Is it really so dumb to risk some stability, for the greater stability of having less bloat, from a PC that will rarely if ever touch the internet (and cost me $150, and has all the data storage on a separate dedicated NAS)? I think it’s a fair trade. The last thing I need is the (normally not removable) Windows Game Bar popping up over Mach3 CNC Control Software and blocking the Emergency Stop button. Your situation is almost certainly different.

Q. But what about the Chris Titus Tech debloater, or O&O AppBuster?

A. They’re probably great solutions. The main appeal of this one is that it is memorable, can be used immediately, and requires no downloads. If you are OK with downloading scripts from the internet (which, I am, but not everyone is), there are great, more granular options out there. Because of the requirement of a download, I don’t see them as being comparable to this command (different use cases).

Q. But Windows 10 clearly wasn’t made to work this way!

A. Well… there’s always Windows 10 LTSC. Which is awfully close to this, having very few AppX packages, and no Microsoft Store. It’s only for sale to Enterprise users though. You could say this is the closest thing to a “poor man’s” LTSC-ifier for standard Windows 10.

Open Question: How will Apple keep sideloading in Europe?

I saw the news by Bloomberg (a questionable source) about how Apple was getting ready to comply with the European Digital Markets Act, at last, by allowing sideloading among other things. However, this quote caught my eye:

If similar laws are passed in additional countries, Apple’s project could lay the groundwork for other regions, according to the people, who asked not to be identified because the work is private. But the company’s changes are designed initially to just go into effect in Europe.

I have one question: How?

This might seem like a dumb question, but consider the following:

  • GDPR applies to European citizens. Companies like Apple are bound by GDPR even if said citizen is currently physically located in the United States or another country (making it a extraterritorial law). If the DMA is similar in this way (which I currently cannot find a certain answer for), Apple would be required to allow sideloading outside the European Union if the user is an EU Citizen (for example, if they flew to the US for a week). But how do you tell, without ID, if a user is European? Vice versa, how do you tell that a US user didn’t just fly to Europe for a week?
  • The DMA appears to be a retroactive law, applying to all iPhones that currently exist as part of the “platform” (i.e. anything currently supported). If so, there are no doubt phones in Europe that were purchased in the US. What happens to them? Let’s say 5% are not what Apple would call European-sold phones. Is updating 95% of phones to comply, rather than 100%, legally kosher? Or could Apple be sued for stepping on people’s rights by not covering everyone?

The first point suggests a geolocation-based block would be ineffective and potentially illegal. The second point would seem to make a serial-number-based (or other point-of-sale-based) check equally illegal and ineffective. iPhones don’t require an Apple ID, and the DMA doesn’t have exceptions for one, so the country on an Apple ID would not be usable either. It doesn’t seem, to me, like Apple has many options for fully restricting sideloading to Europe without technically-knowledgeable users being able to join in on the fun.

Thus my open question: any thoughts on how they’ll do it? Comment below.

Nobody agrees what “Right to Repair” actually means

Right to Repair: almost everyone supports it, and it would make our devices more repairable. But look closely, and the definition of what Right to Repair actually is and entails changes constantly depending on who you talk to.

Note: This table oversimplifies each person’s definition of R2R and omits nuance for the sake of making a point. I apologize for any errors. It is also possible I’m just splitting hairs, though I think some real differences do appear near the end of the article.

The table compared each advocate on five criteria: OEM parts for sale; schematics and diagrams publicly available; board-level parts; repairable design requirements*; and aftermarket or 3rd-party parts OK.

MKBHD
  • Schematics/diagrams: Not addressed in definition of R2R.
  • Board-level parts: Considered Rossmann’s opinion below a “good take.”
  • Design requirements: Considered Rossmann’s opinion below a “good take.”
  • Aftermarket parts: Not addressed in definition of R2R.

Louis Rossmann
  • Schematics/diagrams: Schematics and diagrams for board-level repair should be publicly available.
  • Board-level parts: “Don’t tell the company that made this part they can’t sell it.” [paraphrased]
  • Design requirements: “I don’t want right to repair to push my personal preference for design on consumers.”
  • Aftermarket parts: If it’s cheaper and the customer chooses it, it should be OK.

iFixit
  • Schematics/diagrams: “repair information like software tools and schematics should be available”
  • Board-level parts: Not addressed. Primary focus is OEM parts, as evidenced by the iPhone 14 obtaining a 7/10 repairability score.
  • Design requirements: Repair score penalizes companies that make difficult-to-repair items. “companies block repair in all kinds of other sneaky ways. Sometimes they glue in batteries with industrial-strength adhesives… etc.”
  • Aftermarket parts: “legalize modifying your own property to suit your own purposes.” (Also tests and resells batteries.) Obviously against counterfeiting, but also explicitly against locking out 3rd-party parts.

Hugh Jeffreys
  • Schematics/diagrams: Not addressed.
  • Board-level parts: Not addressed.
  • Design requirements: Against digital locks (like the others); talks about physical repairability requirements as being an R2R issue.
  • Aftermarket parts: “has become a big market for scammers.” Not addressed in R2R definition.

Linus Tech Tips
  • Schematics/diagrams: Calls for parts and “components,” never mentions schematics. Seems to almost explicitly say this is not Right to Repair, as one should not be able to replicate a patented product.
  • Board-level parts: “Right to access manufacturer components and resources to repair their devices when required.”
  • Design requirements: Defined in one video as being “Beyond Right to Repair.”
  • Aftermarket parts: “So it sure is a good thing that no-one is calling for that either!” [False?]

(“Not addressed” means the issue was not mentioned in the definition they give; they may not be opposed, they just didn’t mention it as an R2R issue.)

Note that the table above only considers how they defined Right-to-Repair, even though all of them would almost certainly support the following in other capacities even if it wasn’t in their R2R definition:

  • “OEM Parts Publicly for Sale” means if the manufacturer sells batteries, screens, sensors – those should be available for sale to anyone.
  • “Schematics Publicly Available” means that the manufacturer should provide all circuit information about how a device is assembled.
  • Board-level parts refers to the sale of proprietary chips and other unique parts, down to the level they are indivisible. (I.e. a specific power management chip, not a whole logic board.)
  • Design Requirements refers to a manufacturer being forced to make easier-to-repair products, not just products that have parts available. Some call this Right to Repairable Design. (Neither view considers digital locks acceptable, so “not addressed” here does not mean someone is ignoring digital locks as a design requirement.)
  • Aftermarket Parts OK refers to whether Right-to-Repair should explicitly prevent companies from blocking aftermarket batteries, screens, and similar parts, or whether such blocking of aftermarket or 3rd-party parts is tolerable.

Now, the above table lacks nuance and is not perfect, I know, and I’m probably going to get corrections (that’s fine; check back later for some, I’m not perfect at this). However, if I can boil the schisms down, they appear to be these:

  • R2R activists aren’t sure whether manufacturers should simply be required to sell the OEM parts they already have, or also be required to provide all proprietary individual components.
  • R2R activists don’t agree on whether companies like Apple should be prevented from blocking 3rd-party batteries and similar parts, though they are unanimously against preventing the swapping or installing of OEM parts.
  • R2R activists don’t agree on whether (outside of digital locks) a manufacturer should be forced to make certain design decisions that make repair easier.

Until we can unanimously define what Right to Repair actually entails, success is going to be hampered by confusion. I would argue, personally, for the following:

  • Right to Repair should be the ability to obtain OEM parts and manuals, and to not have digital locks preventing repair without the manufacturer’s consent.
  • Right to Repairable Design should be restrictions on part serialization (for preventing counterfeiting), use of Phillips or other common screws, restrictions on excessive adhesive, etc.
  • Right to Advanced Repair should be the right to obtain schematics, proprietary information, and individual components for repairs the OEM’s own repair program would not attempt (i.e. most OEMs don’t do board-level repair). Basically, Right to Repair beyond what the OEM would do.

Splitting these issues up helps clarify exactly what each term means, and we should arguably fight for them all regardless, but without overlap. But that’s just my opinion on how to make things clearer.

My unlawyered opinion on why AI will legally survive in the US

In the past few months, there has been a surge of AI projects that can generate images and text:

  • Stable Diffusion
  • ChatGPT
  • GPT-3 and earlier
  • DALL-E 2 and previous
  • Midjourney
  • GitHub Copilot

These AI programs are amazing, but they were also trained on publicly available material, and the owners of that material almost certainly did not opt in to having it used for AI training. Users have occasionally managed to get these services to repeat things very close to copyrighted material. This is almost certainly going to come up in future legal cases.

Putting the current US concept of fair use aside, I think that at this point, AI companies have a vested interest in doing everything they can to get these algorithms entrenched as an industry, because that may actually ensure their legal survival.

Consider a broader view of the US and technology:

  • VCRs upset movie studios tremendously, but were declared legal even though some people would abuse them to copy tapes. Time-shifting was declared officially legal by this decision, whereas before it had been legally grey, much like AI is now. There’s another side to the story, though: according to the New York Times, approximately 1.2 million VCRs were sold in 1983 alone, and the decision came down in January 1984. (Basically) outlaw the industry? Nah.
  • Photoshop came out and allowed for the manipulation of images in ways that were unprecedented. Users could also abuse Photoshop to make very… interesting… images of celebrities. Nonetheless, Adobe was never held liable for anything its users did with Photoshop.
  • CD Drives allowed copying CDs which did not have DRM, and made it easy to share the ripped discs online. This did not ultimately make CD drives, CD ripping, Online File Sharing, BitTorrent, The Internet, or any of the technologies involved illegal despite all of them being abused for copyright infringement. It also didn’t legalize internet censorship of DNS and packets to prevent copyright infringement despite the MPAA’s lawsuits and failed laws (SOPA/PIPA).

If there is a pattern here, I would summarize it as this:

US courts do not enjoy clamping down on any new technology, even if that technology can be, and is being, used in copyright-infringing ways.

Now, one could argue that none of these really have much to do with AI, or with AI’s propensity to sometimes regurgitate the material it was trained on. I think, however, that this is a “hindsight is 20/20” moment. It’s obvious now, but it wasn’t obvious then. Had CD ripping been declared illegal, had VCR recording been ruled infringement, or had SOPA/PIPA been enforced, our precedent for new technologies and copyright infringement would be very, very different.

Thus, in a weird way, it seems to my unlawyered mind that the more AI entrenches itself (becoming accepted, widespread, and diverse in function), the stronger the legal case will become. If it were just GitHub Copilot, it might be banned. But will courts be interested in hurting Copilot, Midjourney, DALL-E, GPT-3, and the rest all at once? If previous technology/copyright conflicts are anything to go by, I think they would punt the question to Congress before they would dare change the status quo or declare that it isn’t “fair use.”

Remote attestation is coming back. How much freedom will it take?

Remote attestation has been around for decades. Richard Stallman railed about the freedom it would take in 2005, a senator presented a bill asking for the required chips to become mandatory, and Microsoft prepared Palladium to improve “security” and bring remote attestation (among other things) to the masses. Then it all fell apart: Palladium was canceled, the senator retired, and TPM chips have been in our PCs for years while generally being considered benign.

For those who do not know what remote attestation is:

  • Remote attestation lets an external system validate, cryptographically, certain properties about a device.
  • For example, proving to a remote system that Secure Boot is enabled on your Windows PC, with no ability to forge that proof. And by extension, potentially loading a kernel driver that can prove certain installed applications have not been tampered with.
  • TPM chips invented in ~2004 were widely feared because they enabled this capability, but until now they have been primarily used only in corporate networks and for BitLocker hard drive encryption.
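
The handshake those bullets describe can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration: a real TPM signs its “quote” with an asymmetric attestation key whose certificate chains back to the manufacturer, while this sketch substitutes an HMAC with a shared key so it stays standard-library-only.

```python
# Hypothetical sketch of a remote-attestation round trip (simplified).
# Real TPMs sign quotes with an asymmetric Attestation Key; here an HMAC
# over a shared key stands in for that signature to stay stdlib-only.
import hashlib
import hmac
import json
import os

DEVICE_KEY = os.urandom(32)  # stand-in for the TPM's attestation key


def device_quote(nonce: bytes, pcrs: dict) -> dict:
    """Device side: report measured state, bound to the verifier's nonce."""
    payload = json.dumps({"nonce": nonce.hex(), "pcrs": pcrs}, sort_keys=True)
    sig = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def server_verify(quote: dict, nonce: bytes, required_pcrs: dict) -> bool:
    """Verifier side: check signature, freshness (nonce), and policy."""
    expected = hmac.new(DEVICE_KEY, quote["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(quote["sig"], expected):
        return False  # quote was forged or tampered with
    claims = json.loads(quote["payload"])
    return claims["nonce"] == nonce.hex() and claims["pcrs"] == required_pcrs


nonce = os.urandom(16)
quote = device_quote(nonce, {"secure_boot": "enabled"})
print(server_verify(quote, nonce, {"secure_boot": "enabled"}))  # True
```

The important properties are visible even in the sketch: the nonce prevents replaying an old quote, and the signature means the reported state (e.g. “Secure Boot enabled”) cannot be forged by software the verifier doesn’t trust.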

When remote attestation was first invented, it was widely feared by Linux users and by Richard Stallman, especially after Secure Boot rolled out. Could a network require that users run up-to-date Windows, with Secure Boot on, and thus completely lock out Linux, or anyone running Windows in a way Microsoft does not intend? With remote attestation, absolutely.

In practice, though, only corporate networks adopted remote attestation as a condition of joining, and only on their business PCs through the TPM chip (no BYOD here). TPMs have a ludicrous number of certificates that need trusting, many in different formats and algorithms (1,681 right now, to be exact), and almost everything that isn’t a PC lacks a TPM. Because of that, building a remote attestation setup that supports a broad variety of devices was, and is, very difficult: easy for a business with a predictable fleet on one platform, almost impossibly complicated for a random assortment of general devices. And so the threat of the TPM, and of remote attestation in general, was dismissed as two-decade-old fearmongering that never became reality.

If only it had stayed that way. Remote attestation is coming back and is, in my opinion, a legitimate threat to user freedom once more, and almost nobody has noticed. Not even on Hacker News or in Linux circles like Phoronix, where many such new technologies and changes are discussed.

Consider in the past few years:

  • Why is Microsoft building their own chip, the Pluton, into new Intel, AMD, and Qualcomm processors? Why does it matter so much to add a unified root of trust to the Windows PC?
  • Why does Windows 11 require a TPM 2.0 module?
  • Why has every PC since 2016 been mandated to have TPM 2.0 installed and enabled?
  • Why do so many apps on Android, from banking apps to McDonalds, now require SafetyNet checks to ensure your device hasn’t been rooted?
  • What’s with some new video games requiring TPM and Secure Boot on Windows 11?

Remember that remote attestation has been possible for decades, but was overly complicated, unsupported on many devices, and just not practical outside of corporate networks. But in the last few years, things have changed.

  • What was once a fraction of PCs with TPMs is now approaching 100% because of the 2016 requirement change and the Windows 11 mandate. In ~5 more years, almost all consumer PCs will have a TPM installed.
  • macOS and iOS already added attestation with the DeviceCheck framework in iOS 11 / macOS 10.15. They don’t use a TPM, instead using the Secure Enclave in the T2 or M-series chips.
  • Google has had SafetyNet around for a while, powered by ARM TrustZone, but is tightening the locks. Rooting your device invalidates SafetyNet, requiring complex workarounds that are gradually disappearing.
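
For example, the server side of a SafetyNet-style check can be sketched like this. The claim names (`nonce`, `basicIntegrity`, `ctsProfileMatch`) come from the SafetyNet Attestation API, but everything else is a hypothetical simplification: a real backend must also verify the JWS signature against a certificate chain ending at attest.android.com.

```python
# Sketch of a backend consuming a SafetyNet-style verdict (simplified).
# Claim names match the SafetyNet Attestation API; signature verification
# (against a chain ending at attest.android.com) is deliberately omitted,
# so this only shows the policy step that locks out rooted devices.
import base64
import json


def decode_jws_payload(jws: str) -> dict:
    """Pull the JSON claims out of a header.payload.signature JWS string."""
    _header, payload, _sig = jws.split(".")
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


def allow_access(jws: str, expected_nonce: str) -> bool:
    claims = decode_jws_payload(jws)
    return bool(claims.get("nonce") == expected_nonce  # bound to this session
                and claims.get("basicIntegrity")       # false on most rooted devices
                and claims.get("ctsProfileMatch"))     # false on unlocked bootloaders


def fake_verdict(claims: dict) -> str:
    """Build an unsigned JWS-shaped string for demonstration purposes."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    return "e30." + payload.rstrip("=") + ".sig"


# A rooted phone's verdict: basicIntegrity is false, so access is denied.
rooted = fake_verdict({"nonce": "abc", "basicIntegrity": False,
                       "ctsProfileMatch": False})
print(allow_access(rooted, "abc"))  # False
```

This is exactly why rooting locks you out of banking apps: the backend never sees your phone directly, only a verdict it can check, and a device with an unlocked bootloader cannot produce a passing one.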

For the first time, remote attestation will no longer be a niche capability, present on some devices and not others. Within a few years, the share of devices supporting remote attestation in some form will quickly approach 100%, allowing it to jump from corporate networks into public ones. Remote attestation is a technology that doesn’t make sense when only 70%, 80%, or 90% of devices have it; only when adoption exceeds 99% does it make sense to deploy, and only then do its effects start to be felt.

We’re already seeing the first signs of remote attestation in our everyday lives.

  • macOS 13 and iOS 16 will use remote attestation to prove that you are a legitimate user, allowing you to bypass Cloudflare CAPTCHAs. How? By using remote attestation to cryptographically prove you are running iOS/macOS, without a jailbreak, on a valid device, with a digitally signed web browser.
  • Some video games are already requiring Secure Boot and TPM on Windows 11. According to public reports, they have not fully locked out users without these features, as they still allow virtualized TPMs, Windows 10, and so forth. However, they absolutely do not have to, and can disable virtualized (untrusted) TPMs and loading without Secure Boot as soon as adoption of Windows 11 and TPM is great enough. Once they shut the door, Windows 11 + Secure Boot + Unaltered Kernel Driver will be the only way to connect to online multiplayer, and it will be about as cryptographically secure against cheating as your PlayStation.
  • Cisco Meraki powers an insane number of corporate networks. Even in my own life, it was my school’s WiFi, my library’s WiFi, the McDonald’s WiFi, even my grandparents’ assisted-living WiFi. Cisco is also a member of the Trusted Computing Group that developed the original TPM and remote attestation to begin with. All they have to do, once adoption is great enough, is update their pre-existing “AnyConnect” app to check TPM/Pluton on Windows, DeviceCheck on iOS/macOS, and SafetyNet on Android/ChromeOS before you join the network. Anyone with an unlocked or rooted device need not apply.
(Image credit: Citrix Endpoint)
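
The Cloudflare example above can be sketched from the origin server’s point of view. The `PrivateToken` challenge/redemption headers follow the Private Access Token flow; `verify_token()` is a hypothetical stand-in for the real check, which verifies an RSA blind signature from the token issuer rather than consulting a lookup table.

```python
# Sketch of an origin gating requests on Private Access Tokens (simplified,
# hypothetical). Header shapes follow the PrivateToken flow; verify_token()
# stands in for the real RSA blind-signature check against the token issuer.
import secrets

PENDING = set()  # challenges handed out and awaiting redemption


def challenge() -> dict:
    """No token presented: respond 401 with a fresh PrivateToken challenge."""
    c = secrets.token_urlsafe(16)
    PENDING.add(c)
    return {"status": 401,
            "WWW-Authenticate": f'PrivateToken challenge="{c}"'}


def verify_token(token: str) -> bool:
    # Stand-in: a real origin verifies the issuer's signature over the challenge.
    c, _, sig = token.partition(":")
    return c in PENDING and sig == "signed"


def handle(headers: dict) -> dict:
    auth = headers.get("Authorization", "")
    if auth.startswith('PrivateToken token="'):
        token = auth.split('"')[1]
        if verify_token(token):
            return {"status": 200}  # attested device: no CAPTCHA shown
    return challenge()              # everyone else gets challenged (or a CAPTCHA)


first = handle({})                             # unattested request: 401 + challenge
c = first["WWW-Authenticate"].split('"')[1]    # device sends this to its attester
second = handle({"Authorization": f'PrivateToken token="{c}:signed"'})
print(second["status"])  # 200
```

The key design point: the origin never learns anything about the device except that some issuer it trusts (in Cloudflare’s case, Apple’s attestation service) vouched for it, which is exactly what makes the scheme both privacy-friendly and exclusionary.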

I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.

What is more concerning is how many freedoms it will take in a more terrifying and less appreciated direction. For example, when I was in college, we had to jump through many, many hoops to connect to school WiFi: WPA2 Enterprise, a special private key, a custom client connection app. It wasn’t fun, and even for me it was almost impossible without the IT desk. If remote attestation had been ready back then, they would absolutely have deployed it. Cloudflare has already shown it is possible for websites to use it to verify the humanity of a user and skip CAPTCHAs on macOS. What happens when Windows gains that ability? Linux users will be left out in the cold completely, as it is simply not practical to digitally approve every Linux distribution and kernel version, distribute a kernel module for them all, and then use that module to verify that the browser, in all of its variations, is signed in the same way, without leaving any holes.

Thus, for Linux users, it will start with having to complete CAPTCHAs that their Windows- and Mac-using friends will not. But will it progress beyond that? Will websites mandate it more? On an extremely paranoid note, will our government or a large corporation require a driver’s license for the internet, with a digital attestation binding a device to your digital ID in an unfalsifiable way? Microsoft is already requiring a Microsoft account for Windows 11, including the Pro version. Will a major cyberattack push deployment of this technology everywhere, locking Linux and rooted/jailbroken/Secure-Boot-disabled devices out of most of the internet? Will you be able to use a de-Googled phone without being swarmed with CAPTCHAs and having countless apps deny access?

This is a major change of philosophy from the copy protection and DRM systems of yesteryear. Old copy protection systems tried to control what your PC could do, and were always defeated. Remote attestation by itself permits your PC to do almost anything you want, but ensures your PC can’t talk to any services requiring attestation if they don’t like what your PC is doing or not doing. This wouldn’t have hurt nearly as much back in 2003 as it does now. What if Disney+ decides you can’t watch movies without Secure Boot on? With remote attestation, they could.

I think I’ll end with a reference to Palladium again, Microsoft’s failed first attempt at a security chip from ~2003, cancelled from backlash. It had an architecture that looked like this:

Now compare that diagram with Microsoft’s own FASR (Firmware Attack Surface Reduction). FASR is a “Secured Core” PC technology that is not mandatory yet and not necessarily part of Pluton, but very likely will be required in the future.

All they did was flip the sides around, use a hypervisor instead of separate hardware abstraction layers, and rename NEXUS to “Secure Kernel.” Otherwise it is almost exactly the same diagram from 2003 that was cancelled amid backlash; they just waited ~20 years to try again and updated the terminology. (Also of note is the word “Trustlet,” plagiarized from ARM TrustZone, which powers Android’s SafetyNet remote attestation system.)

Some things never change.