Thinking out loud about 2nd-gen Email

Note: This is just me, thinking out loud; you absolutely do not need to think that I have carefully thought this through, or that this is a good idea. With expectations set as low as possible, let’s continue.

There are many old pieces of tech still in use, but there’s one that grinds my gears every time I try to use it: Email.

For users, email works pretty well. Sometimes it sends too many messages to Junk, but email is old, reliable, easy to understand, and relatively easy to search. It’s a good system, and I’m not eager to replace it with Slack anytime soon.

However… the backend for email is a mess. In escalating order (and “we” is used in a very imprecise, broad, hand-waving sense for technologists):

My gut reaction to the above is that we’ve got a lousy spec, with decades of cruft and unofficial spec, and we aren’t that great at securing it, or making sure messages are authentic. So… could we do better?

Thus the hypothetical: 2nd-gen email.

Your initial reaction might be: That would be pointless, because not everyone would opt into it, and it would break compatibility all over the place. My thought is… that’s not necessarily a given. Imagine this:

  • We create a new DNS record type, called MX2. Most email services would then have both an MX2 and an MX record; older services would only have MX.
  • If an ancient, 20-year-old email client tries to send a message, it finds the MX record and sends the message just like normal. A modern client sees the MX2 record and sends the message there if it exists; otherwise, it falls back to MX.
  • From there, the email services that implement MX2 would publish a public date after which all messages arriving via the old MX record are automatically sent to Junk. If Microsoft and Google alone agreed on such a date, that would cover 40% of global email traffic.
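The lookup-and-fallback logic in the second bullet is simple enough to sketch. Here’s a minimal version in Python, with the resolver injected as a function since MX2 is (obviously) a hypothetical record type and no real DNS library knows about it:

```python
# Sketch of the MX2-first, MX-fallback lookup described above.
# The resolver is injected so the logic is testable without real DNS.

def pick_delivery_target(domain, resolve):
    """Return (record_type, hosts) for delivering mail to `domain`.

    `resolve(domain, rtype)` returns a list of hosts, or [] if the
    record type is absent.
    """
    hosts = resolve(domain, "MX2")
    if hosts:                               # modern path: MX2 exists
        return ("MX2", hosts)
    return ("MX", resolve(domain, "MX"))    # legacy fallback

# Example with a toy resolver standing in for DNS:
records = {
    ("modern.example", "MX2"): ["mx2.modern.example"],
    ("modern.example", "MX"):  ["mx.modern.example"],
    ("legacy.example", "MX"):  ["mx.legacy.example"],
}
resolve = lambda d, t: records.get((d, t), [])
```

Old clients never query MX2 at all, which is the whole point: nothing breaks for them until the published Junk date.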

If the above looks slightly familiar, it’s because this strategy already worked, in a sense, with the transition from HTTP to HTTPS. We threw away a multi-decade-old protocol for a new, more secure one. We set browsers to automatically upgrade the connection wherever possible, and they now warn users about insecure connections when accessing HTTP (especially on login pages). Nevertheless, users can still visit HTTP pages and ancient browsers still work over HTTP, but most websites have gotten the memo and upgraded to HTTPS anyway.

The incentive to upgrade to MX2 would be simple: your messages would still arrive, but past the publicly posted date they would go to Junk automatically. No business wants that, even if users are already trained to expect that it can happen. Thus the incentive to upgrade, without truly breaking any day-to-day compatibility.

Personally, I think that such a transition could go even faster than the HTTP-to-HTTPS transition. Self-hosted email is unpopular partly because of the complexity of the current email system, so between Microsoft, Google, Amazon, Zoho, GoDaddy, Gandi, Wix, Squarespace, MailChimp, SparkPost, and SendGrid, you have most of the US email market covered; anyone not in the above list would quickly fold. The relative centralization of email, ironically, makes a mass upgrade much more achievable.

What would a 2nd-gen email prioritize, then? Everyone has different priorities, but I’d personally suggest the following, which would hopefully win a broad enough consensus if this idea goes anywhere (though experts – of which I am not one – would have plenty of their own ideas):

  1. A standardized HTML specification for email; complete with a test suite for conformance. Or, maybe we just declare a version of the HTML5 spec to be officially binding and that’s the end of it.

  2. Headers for email chain preferences, or other email-specific preferences (e.g. is this email chain a top-reply chain or a bottom-reply chain? The client shouldn’t need to guess, or worse, ignore it).

  3. If an email has a rich HTML view, it should be required to come with a plain-text, non-HTML copy of the body as well, for accessibility, compatibility, and privacy reasons.

  4. All MX2 records must have a public key embedded in the record. To send an email from the domain:

    – A hash of the email content and all headers is created.
    – This hash is then signed with the private key corresponding to the record’s public key.
    – The resulting signature is added to the email as a header – the only permitted untrusted header, since the signature cannot cover itself.
    – When an email is received, the signature header is verified against the DNS public key, and the rest of the email is checked against the hash for integrity and authenticity.
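A rough sketch of that flow, with SHA-256 for the hash and the asymmetric sign/verify pair left abstract – a real spec would pin something like Ed25519, with the public key coming from the MX2 record, and the header name here is made up:

```python
# Hash canonicalization + sign/verify flow for the scheme above.
# `sign` and `verify` stand in for a real asymmetric signature scheme.
import hashlib

SIG_HEADER = "MX2-Signature"  # hypothetical header name

def canonical_hash(headers, body):
    # Hash every header except the signature header itself, in a
    # fixed (sorted) order, then the body.
    h = hashlib.sha256()
    for name in sorted(headers):
        if name != SIG_HEADER:
            h.update(f"{name}:{headers[name]}\n".encode())
    h.update(body.encode())
    return h.digest()

def sign_message(headers, body, sign):
    # `sign` maps a digest to a signature using the domain's private key.
    signed = dict(headers)
    signed[SIG_HEADER] = sign(canonical_hash(headers, body))
    return signed

def verify_message(headers, body, verify):
    # `verify` checks a (digest, signature) pair with the DNS public key.
    sig = headers.get(SIG_HEADER)
    return sig is not None and verify(canonical_hash(headers, body), sig)
```

The important property: verification needs nothing beyond the message itself and one DNS lookup – no IP reputation involved.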

  5. Point #4 is a lot like DKIM and DMARC right now, except:

    – There would always be an automatic reject policy (p=reject). Currently, only 19.6% of email services that even have DKIM are this stringent.
    – If headers do need to be added to an email, the spec can carefully define carve-outs for where untrusted data can go (e.g. if the spam filter wanted to add a header).
    – There could also be standardized carve-outs for, say, appending untrusted data from the receiving server to a message body (e.g. your business could add a banner to the top or bottom of the body indicating that the message is from an external sender and that you have legal obligations, while your email client clearly shows that this was not part of the original, signed message).
    – As such, the signing would not need to work around email compatibility to such an extent as DKIM, reducing the likelihood of critical flaws.

  6. By simplifying the stack to the above – eliminating SPF, DKIM, and DMARC (and their respective configuration options) and standardizing on one record (MX2) for the future – running your own self-hosted email stack would become much easier. The stronger authenticity verification would also hopefully allow spam filters to be significantly less aggressive by authenticating against domains instead of IPs.

  7. Point #6 is the biggest change – we’re no longer authenticating, or caring about, the IP address that’s sending the email. Every email can and would always be verified against the domain using MX2 records and the public keys in them. Send a fake spam email? It doesn’t have a signature, so it gets tossed without any heuristics. Send a real spam email? Block that domain when there are complaints. Go after the registrar (or treat domains belonging to that registrar as suspicious) if needed. This would mostly eliminate the need for IP reputation by replacing it with domain reputation – which, at least to me, is a far superior standard with more understandable and controllable outcomes.(1)

  8. Clients which implement MX2 can, optionally, have an updated encryption scheme to replace OpenPGP. Something like Apple’s Contact Key Verification. Hopefully there would be forward secrecy this time.

If you’ve got great counterarguments, let me hear them.

(1) This would, perhaps, be the one and only “new feature” we could advertise to users. Not getting emails? Just type in the name of the website, and you’ll always receive its emails.

Edit 1, for clarification: For bulk senders, there would be multiple MX2 records on the domain, each containing a public key for every authorized sender. One of those records would have a marker indicating it as suitable for incoming mail.

Edit 2: This article has had a very large discussion on Hacker News. While the discussion winds down, I have some additional thoughts from there:

  • If there is ever an MX2, a sane way to share large files (hundreds of megabytes, or even gigabytes) would be great. Designing the protocol wouldn’t be easy, especially given spam concerns, but if I had a nickel for every link shared just to dodge email size limits… this is a real-world problem.
  • MX2 will literally never happen if Google and Microsoft don’t join in. They would also, of course, have considerable control over the outcome. However, if even open-source communities and developers adopted MX2 because it was easy to implement and open source… you never know what grassroots can do.
  • Part of me wonders what would happen if MX2 threw out SMTP for HTTP with a standardized REST API and JSON bodies. Sure, it would add a mountain of HTTP overhead and be more complex. However, it would sure as heck make implementing MX2 in a project easy in most programming languages, as it would just be a web server running on a custom port answering endpoints. REST APIs are also, despite their complexity, a well-documented system, including for preventing abuse (it’s not like Stripe or S3 lets people spam their APIs with garbage). I don’t know enough about SMTP to know if that’s a good idea – but I do know that SMTP is sub-optimal enough that Microsoft and Google don’t use it when exchanging messages with each other.
  • There has been interesting commentary about being a pull protocol instead of a push protocol (i.e. instead of sending a message from X to Y; X sends Y a tiny standardized note saying to pick up a message from X). The most popular proposal of this was DJB’s Internet Mail 2000.
  • The idea of plain-text alternatives to HTML is probably impossible to enforce.
  • As some commenters have pointed out, if the public key is always on the DNS, and every MX2 implementation is required to have that public key, a sending-server-to-receiving-server email encryption becomes possible.
  • The idea of using HTML, at all, is controversial. Email was originally never designed for HTML, and the security risks of processing it are quite large. Using a superset of Markdown with style directives, or a customized XML schema, or even a new simple markup language all-together (Modern Mail Markup Language – M3L, claiming it now) might be an interesting thought experiment.
  • A consistent point that came up was standards drift – people don’t always implement the spec right, and mistakes are made. My answer is that a new standard is a chance to rigidly enforce the rules from the beginning. For example, the spec could say that any incoming message that isn’t signed – something the sender could immediately and easily verify – earns a 1-hour IP ban for laziness. Just an example.
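To make the HTTP/JSON bullet above concrete, here’s a purely hypothetical message shape – none of these field names are a real standard; they just show how trivially a web framework could validate incoming mail:

```python
# Hypothetical JSON body for an "SMTP replaced by HTTP + JSON" world.
# Field names are invented for illustration, not from any spec.
import json

REQUIRED = {"from", "to", "subject", "body_text", "signature"}

def validate_message(raw):
    """Parse a JSON message body and check required fields."""
    msg = json.loads(raw)
    missing = REQUIRED - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return msg

example = json.dumps({
    "from": "alice@modern.example",
    "to": ["bob@legacy.example"],
    "subject": "Hello",
    "body_text": "Plain-text body (mandatory per point 3).",
    "body_html": "<p>Optional rich view.</p>",
    "signature": "base64-signature-here",
})
```

Rejecting malformed messages becomes a one-line schema check instead of parsing a 40-year-old wire format.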

“Parental controls? What parental controls?”

Congress has recently proposed multiple bills to regulate the internet (such as the recent “Kids Online Safety Act”), in the name of protecting children. This has caused a simple response: “Just have parents be parents and set up parental controls if they care.”

That’s easier said than done. Parental controls right now completely suck: they’re incomplete, full of loopholes, extremely buggy, overly complicated, poorly designed, privacy-invading, or some combination of the above. As someone who has used almost every ecosystem, here’s an eye-opener.

Also here’s my rallying cry: If you don’t want Congress regulating the internet to “protect the Kids,” demand companies fix their parental controls.

Windows 11

Microsoft has Family Safety as their tool for parental controls on Windows. Here’s my hate list.

  1. Every child must have a Microsoft Account to use it. Even the 8-year-olds. Local accounts are still supported by Windows 11 after setting up the main account, but you can’t use parental controls with them.
  2. You can’t disable Windows Copilot, or Edge Copilot. Don’t want your kid using AI from Bing or ChatGPT when they are just learning to write? Too bad. Don’t want them using AI for any host of ethical or personal issues? Too bad.
  3. You can’t disable the Microsoft News feed of tabloid content on new Microsoft Edge tabs without disabling Bing (rendering you unable to offer your child restricted “SafeSearch” browsing of any kind).
  4. You can’t disable the Microsoft Search button, which shows… surprise, more tabloid content. Even if you have your child’s account set to “allowed websites only.” They’ll still see headlines of the day about how the world is on fire.
  5. You can’t disable the Widgets button. The entire purpose of widgets is to serve tabloid content. And once again, “allowed websites only” doesn’t disable that or do basically anything to it.
  6. Windows 11 comes preloaded with Movies+TV, Xbox, Microsoft News, and other apps. Unless you as a parent know about these apps, open these apps individually so they appear in the dashboard, and then block these apps, they’re allowed. What?
  7. You can’t disable the Microsoft Store. You can set age restrictions, so blocking all apps rated above age 3 is the best you’ll be able to do.
  8. Microsoft Office lets you embed content from Bing, and does not respect system parental controls… at all. Even if you block Bing, your 8-year-old can still watch videos to their heart’s content by embedding them in a Word document, because the internet filters only work in Microsoft Edge.

So let’s say you want to allow your 8-year-old to do schoolwork on a Windows PC. What’s the point of parental controls when you can’t disable Widgets, can’t disable the tabloid content in the Search box even if you block Bing, the Bing filter can be bypassed using Office, and your student can use AI all they want? These are “parental controls”?

MacOS / iOS

  1. Quite powerful, stupidly buggy – so much so that Apple even admitted as much to a reporter a few months ago. There is, in my experience, no part of macOS more buggy than Screen Time. How so?
  2. Screen Time appears to run as a daemon (background process) on the child’s account, and that daemon has a propensity to randomly crash after running for a few hours. When it does, all locks are disabled: Safari will just let you open Private windows without filtering of any kind, regardless of your internet filtering settings. This state persists until the child logs out and back in. Naturally, what’s the point of parental controls that randomly fail open?
  3. The “allowed websites only” option shows Apple has never used this feature, ever. On your first login you will be swarmed with prompts to approve random IP addresses and Apple domains, because system services can no longer communicate with the mother ship. They nag you constantly with no option to disable them, so your first experience of enabling this is entering your PIN code a dozen or more times to approve all sorts of random junk just to get to a usable state.
  4. The “allow this website” button on the “website blocked” page randomly doesn’t work – possibly because the codebase is so old (and likely untested for so long) that the page isn’t even HTML5. Meaning it’s probably been, what, a decade and a half since anyone really looked at it last?
  5. You can’t disable any in-box system apps. You don’t want your kid reading through the Apple Books store? You don’t want your kid seeing suggestive imagery and nudity in the Apple Music app? You don’t want your kid listening to random Podcasts from anyone in the Apple Podcasts app? You can’t do anything about it but set a time limit for 1 minute. Of course, that time limit randomly doesn’t work either.
  6. The “allowed websites” list in macOS has a comical, elementary bug showing how badly tested the code is. Open the preference pane and it shows a list of allowed websites (say, A, B, and C). Add a website D and close the pane. Open it again – only A, B, and C are in the list, D nowhere to be seen! D was actually added to the system; it’s just not shown. Now add E (so the visible list is A, B, C, and E): D gets removed, and opening the preference pane once again shows just A, B, and C.

Nintendo Switch

I generally like Nintendo, but the Nintendo Switch Parental Controls are inexcusable.

  1. Every parental control is per-device. What about families that have, say, multiple children and can’t afford (or don’t want the risk of) multiple Switches? Seems reasonable, but no – every Switch is effectively personal to one owner if you use Parental Controls.
  2. Let’s say you then go, fine, and buy multiple Switches. There’s no ability to set a PIN lock, so theoretically the kids could just… swipe each other’s Switch?
  3. You can’t hide titles on the menu. Let’s say you have two kids on the same Switch. One plays M-rated titles, the other plays E-rated titles. The kid who plays E-rated titles will see all the M-rated titles on the Home Screen, and nothing can be done about it. They can even launch them and play them.
  4. You can’t disable the eShop from within the Parental Controls app. You can dig through the eShop settings to find the option to require a password before signing in, but that requires the kid to not know their own Nintendo password. If your kid uses, say, their actual email address on their Nintendo account, locking down the eShop is impossible – and they’ll see every game for sale regardless of how appropriate or inappropriate it is (Hentai Girls, Waifu Uncovered, anyone? Games you can “play with one hand”? Actual titles on the eShop).

Router based filtering

One of the best solutions. Unfortunately:

  1. Every child must have their own device (again, poor families need not apply, making this a very financially exclusionary solution).
  2. Often, technically overwhelming for parents. What IP address is that kid’s laptop again?
  3. Premium routers like Eero, which have very easy to use router-based parental controls, often demand subscriptions to use them. In the case of Eero, $9.99/mo. after you already paid ~$200 for the devices. That’s not a viable solution – I can’t tell a poor family to pay $200+ for routers and a $9.99/mo. subscription as their solution.
  4. Some routers have parental controls that make me wonder what idiot thought this would work. Case in point – a Netgear router from a few years ago that advertised “block websites” and “parental controls” right on the box. Cool – but they worked by having a parent individually enter, domain by domain, the websites to block. Considering the internet’s scale, that’s criminally useless, and if I had a lawyer, I would’ve sued for false advertising.
  5. Remember the Nintendo eShop? Try using router-based parental controls to block eShop access without blocking software updates or online play. Good luck with that. Router-based blocking has the least nuance of any of the above solutions.

But sure. It’s the parent’s job to set up parental controls. My response, once again: if you don’t want Congress regulating the internet, parents need better tools than these.

Safari bug leaks cursor position between overlapping windows

Recently, I built a navigation sidebar with Tailwind that changes appearance based on a CSS hover, and I found a strange bug: under the right conditions, Safari leaks the cursor position during a click to completely different windows underneath the window being clicked on.

Unfortunately, I can’t reproduce the above bug on demand. It’s happened multiple times, but I can’t make it happen whenever I want to, which is frustrating and something I’m still looking into.

What I can reproduce with greater consistency is putting the mouse cursor into a position on the second window and then typing into WordPress. This is an example, recorded after completely quitting and relaunching Safari following the first video:

The only other thing of note, for anyone investigating, is that it also seems to happen when just moving the mouse cursor around, but while randomly pressing either Command or Option.

If anyone has any ideas for what is happening here or how to more consistently reproduce it, let me know. And also, hopefully, Apple sees this and makes sure that there isn’t anything else getting leaked across different windows.

Sorry Purism, I’m not investing. It’s (possibly) not even legal. (updated once)

Update 1: Purism states in a March blog post on their website that a single production run covers the entirety of late 2019 through mid-2021. So it is actually possible to accomplish, and it makes some of the production timing mentioned below look much more innocent – even if calling the Librem 5 “in stock” within two months, in a pitch to investors, seems a stretch. If Purism was willing to keep the “52 week lead time” visible to customers out of uncertainty, investors shouldn’t receive better-looking alternative statements. (This also does not address whether the original 2019 or 2021 sales were more questionable than the current 2023 sale, given what the proceeds would be used for.) It also does not address that the Accredited Investor requirement is not a “good faith” requirement as far as I can find, so this could still be an illegal securities sale.

Original article below

I just got this email (and an email like it) for the second time in just over a week from Purism:

Purism Supporter [sic],

5% bonus on any investment into Purism [sic], helping advance our social purpose mission.

We are contacting you either because you have directly asked us about investing in Purism, are on our newsletter, or a customer whom we thought would be interested in hearing about our investment opportunity. If you are not interested and don’t want any more emails from us, please let us know and we will quickly remove you from this private mailing list.

For the next two months you can earn an additional 5% immediate bonus on any investment.

Products in stock with less than 10 day shipping time:

  • Librem 14 laptop
  • Librem 5 USA phone (with Made in USA Electronics)
  • Librem AweSIM cellular service
  • Librem Key security token
  • Librem Mini mini desktop computer
  • Librem Server a 1U rackable server

Products shipping through backorders and in stock in July, 2023:

  • Librem 5 phone

Products planned to arrive within the year:

  • Librem 16 laptop
  • Librem 11 tablet

With this investment opportunity we are accepting increments starting at $1000 and allow for easy cart checkout to invest. We invite you to get more information on this investment round including the immediate 5% bonus. Find out how to invest, where we will use the funds, and our current progress in this round at our private investment page at


Todd Weaver  
CEO and Founder  
Purism, SPC

OK… first off. Going anecdotally from what I’ve heard through the grapevine, claiming that the Librem 5 will be in stock by July seems extremely ambitious, if not impossible. Anecdotes on Hacker News describe orders from 2019 arriving just weeks ago. Heck, the Purism website right now lists a 52-week lead time. So why does the email to potential investors say it will be in stock in just 8 weeks, when their website says 52 weeks? It can’t be confusion with the USA model either – that has a backlog of just 10 days according to the website.

So, putting that potentially illegally misleading statement to potential investors aside, look at this next bit from their investment page:

Has there been previous investment?

Purism has grown mostly from revenue, however, Purism announced closing $2.5m in notes in December 2019. Purism has raised over $10m in total all under convertible note terms.

That’s… incomplete. Purism also did this in 2021, which they disclose in the actual legal document and earlier on the web page if your eyes are open, but not in the FAQ. And people were (anecdotally) complaining then about not having received four-year-old orders. Why is this such a big deal? Look at what they are going to do with your investment:

What are the funds used for?

As stated above, we will use the investment funds for parts procurement in preparation for large production run of stock, as well as continuing development of all our freedom respecting revolutionary software stack, and for more convergent applications in PureOS for the Librem 5 phone.

Excuse me… if the anecdotes are true, does this not almost look like some form of Ponzi scheme? Purism raised $2.1 million from Librem 5 orders. Then they sold this form of “stock” to get more cash in 2019, and 2021, and now 2023. They are openly saying that the cash raised will go to ordering parts for a large production run – which will complete orders from 2019, as this community shipping date estimation thread on their own forum shows:

Now, I can’t go on anything more than a hunch. But my hunch is that Purism is using investor funds to subsidize orders, and selling “convertible notes” to do the job. Is that illegal? I’m not a lawyer, and the arrangement is at least discoverable if you really go digging, so it’s probably legal. But is it shady? Or at least unsustainable? Plus, if I’m an investor… how does it feel knowing your cash is most likely just digging them out of a money pit, not growing the company? Is that not just a tiny bit misleading for a morally superior “Social Purpose Company”?

But then there’s one more problem. That email I got. Once again in the FAQ:

Am I an Accredited Investor?

For US Citizens, this is a good faith requirement, since there is no way for Purism to validate your accredited investor status, by investing you are stating you are an accredited investor that is defined as meeting any one of the following: earned income that exceeded $200,000 (or $300,000 together with a spouse or spousal equivalent) in each of the prior two years, and reasonably expects the same for the current year; OR has a net worth over $1 million, either alone or together with a spouse or spousal equivalent (excluding the value of the person’s primary residence); OR holds in good standing a Series 7, 65 or 82 license; OR any trust, with total assets in excess of $5 million, not formed specifically to purchase the subject securities, whose purchase is directed by a sophisticated person; OR certain entity with total investments in excess of $5 million, not formed to specifically purchase the subject securities; OR any entity in which all of the equity owners are accredited investors.

Purism, there’s no way for me to invest legally, even if I wanted to. So why are you emailing me soliciting investment? Why don’t your emails clearly say “US citizens who make under $200K yearly cannot invest”? How does that even square with the SEC rules on solicitation, as this helpful Reddit thread points out?

It’s my understanding that if Purism offers/advertises the investment in a public manner (a “general solicitation” … and I think that this counts as a general solicitation), they must satisfy Rule 506c or Rule 506b and must take reasonable steps to verify the “accredited investor” status:

Some requirements of Rule 506c: 

all purchasers in the offering are accredited investors

the issuer takes reasonable steps to verify purchasers’ accredited investor status and

While Rule 506b doesn’t require everyone to be an accredited investor, they can only have up to 35 investors (in a calendar year), but Purism “must reasonably believe” the non-accredited investors have “such knowledge and experience in financial and business matters that he is capable of evaluating the merits and risks of the prospective investment”

(Here is the text for Rule 506: ).

My guess, though, is that they might be trying to fall under Rule 504 … where the disclosure and verification rules are more lax. However, you can’t fall under Rule 504 if the offer is public … and I think that having the offer on a publicly accessible website is a violation. That said, I’m not 100% sure about what counts as a “general solicitation”. See: .

So, let’s say I could get past all of that. Let’s say I could get past all these red flags and questions, and the fact that the solicitation might be illegal, and the lack of actual checking who the buyers are may also be illegal. The elephant in the room:

What is my note worth?

The note is worth the amount you invested, it is debt owed to you. It also earns 3% annually, and upon conversion will earn an additional 8%, at which point you will be a shareholder of Purism, SPC.

There are far better ways to make 3% interest. My bank account with Discover gets 3.75%. I could do an 18-month CD with them and get 4.75%. I will get 8% when my note converts… into stock for the company, if I’m understanding it correctly; so my $1,000 investment in Purism becomes $1,080 of stock at an unclear valuation. However, if all investments just go to filling a backlog, how much is the company actually worth? Is my $1,080 of stock going to be valued based on how much other people invested, resulting in an (arguably) very inflated valuation?
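Just to spell that comparison out, using the $1,000 example above (the CD figure assumes annual compounding at 4.75% APY over 18 months, which is a simplification of how Discover actually compounds):

```python
# Back-of-the-envelope: Purism note conversion vs. an 18-month CD.
# The 8% is the note's stated conversion bonus; the CD rate is the
# 4.75% quoted above.
principal = 1_000

# Convertible note: converts into stock with an 8% bonus, at an
# unclear valuation.
note_stock_value = principal * 1.08          # ~$1,080 "of stock"

# 18-month CD at 4.75% APY, compounded annually:
cd_value = principal * (1 + 0.0475) ** 1.5   # roughly $1,072 in cash
```

The note edges out the CD on paper – but only if that $1,080 of stock is actually worth its face value, which is the whole question.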

Consider this thought experiment (not being an economist or lawyer myself, I’m just doing my best to understand). A company named X is worth, normally, $1 million. X sells $5 million worth of product for $2 million by mistake. X then sells $3 million worth of stock to cover the gap, and convinces everyone that the company is now worth $4 million because of the prior $1 million valuation plus the $3 million of stock sold – even though basically nothing about the company’s actual value changes once the orders are finished. In a free market, that would quickly be discovered, the valuation would tank back toward $1 million, and the equity value would be shredded. Which might be why Purism really doesn’t want you selling your notes:

Can I sell my note?

Not easily. The best way to look at convertible notes is to consider them long term investment in the future growth of a social purpose company you desire to see grow and reap the future benefits from its success. It is possible to transfer (e.g. sell) the note to other parties but that would be done separately and independently by you, notifying Purism of the legal transfer of ownership.

Now, is all of this combined illegal? I don’t know. But icky? Definitely feels like it. Directly approaching people to sell “convertible notes,” with a misleading statement about your ludicrous backlog, no notice in the solicitation that most US individuals cannot buy them, and open admission that you don’t check whether your buyers are accredited despite what the SEC rules appear to require (almost begging people to ignore the notice if they read it) – all of it looks as sketchy as heck to me.

And so, while this is not financial advice, and I know that saying I’m not a financial advisor carries very little legal weight, I would advise anyone investing in Purism to view it as the equivalent of a Moody’s C or an S&P D rating. View it as a donation, not an investment.

The “Location Off” switch on your phone is a lie.

If you’re going somewhere anonymously, or attending a politically unpopular protest, or visiting a sensitive client, you might want to turn Location Services to Off in your smartphone’s settings. Great – now you can go and do whatever it is without worrying.

Well, that would be the case in an ideal world, but that switch is more of a polite “please don’t” than an actual deterrent. There are many other ways of getting your location, some of which you may not have considered, but I’m going to focus on the biggest one that I regularly see even privacy-focused people overlook. This will be nothing new for privacy experts, but… it’s your carrier.

Think about it. To join their network, you are literally logging in with your carrier account, which is (most likely) tied to your identity and has your payment method attached. Maybe you were clever and bought a prepaid SIM with cash, but hold that thought. Consider what happens next: as your phone communicates with the network, your phone and the cell tower quickly learn how long a message takes to travel between them – say, a few microseconds. Because that time scales consistently with distance, it doesn’t take much math to establish a radius for how far away you are. Add in two or three weaker towers in the area, the ones your phone considers when looking for a better signal, and the carrier has a pretty good idea of where you are.
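The distance math really is that simple. A back-of-the-envelope version, ignoring the processing delays in the radio hardware that real systems have to calibrate out:

```python
# Round-trip radio time -> distance to the tower.
C = 299_792_458  # speed of light in m/s; radio waves travel at ~c

def tower_distance_m(round_trip_s):
    # Half the round trip is the one-way flight time.
    return C * round_trip_s / 2

# A 20-microsecond round trip puts the phone ~3 km from the tower.
distance = tower_distance_m(20e-6)
```

With two or three towers doing the same measurement, the radii intersect at roughly one point – classic trilateration.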

Which is also why buying prepaid with cash is overrated. All the carrier has to do is look at where you are between 9PM and 5AM on most days, and they’ll have a pretty good idea of where you live. What’s the point of paying with cash if they can easily find your home address?
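The overnight inference is trivial to implement. Here’s a sketch with made-up data (the tower IDs and timestamps are hypothetical): keep only observations between 9PM and 5AM, and the most common serving tower is almost certainly home.

```python
from collections import Counter
from datetime import datetime

def likely_home_tower(observations):
    """Return the tower most often serving the phone between 9PM and 5AM."""
    night = [tower for ts, tower in observations if ts.hour >= 21 or ts.hour < 5]
    return Counter(night).most_common(1)[0][0]

# Hypothetical carrier logs: (timestamp, serving tower).
logs = [
    (datetime(2023, 5, 1, 23, 30), "tower_A"),  # night
    (datetime(2023, 5, 2, 2, 15),  "tower_A"),  # night
    (datetime(2023, 5, 2, 13, 0),  "tower_B"),  # daytime: ignored
    (datetime(2023, 5, 2, 23, 45), "tower_A"),  # night
]

print(likely_home_tower(logs))  # → tower_A
```

A few weeks of logs plus the tower-to-address mapping the carrier already has, and the cash payment bought you very little.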

This is just one of the ways a carrier can find your location, and there’s nothing you can do about it. If you’re thinking that installing GrapheneOS and only using the stock apps makes you immune… no, it doesn’t. Every line of code on your phone could be handwritten by you, but the moment it talks to a cell tower, there’s no privacy.

If you want to learn more ways you can be identified, look into IMSI catchers. Also consider that your phone regularly talks to cell towers – even towers from other carriers, and even without a SIM card installed – to provide E911 support. No phone in the US needs a cellular plan to call 911, but that means even a SIM-free phone is still talking to towers.

Better to leave the phone at home. Or, at least in a Faraday cage you can remove it from if you are desperate.

Nintendo Switch modding is illegal in the US, full stop.

Note: I am not a lawyer – but hear me out.

Thanks to the recent squabble between Pointcrow and Nintendo, almost everyone has heard of the “DMCA takedown.” The DMCA is a huge (and arguably unconstitutional and 70% stupid) law with a ton of sections, Section 512 being the one that deals with takedowns.

However, there’s another section of the DMCA many people don’t know about: Section 1201. It deals with what it calls “Technological Protection Measures” – basically a 90s term for what we would now call Digital Rights Management (DRM), but applied a little more widely. The Library of Congress summarizes the section:

The Digital Millennium Copyright Act (“DMCA”), codified in part in 17 U.S.C. § 1201, makes it unlawful to circumvent technological measures used to prevent unauthorized access to copyrighted works, including copyrighted books, movies, video games, and computer software. Section 1201, however, also directs the Librarian of Congress, upon the recommendation of the Register of Copyrights following a rulemaking proceeding, to determine whether the prohibition on circumvention is having, or is likely to have an adverse effect on users’ ability to make noninfringing uses of particular classes of copyrighted works. Upon such a determination, the Librarian may adopt limited temporary exemptions waiving the general prohibition against circumvention for such users for the ensuing three-year period.

So, there you have it, in short. Breaking any digital lock / TPM / DRM, without an exemption having been created during the triennial rulemaking, is illegal – even for fair-use cases like repairing a tractor or jailbreaking your smartphone. DMCA Section 1201 takes precedence over any “Fair Use” claim. This point cannot be overstated: even if everything you do is otherwise legal, and even protected by law as Fair Use, if you cross DMCA Section 1201, it’s illegal.

You might ask – wait a minute, jailbreaking my iPhone is illegal? It actually used to be, but an exemption was created for jailbreaking smartphones and tablets. However, guess what doesn’t have an exemption yet: video game consoles. Well, they do have one – you may break the digital locks, but only to replace a broken disc drive, and only if you put the locks back afterwards.

So, believe it or not, modding your Nintendo Switch in any capacity is actually illegal in the United States under DMCA Section 1201. There is historical precedent for Section 1201 enforcement as well, making this more than a theoretical issue. RealNetworks lost a lawsuit over Section 1201 violations when they made DVD-ripping software, and Psystar went bankrupt partly from violating Section 1201 by bypassing Apple’s lockout to make macOS run on unapproved hardware. Guess which law (among others) Gary Bowser was convicted of violating when he was sent to prison for 40 months for selling Nintendo Switch modchips. He now owes Nintendo about $14.5 million, in part for violating this law, and will have his wages garnished by about 30% until the debt is paid in full (which, almost certainly, will never happen).

This leaves Pointcrow and game modders like him in a legal quandary before even getting into copyright issues, or whether Nintendo’s Terms of Use (which say no reverse engineering) are enforceable. How did he obtain a copy of the game to start modding? There are only two ways:

  • Piracy (Copyright infringement – illegal)
  • Jailbreaking his Switch to get a game dump (DMCA Section 1201 – illegal)

So, before even asking whether he violated Nintendo’s copyrights or the Terms of Service that came with the game, he could have committed a crime carrying up to five years in prison and $500,000 in criminal penalties. This is also why anyone saying “but he was clearly Fair Use, Nintendo is just using illegal DMCA takedown notices!” doesn’t know what they’re talking about – this exact situation is what DMCA takedowns were originally designed for!

You might also be thinking, “but wait a minute, what about where it all started, with NES and SNES modding?” Curiously enough, that’s not a Section 1201 violation, because the NES and SNES don’t have encrypted ROMs or qualifying TPMs / DRM. This is also why you can legally rip a CD with your computer (it has no encryption), but in most cases cannot legally rip a DVD, despite its encryption algorithm being breakable with just 7 lines of Perl.

Welcome to the United States, land of “freedom.”

Perhaps something was rotten in Skylake

Here’s yet another theory that could partially explain why Windows 11 doesn’t support anything below Intel 8th Gen: something is really borked in Skylake (Intel 6th Gen), and operating system vendors are eager to get away from it – and, where possible, from its 7th Gen refresh too.


There are good alternative motives for all of those events (Windows 11 wanting HVCI and MBEC, macOS trying to phase out Intel, Microsoft trying to heavily push Windows 10)… but when you add them all up, and factor in an ex-Intel engineer saying Skylake’s “abnormally bad” QA pushed Apple over the edge, it begins to look like Skylake is something everyone wants to drop as soon as possible – or, at a minimum, claim they are not responsible for supporting.

Now, what is this bug? We know there’s a major hyperthreading bug, but the real culprit is most likely whatever requires the most invasive fixes… or maybe the issue is just the sheer abundance of tiny little paper cuts (Apple allegedly found more Skylake bugs than Intel themselves did). Microsoft commentators have backed this up, pointing to Skylake as a potential source of the particularly buggy-at-first Surface Book and Surface Pro 4 (aka “Surfacegate”).


Tech’s over-reliance on the internet is a preventable national security issue

What would happen if the internet suffered a prolonged and serious outage – reason irrelevant (cyberattack, zero-days, P = NP with a simple and fast algorithm, solar superstorms, a major vendor compromise, AWS KMS shredded by attack or mistake, a total BGP meltdown, take your pick) – while we still had electricity, gas, mail, a mostly functioning government, and basically everything we had in the ~80s, in most areas?

Well, besides the obvious awful consequences on basically everything in every industry, I can sure think of some extremely low-cost, easily preventable technical consequences which would make rebuilding unnecessarily difficult:

  • How many people would have maps?
  • How many people would have survival information?
  • We had PCs before we had the internet. What happens when you can’t set up a PC without the internet?
  • Many platforms don’t support offline updates. What happens when you have a Switch game card for your desperate kids, but don’t have the update for the Switch?
  • How would education continue, if so many books and resources that have gone digital no longer exist – and the physical material that remains is now in great danger of theft?

Now… I will admit, the likelihood of such a scenario is not very high. But that makes it all the more remarkable that we have successfully digitized so much knowledge, that we now have the capacity to distribute that knowledge widely and make ourselves more resilient to outages – and that we don’t.

Imagine my following proposals (very early, not set in stone, probably full of loopholes and other issues – they are just sketches, hopefully somewhat common-sense ones):

  • Every internet-connected device should be capable of being set up, and updated, without an internet connection, from stored offline files.
  • Devices should be capable of exporting their own newer firmware to an offline image, to update other devices on older firmware offline. If my PlayStation is on v37, and my friend is on v32, and my game requires v34, I should be able to help my friend update to v37 and play, especially because we’re going to need it during those difficult times.
  • App developers on closed ecosystems, such as the Apple App Store, should have the option to allow their apps to be installed offline. Apple can still certify the app to their standards, but if I’m the developer of an open-source application, I should have the option to let my users export my app to a signed file, stash it on a flash drive somewhere, and install it on random people’s iPhones in case of emergency. (I’m not making a point against the App Store here – the application would still have been signed by Apple at some point, and it could be double-checked if internet is available.)
  • Right now, people can self-certify up to $300 of charitable giving to the IRS without receipts. Why can’t the government grant, say, a $20 tax credit for self-certifying that you are storing a full set of Project Gutenberg? Or a database of emergency survival information with images? Or full copies of OpenStreetMap for your state? Or an offline copy of Wikipedia (~120GB)? If even 10 million people claimed all four credits, it would cost up to $800 million – a pittance by government budget standards (and next to our $700+ billion national defense budget), but having that knowledge so widely distributed could make a ludicrous and disproportionate difference in outcomes. Even if people widely cheated and 100 million claimed the credits, is $8 billion with some fraud here and there really that big a deal compared to our national defense budget and the benefits provided?
  • Emergency situations are unpredictable – that’s why every phone is legally required to support 911, even without a carrier plan. But we have smartphones now, so why aren’t we raising the bar? Would it really kill us to store a database of just written information on how to survive various situations on every phone? Why can’t I ask Siri, without an internet connection, how to do CPR? It would probably take 10MB at most… and save many lives.
  • Many films and TV shows are becoming streaming-exclusive, and as many fans are finding out, this is very dangerous for archival purposes. Just ask fans of “Final Space,” who had the series completely erased from all platforms – even for those who purchased it – for accounting reasons. I wonder if the relationship between creators and fans should be reconsidered slightly. If you are a major corporation, and you get fans invested in a series, do you perhaps have a moral obligation to provide a copy of your content on physical media for those interested, so as to prevent a widespread loss of culture? (Also because all it takes is a few Amazon data centers blowing up, and a ton of streaming-exclusive movies might no longer exist…) Perhaps this should be called a Cultural Security issue.
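The tax-credit math in the proposal above is easy to check, assuming every participant claims all four $20 credits ($80 per person):

```python
# Worst-case cost of the proposed offline-knowledge tax credits.
CREDIT = 20   # dollars per self-certified data set
DATASETS = 4  # Gutenberg, survival info, OpenStreetMap, Wikipedia

def worst_case_cost(participants: int) -> int:
    """Assume every participant claims all four credits."""
    return participants * CREDIT * DATASETS

print(worst_case_cost(10_000_000))   # → 800000000 ($800 million)
print(worst_case_cost(100_000_000))  # → 8000000000 ($8 billion)
```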


Debloating Windows 10 with one command and no scripts

Recently, I had to set up a Windows 10 computer for one specific application in a semi-embedded use case. Anything else that Windows does or comes with is unnecessary for this. While there are plenty of internet scripts and apps for de-bloating Windows, I have found that the easiest (and little-known) way to debloat Windows without running any internet scripts is as follows:

  1. Open Powershell. (NOTE: Strongly recommend using fresh Windows install, and trying in a VM first to see if this method works for your use-case.)
  2. Type Get-AppxPackage | Remove-AppxPackage. (See note about Windows 11 below – this is for 10 only.)
  3. Ignore any error messages about packages that can’t be removed; that’s fine.

This is my Start Menu, after installing my CAD software:

After running the command, you will have just the Windows folders, Microsoft Edge, and Settings. And that’s literally it – no Microsoft Store, no apps, just Windows and a web browser. Even though the command sounds extreme, almost nothing in Windows actually breaks after you run it (Windows Search, Timeline, and Action Center all work fine). If you want to try it yourself, I’d advise giving it a go in a virtual machine first; it works shockingly well for my use case.

After that, if I want to further de-bloat a PC for an embedded use case, I use Edit Group Policy on Windows 10 Pro. It’s a mess to navigate, but almost everything can be found there. Don’t want Windows Search to use the internet? Want something niche, like disabling Windows Error Reporting? It’s almost certainly there.

Will this work for everyone? No, of course not, but it’s a great one-line, easily memorable tool for cleaning up a PC quickly for an industrial use case without any security risks caused by online scripts.

FAQs from Hacker News discussion:

Q. What about Windows 11?

A. Windows 11 is far, far more dependent on AppX packages than Windows 10, and will most likely grow even more dependent on them in the future; Windows 10, at this point, is unlikely to change in this regard. Running these instructions on Windows 11 is far more likely to leave you in a bag of hurt down the road.

Q. What about .NET Frameworks, VCLibs, and some other important-sounding packages?

A. This will remove them, but despite their important-sounding names, they aren’t as important as you may think. The .NET packages (the AppX ones, not to be confused with the unpackaged “classic” .NET Framework) and VCLibs are, in my experience, primarily for Microsoft Store applications and Desktop Bridge applications (Win32 apps in a Store package), which, if you don’t have the Store, probably won’t affect you. (This may sound optimistic; I say “probably” because I can’t try every application, but if Steam, FreeCAD, and Fusion 360 run without issue, you’ll probably be fine.) Try it in a virtual machine or on an old computer first if this is concerning.

Q. Can I undo this?

A. Yes, with this command in an administrator PowerShell window, according to Microsoft’s documentation: Get-AppxPackage -allusers | foreach {Add-AppxPackage -register "$($_.InstallLocation)\appxmanifest.xml" -DisableDevelopmentMode}. After running this reinstall command, get updates through the Microsoft Store and restart. This worked in my testing, though the Weather app complained about Edge WebView2 being missing (but provided download links). That said, reinstalling Windows is simpler and more reliable; I still recommend starting from a fresh install and trying everything in a VM first. Plan accordingly.

Q. But it might rip out XYZ which I need (e.g. Microsoft Store).

A. I recommend, in that case, using a VM first or an old computer to see if you actually need it.
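If you do need to keep a specific package, a filtered variant of the same command lets you exclude it by name. This is a sketch, not something I’ve exhaustively tested; the "*Store*" wildcard is an illustrative filter that keeps the Microsoft Store:

```powershell
# Remove AppX packages except any whose name matches the exclusion filter.
# "*Store*" keeps the Microsoft Store; adjust the wildcard for other packages.
Get-AppxPackage | Where-Object { $_.Name -notlike "*Store*" } | Remove-AppxPackage
```

Run `Get-AppxPackage | Select Name` first to see the exact package names on your system, and again, try it in a VM before committing.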

Q. Security risks?

A. Most likely not – if anything, the attack surface is lower than if you hadn’t de-bloated. You will lose many libraries used primarily for running Windows Store apps (and the apps themselves), but Windows Update and Windows Defender are not affected by the command in any way I can discern. YMMV, though.

Q. But de-bloating might damage Windows. (Also in this category, “this is stupid and could destroy your PC!”)

A. That’s the risk we all take whenever we attempt to de-bloat Windows in any way Microsoft doesn’t sanction; the risk comes with the territory. But if you are still interested in de-bloating, I think it’s good to have an option that doesn’t require downloads. There may be downloadable options that are better, and any criticism (even valid criticism) of de-bloating would almost certainly apply to those programs and scripts as well, not just to this command. It can’t be worse than businesses that go and use Windows 10 Ameliorated.

Also, consider the use case. Mine is CNC and CAD. CNC software is stuck in the 90s for some machines, and if literally anything goes wrong, you can lose hundreds of dollars of material to a botched cutting job. Is it really so dumb to risk some stability for the greater stability of having less bloat, on a PC that will rarely if ever touch the internet (and that cost me $150, with all data storage on a separate dedicated NAS)? I think it’s a fair trade. The last thing I need is the (normally not removable) Windows Game Bar popping up over Mach3 CNC control software and blocking the Emergency Stop button. Your situation is almost certainly different.

Q. But what about the Chris Titus Tech debloater, or O&O AppBuster?

A. They’re probably great solutions. The main appeal of this one is that it’s memorable, can be used immediately, and requires no downloads. If you are OK with downloading scripts from the internet (which I am, but not everyone is), there are great, more granular options out there. Because they require a download, I don’t see them as directly comparable to this command (different use cases).

Q. But Windows 10 clearly wasn’t made to work this way!

A. Well… there’s always Windows 10 LTSC, which is awfully close to this: very few AppX packages, and no Microsoft Store. It’s only for sale to enterprise customers, though. You could say this command is the closest thing to a “poor man’s” LTSC-ifier for standard Windows 10.

Open Question: How will Apple keep sideloading in Europe?

I saw the news from Bloomberg (a questionable source) about how Apple is finally getting ready to comply with the European Digital Markets Act by allowing sideloading, among other things. However, this quote caught my eye:

If similar laws are passed in additional countries, Apple’s project could lay the groundwork for other regions, according to the people, who asked not to be identified because the work is private. But the company’s changes are designed initially to just go into effect in Europe.

I have one question: How?

This might seem like a dumb question, but consider the following:

  • GDPR applies to European citizens, and companies like Apple are bound by it even if a given citizen is currently physically located in the United States or another country (making it an extraterritorial law). If the DMA is similar in this respect (and I currently cannot find a definitive answer), Apple would be required to allow sideloading outside the European Union when the user is an EU citizen (for example, one who flew to the US for a week). But how do you tell, without ID, whether a user is European? And vice versa: how do you tell that a US user didn’t just fly to Europe for a week?
  • The DMA appears to be retroactive, applying to all iPhones that currently exist as part of the “platform” (i.e., anything currently supported). If so, there are no doubt phones in Europe that were purchased in the US. What happens to them? Say 5% are not what Apple would call European-sold phones. Is updating 95% of phones to comply, rather than 100%, legally kosher? Or could Apple be sued for stepping on people’s rights by not covering everyone?

The first point suggests a geolocation-based block would be ineffective and potentially illegal. The second point would seem to make a serial-number-based (or other point-of-sale-based) check equally illegal and ineffective. iPhones don’t require an Apple ID, and the DMA has no exception for one, so the country on an Apple ID isn’t usable either. It doesn’t seem, to me, like Apple has many options for restricting sideloading fully to Europe without technically knowledgeable users being able to join in on the fun.

Thus my open question: Any thoughts how they’ll do it? Comment below.