Imagine how differently the CrowdStrike incident would have played out if users saw the above message on July 19th. It’s also quite possible for Microsoft to do this – Windows already keeps track of which drivers crash, and how often. How do you think they wrote their own post-mortem?
Some security folks might say that a piece of malware might try to engineer a crash to get the driver disabled. I argue that if the driver were well-written and memory-safe, as it should have been from the beginning, that wouldn’t happen. The burden of making sure the driver stays loaded would be on CrowdStrike, not a naive Windows boot process.
Even though IE6 was substantially more painful to support than Firefox, it’s sobering to think that having a 6x greater market share was not enough to save it. The arguments about the lost revenue (from not supporting those users) were just as valid then, and it still did not matter.
This also bodes badly for the future of Firefox: They have no leverage. If they lag behind in web standards or bug compatibility even slightly, an IE6 future is all they can look forward to.
(On an even more pessimistic note, I’m sure the bean counters at every advertising agency are weighing the odds that, if they block Firefox when Manifest v3 rolls out, they will be victorious and kill it altogether, resulting in a web where only handicapped ad blockers are available.)
Today, we had the CrowdStrike incident. Yes, Microsoft can claim it was a third party’s fault. My question is why CrowdStrike needed a kernel-level driver on Windows, but no longer needs or uses such access on Mac and Linux. It is worth seriously asking the question: Did CrowdStrike use a kernel-level driver, to begin with, because Windows is inherently more flawed and less secure?
Microsoft, just hours earlier, took down significant parts of Azure due to a configuration mistake. This caused some initial blame for the CrowdStrike incident to shift to them, under the belief it was an extension of the same problem. I personally received an email from Microsoft at 2AM saying my password was reset by an unknown phone number; but after logging in, found that my password had been “reset” to exactly the same password and there was no sign-in activity.
Microsoft has already been labeled a national security risk by the former White House cyber policy director AJ Grotto, due to repeated incompetence, buggy software, and inconclusive investigations on serious intrusions.
Microsoft has failed to protect Azure from both Russia and China. They have also failed to remediate the issue quickly, failed to report the issue within anything resembling a reasonable timeline, and even failed to discover how the attackers got in to begin with.
We can argue about reform, antitrust, memory safe code, decentralization, the automatic OneDrive backups, the whole shebang for another day. There’s a great discussion to be had there. However, I know one “first step” that I will ask Congress to investigate, and if I had lobbying money, I would demand:
Microsoft has absolutely no right, no grounds, to demand a Microsoft Account to set up Windows.
Microsoft has demonstrated themselves to be quite possibly a greater risk to users than what the Microsoft account supposedly protects them from.
Forcing Microsoft to give up the mandate, because they have not earned the trust required for it (or assured our nation that such trust is well placed, considering it could render computers inoperable out of the box), would be the first step in forcing Microsoft to acknowledge reality beyond a PR statement.
(And for anyone saying there are still ways to use local accounts on Windows: Sure, just like there are ways to run apps that Apple doesn’t approve on the iPhone by jailbreaking it. Saying there are workarounds is not defensible.)
Note: This is just me, thinking out loud; you absolutely do not need to think that I have carefully thought this through, or that this is a good idea. With expectations set as low as possible, let’s continue.
There are many old pieces of tech still in use, but there’s one that grinds my gears every time I try to use it: Email.
For users, email works pretty well. Sometimes it sends too many messages to Junk, but email is old, reliable, easy to understand, and relatively easy to search. It’s a good system, and I’m not eager to replace it with Slack anytime soon.
However… the backend for email is a mess. In escalating order (and “we” is used in a very imprecise, broad hand-waving sense for technologists):
Many things in Email have no spec; even basic things. For example: When you reply, are you replying at the top of the message, or the bottom? It might even be a political question, depending on who you ask. This has been worked around by email clients basically guessing the order and rarely even showing the email’s original text, putting layers between the user and the actual message.
What HTML are you allowed to put in an email? Well… it depends. When there’s no spec, and Microsoft Outlook abuses the Microsoft Word HTML renderer, it gets ugly. There’s no guarantee the receiver even has an HTML renderer, and then it’s even more ugly.
Did I mention all of the above, plus aggressive anti-spam policies, makes self-hosting email insanely difficult?
Last, but not least, there’s the inane juggling of IP reputation. Some IP addresses are “cleaner” than others, especially on shared systems like SendGrid or AWS SES. This makes signing up for a mass-mailing account, for whatever reason, messy, and causes countless surprise instances of legitimate emails going to Junk. Combine that with IPv4 address depletion, and the number of mostly clean addresses is shrinking over time.
My gut reaction to the above is that we’ve got a lousy spec, with decades of cruft and unofficial convention, and we aren’t that great at securing it, or making sure messages are authentic. So… could we do better?
Thus the hypothetical: 2nd-gen email.
Your initial reaction might be: That would be pointless, because not everyone would opt into it, and it would break compatibility all over the place. My thought is… that’s not necessarily a given. Imagine this:
We create a new DNS record, called MX2. Most email services, then, would have an MX2 and MX record. Older services only have MX.
If an ancient, 20-year-old email client tries to send a message, it finds the MX record and sends the message just like normal. A modern client sees the MX2 and sends the message there if it exists; otherwise, it falls back to MX.
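The fallback logic on the sending side could be as small as this sketch. DNS is stubbed out with a dictionary, since MX2 is a hypothetical record type and the domains here are invented for illustration:

```python
# Simulated DNS zone data; a real sender would query actual records
# (e.g. via dnspython) instead of this stub.
FAKE_DNS = {
    "modern.example": {"MX2": "mx2.modern.example", "MX": "mx.modern.example"},
    "legacy.example": {"MX": "mx.legacy.example"},
}

def pick_delivery_target(domain: str) -> tuple[str, str]:
    """Return (protocol, host): prefer MX2, fall back to plain MX."""
    records = FAKE_DNS.get(domain, {})
    if "MX2" in records:
        return ("mx2", records["MX2"])
    if "MX" in records:
        return ("mx", records["MX"])
    raise LookupError(f"no mail records for {domain}")

print(pick_delivery_target("modern.example"))  # ('mx2', 'mx2.modern.example')
print(pick_delivery_target("legacy.example"))  # ('mx', 'mx.legacy.example')
```

An old client that has never heard of MX2 simply never asks for it, which is what keeps the scheme backward compatible.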
From there, the email services which implement MX2 would publish a public date, after which all messages delivered to them via the old MX record will be automatically sent to Junk. If just Microsoft and Google alone agreed on such a date, that would cover 40% of global email traffic.
If the above looks slightly familiar, it’s because this strategy already worked, in a sense, with the transition from HTTP to HTTPS. We threw away a multi-decade-old protocol, for a new and more secure one. We set browsers to automatically upgrade the connection wherever possible, and now warn users about insecure connections when accessing HTTP (especially on login pages). Nevertheless – users can still visit HTTP pages, ancient browsers still work on HTTP, but most websites have gotten the memo and upgraded to HTTPS anyway.
The incentive to upgrade to MX2 would be simple: Your messages, while they would still arrive, would go to Junk automatically past the publicly posted date. No business wants that, even if users are already trained to expect that it can happen and act accordingly. Thus, the incentive to upgrade without truly breaking any day-to-day compatibility.
Personally, I think that such a transition could go even faster than the HTTP to HTTPS transition. Self-hosted email is not very popular, in part because of the complexity of the current email system, so between Microsoft, Google, Amazon, Zoho, GoDaddy, Gandi, Wix, Squarespace, MailChimp, SparkPost, and SendGrid – you have most of the email market covered for the US; anyone not in the above list would quickly fold. The relative centralization of email, ironically, makes a mass upgrade of email much more achievable.
What would a 2nd-gen email prioritize then? Everyone has different priorities, but I’d personally suggest the following which would hopefully win a broad enough consensus if this idea goes anywhere (though experts, of which I am not one, would have plenty of their own ideas):
A standardized HTML specification for email; complete with a test suite for conformance. Or, maybe we just declare a version of the HTML5 spec to be officially binding and that’s the end of it.
Headers for email chain preferences, or other email-specific preferences (i.e. Is this email chain a top-reply chain, or a bottom-reply chain? The client shouldn’t need to guess, or worse, ignore it.)
If an email has a rich, HTML view, it should be required to come with a text-only, non-HTML copy of the body as well, for accessibility, compatibility, and privacy reasons.
All MX2 records must have a public key embedded in the record. To send an email from the domain:
– A hash of the email content, and all headers, is created.
– This hash is then signed with the private key corresponding to the record’s public key.
– This signature is then added to the email as a header – the only permitted untrusted header.
– When an email is received, the signature header is verified with the DNS public key, and the rest of the email is checked against the hash for integrity and authenticity.
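As a toy illustration of that flow: a real implementation would use an asymmetric signature (e.g. Ed25519) verified against the public key in the MX2 record, but this sketch substitutes HMAC with a stand-in key so it runs on the standard library alone. The canonicalization format is also invented for illustration:

```python
import hashlib
import hmac

def canonicalize(headers: dict, body: str) -> bytes:
    # Deterministic serialization of headers + body, so sender and
    # receiver hash exactly the same bytes. (Hypothetical format.)
    lines = [f"{k.lower()}:{v}" for k, v in sorted(headers.items())]
    return ("\n".join(lines) + "\n\n" + body).encode()

def sign(key: bytes, headers: dict, body: str) -> str:
    # Hash the message, then "sign" the hash. HMAC stands in for the
    # asymmetric private-key operation described in the post.
    digest = hashlib.sha256(canonicalize(headers, body)).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify(key: bytes, headers: dict, body: str, signature: str) -> bool:
    return hmac.compare_digest(sign(key, headers, body), signature)

key = b"domain-signing-key"
hdrs = {"From": "a@modern.example", "Subject": "hi"}
sig = sign(key, hdrs, "hello")
print(verify(key, hdrs, "hello", sig))     # True: intact message passes
print(verify(key, hdrs, "tampered", sig))  # False: any change fails
```

Any modification to the body or a signed header changes the hash, so the signature check fails and the receiver can discard the message without heuristics.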
Point #4 is a lot like DKIM and DMARC right now, except:
– There would always be an automatic reject policy (p=reject). Currently, only 19.6% of email services which even have DKIM are this stringent.
– If headers do need to be added to an email, the spec can carefully define carve-outs for where untrusted data can go (i.e. if the spam filter wanted to add a header).
– There also could be standardized carve-outs for, say, appending untrusted data from the receiving server to a message body (i.e. your business could add data to the body’s top or bottom indicating that the message is from an external sender and you have legal obligations, but your email client can also clearly show that this was not part of the original message and is not signed).
– As such, the signing would not need to work around email compatibility to such an extent as DKIM, reducing the likelihood of critical flaws.
By simplifying the stack to the above, eliminating SPF, DKIM, and DMARC (and their respective configuration options), and standardizing on one record (MX2) for the future, running your own self-hosted email stack would become much easier. Additionally, the stronger authenticity verification would hopefully allow spam filters to be significantly less aggressive by authenticating against domains instead of IPs.
Point #6 is the biggest change – we’re no longer authenticating, or caring about, the IP address that’s sending the email. Every email can, and always would, be verified against the domain using MX2 records and the public keys in them. Send a fake spam email? It doesn’t have a signature, so it gets tossed without any heuristics. Send a real spam email? Block that domain when there are complaints. Go after the registrar (or treat domains belonging to that registrar as suspicious) if needed. This would mostly eliminate the need for IP reputation by replacing it with domain reputation – which, at least to me, is a far superior standard with more understandable and controllable outcomes (1).
Clients which implement MX2 can, optionally, have an updated encryption scheme to replace OpenPGP. Something like Apple’s Contact Key Verification. Hopefully there would be forward secrecy this time.
If you have got great counterarguments, let me hear them.
(1) This would, perhaps, be the one and only “new feature” we could advertise to users. Not getting emails? You can just type in the name of the website, and always receive the emails.
Edit 1, for clarification: For bulk senders, there would be multiple MX2 records on the domain, each containing a public key for every authorized sender. One of those records would have a marker indicating it as suitable for incoming mail.
Edit 2: This article has had a very large discussion on Hacker News. While the discussion winds down, I have some additional thoughts from there:
If there is an MX2 (ever), a sane way to share large files (like hundreds of megabytes, or even gigabytes) would be great. Designing the protocol wouldn’t be easy, especially due to spam concerns, but if I had a nickel for every link shared just to dodge email size limits… this is a real-world problem.
MX2 will literally never happen if Google and Microsoft don’t join in. They would also, of course, have considerable control on the outcome. However, if even open-source communities and developers adopted MX2 because it was easy to implement and open source… you never know what grassroots can do.
Part of me wonders what would happen if MX2 threw out SMTP for HTTP with a standardized REST API and JSON bodies. Sure – it would add a mountain of HTTP overhead and be more complex. However, it would sure as heck make implementing MX2 into a project quite easy in most programming languages, as it would just be a web server running on a custom port answering endpoints. REST APIs are also, despite their complexity, a well-documented system including for preventing spam (it’s not like Stripe or S3 lets people spam their APIs with garbage). I don’t know enough about SMTP to know if that’s a good idea – but I do know that SMTP is sub-optimal enough that Microsoft and Google don’t use it when exchanging messages with each other.
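To give a feel for why the HTTP/JSON route would be easy to implement, here is a sketch of a hypothetical JSON envelope such an endpoint might accept, plus a validator. Every field name is invented for illustration; no such spec exists:

```python
# Hypothetical required fields for an HTTP-based MX2 message envelope.
REQUIRED = {"from", "to", "subject", "body_text", "signature"}

def validate_envelope(msg: dict) -> list[str]:
    """Return a list of problems; an empty list means the envelope is acceptable."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - msg.keys())]
    # Mirrors the earlier point: an HTML view must ship with a plain-text copy.
    if "body_html" in msg and "body_text" not in msg:
        problems.append("HTML body requires a plain-text alternative")
    return problems

msg = {
    "from": "a@modern.example",
    "to": "b@legacy.example",
    "subject": "hi",
    "body_text": "hello",
    "signature": "hex-encoded-signature-here",
}
print(validate_envelope(msg))  # []
```

A receiving server could reject malformed envelopes with an ordinary HTTP 400, which is exactly the kind of well-trodden machinery REST APIs already have.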
There has been interesting commentary about being a pull protocol instead of a push protocol (i.e. instead of sending a message from X to Y; X sends Y a tiny standardized note saying to pick up a message from X). The most popular proposal of this was DJB’s Internet Mail 2000.
The idea of plain-text alternatives to HTML is probably impossible to enforce.
As some commenters have pointed out, if the public key is always on the DNS, and every MX2 implementation is required to have that public key, a sending-server-to-receiving-server email encryption becomes possible.
The idea of using HTML, at all, is controversial. Email was originally never designed for HTML, and the security risks of processing it are quite large. Using a superset of Markdown with style directives, or a customized XML schema, or even a new simple markup language all-together (Modern Mail Markup Language – M3L, claiming it now) might be an interesting thought experiment.
A consistent point that came up was that standards drift – people don’t always implement the spec right, and mistakes are made. I answer that, being a new standard, this is a chance to rigidly enforce the rules from the beginning. For example, we could put it in the spec that any incoming message that’s not signed – despite that being immediately and easily verifiable by the sender – causes a 1-hour IP ban for laziness. Just an example.
Congress has recently proposed multiple bills to regulate the internet (such as the recent “Kids Online Safety Act”), in the name of protecting children. This has caused a simple response: “Just have parents be parents and set up parental controls if they care.”
That’s easier said than done. Parental controls right now completely suck: they are incomplete, full of loopholes, extremely buggy, overly complicated, poorly designed, privacy invading, or a combination of the above. As someone who has used almost every ecosystem, here’s an eye opener.
Also here’s my rallying cry: If you don’t want Congress regulating the internet to “protect the Kids,” demand companies fix their parental controls.
Windows 11
Microsoft has Family Safety as their tool for parental controls on Windows. Here’s my hate list.
Every child must have a Microsoft Account to use it. Even the 8-year-olds. Local accounts are still supported by Windows 11 after setting up the main account, but you can’t use parental controls with them.
You can’t disable Windows Copilot, or Edge Copilot. Don’t want your kid using AI from Bing or ChatGPT when they are just learning to write? Too bad. Don’t want them using AI for any host of ethical or personal issues? Too bad.
You can’t disable the Microsoft News feed of tabloid content on new Microsoft Edge tabs, without disabling Bing (rendering you unable to offer restricted “SafeSearch” browsing of any kind, to your child, at all).
You can’t disable the Microsoft Search button, which shows… surprise, more tabloid content. Even if you have your child’s account set to “allowed websites only.” They’ll still see headlines of the day about how the world is on fire.
You can’t disable the Widgets button. The entire purpose of widgets is to serve tabloid content. And once again, “allowed websites only” doesn’t disable that or do basically anything to it.
Windows 11 comes preloaded with Movies+TV, Xbox, Microsoft News, and other apps. Unless you as a parent know about these apps, open each one individually so it appears in the dashboard, and then block it, they’re all allowed. What?
You can’t disable the Microsoft Store. You can set age restrictions; so setting everything to block all apps rated above the Age of 3 is the best you’ll be able to do.
Microsoft Office lets you embed content from Bing, and does not respect system parental controls… at all. Even if you block Bing, your 8-year-old can still watch videos to their heart’s content by embedding them in a Word document, because the internet filters only work in Microsoft Edge.
So let’s say you want to allow your 8-year-old to do schoolwork on a Windows PC. What’s the point of parental controls when you can’t disable Widgets, can’t disable the tabloid content in the Search box even if you block Bing, the Bing filter can still be bypassed using Office, and your student can use AI all they want? These are “parental controls”?
MacOS / iOS
Quite powerful, stupidly buggy. So much so that Apple even admitted as much to a reporter a few months ago. There is, in my experience, no part of macOS more buggy than Screen Time. How so?
It appears that Screen Time runs using a daemon (background process) on the child’s account. That daemon has a propensity to randomly crash after running for a few hours. When it does, all locks are disabled. Safari will just let you open Private windows without filtering of any kind, regardless of your internet filtering settings, when that happens. This state then remains until the child logs out and logs back in. Naturally, what’s the point of parental controls that randomly fail open?
The “allowed websites only” option shows Apple has never used this feature, ever. On your first login, you will be swarmed with prompts to approve random IP addresses and Apple domains, because system services can no longer communicate with the mother ship. They will nag you constantly with no option to disable them, so your first experience of enabling this is entering your PIN code a dozen or more times to approve all sorts of random junk, just to get to a usable state.
The “allow this website” button on the “website blocked” page randomly doesn’t work; and this might be because the codebase is so old (and likely untested for so long), it’s not even HTML5. Meaning it’s probably been, what, a decade and a half since anyone really looked at it last?
You can’t disable any in-box system apps. You don’t want your kid reading through the Apple Books store? You don’t want your kid seeing suggestive imagery and nudity in the Apple Music app? You don’t want your kid listening to random Podcasts from anyone in the Apple Podcasts app? You can’t do anything about it but set a 1-minute time limit. Of course, that time limit randomly doesn’t work either.
The “allowed websites” list in macOS has a comical, elementary bug showing how badly tested the code is. If you open the preference pane, it shows a list of allowed websites (say, A, B, and C). Let’s say I add a website called D, and close it. I open the preference pane again – only A, B, and C are in the list, D nowhere to be seen! D was in fact added to the system; it’s just not in the list. If I then add E (so the list is now A, B, C, and E), D will be removed, and opening the preference pane will once again just show A, B, and C.
Nintendo Switch
I generally like Nintendo, but the Nintendo Switch Parental Controls are inexcusable.
Every parental control is per-device. What about families that have, say, multiple children and can’t afford (or don’t want the risk of) multiple Switches? Too bad: every Switch is personal to the owner if you use Parental Controls.
Let’s say you then go, fine, and buy multiple Switches. There’s no ability to set a PIN lock, so theoretically the kids could just… swipe each other’s Switch?
You can’t hide titles on the menu. Let’s say you have two kids on the same Switch. One plays M-rated titles, the other plays E-rated titles. The kid who plays E-rated titles will see all the M-rated titles on the Home Screen, and nothing can be done about it. They can even launch them and play them.
You can’t disable the eShop from within the Parental Controls app. You can dig through the eShop settings to find the option to require a password before signing in, but that requires the kid to not know their own Nintendo password. If your kid uses, say, their actual email address on their Nintendo account, locking down the eShop is impossible – and they’ll see every game for sale regardless of how appropriate or inappropriate it is (Hentai Girls, Waifu Uncovered, anyone? Games you can “play with one hand”? Actual titles on the eShop).
Router based filtering
One of the best solutions. Unfortunately,
Every child must have their own device (again, poor families need not apply, making this a very financially exclusive solution).
Often, technically overwhelming for parents. What IP address is that kid’s laptop again?
Premium routers like Eero, which have very easy to use router-based parental controls, often demand subscriptions to use them. In the case of Eero, $9.99/mo. after you already paid ~$200 for the devices. That’s not a viable solution – I can’t tell a poor family to pay $200+ for routers and a $9.99/mo. subscription as their solution.
Some routers have parental controls that make me wonder what idiot thought this would work. Case in point – a Netgear router from a few years ago that advertised “block websites” and “parental controls” right on the box. Cool – but they worked by letting a parent enter, domain by domain, individual websites to block. Considering the internet’s scale, that’s criminally useless, and if I had a lawyer, I would’ve sued for false advertising.
Remember the Nintendo eShop? Try using router-based parental controls to block eShop access without blocking software updates or online play. Good luck with that. Router-based blocking has the least nuance of any of the above solutions.
But sure. It’s the parent’s job to set up parental controls. My response is, once again, if you don’t want Congress regulating the internet, parents need better tools than these.
Recently, I’ve built a navigation sidebar with Tailwind that changes appearance based on a CSS hover, and I’ve found a strange bug. I’ve found Safari will, under the right conditions, leak the cursor position during a click to completely different windows, underneath the window being clicked on.
Unfortunately, I can’t reproduce the above bug on demand. It’s happened multiple times for me, but I can’t make it happen whenever I want it to, which is frustrating and something I’m still looking into.
What I can reproduce with greater consistency is putting the mouse cursor into a position on the second window, and then typing into WordPress. This is an example recorded after completely quitting and relaunching Safari following the first video:
The only other thing of note, for anyone investigating, is that it also seems to happen when just moving the mouse cursor around, but while randomly pressing either Command or Option.
If anyone has any ideas for what is happening here or how to more consistently reproduce it, let me know. And also, hopefully, Apple sees this and makes sure that there isn’t anything else getting leaked across different windows.
Update 1: Purism states on their website, in a March blog post, that there is one production run covering the entirety of late 2019 – mid 2021, so it is actually possible to accomplish: https://puri.sm/posts/where-is-my-librem-5-part-3/ This also makes some of the production timing mentioned below look much more innocent, even if calling a product “In Stock” within two months in a pitch to investors seems a stretch. If Purism was willing to keep the “52 week lead time” visible to customers out of uncertainty, investors shouldn’t receive much better-looking alternative statements. (This also does not address whether the original 2019, or 2021, sales were more questionable than the current 2023 sale for what they would be used for.) And it does not address that the requirement of Accredited Investor status is not a “good faith” requirement as far as I can find, meaning this still could be an illegal securities sale.
Original article below
I just got this email (and an email like it) for the second time in just over a week from Purism:
Purism Supporter [sic],
5% bonus on any investment into Purism [sic], helping advance our social purpose mission.
We are contacting you either because you have directly asked us about investing in Purism, are on our newsletter, or a customer whom we thought would be interested in hearing about our investment opportunity. If you are not interested and don’t want any more emails from us, please let us know and we will quickly remove you from this private mailing list.
For the next two months you can earn an additional 5% immediate bonus on any investment.
Products in stock with less than 10 day shipping time:
Librem 14 laptop
Librem 5 USA phone (with Made in USA Electronics)
Librem AweSIM cellular service
Librem Key security token
Librem Mini mini desktop computer
Librem Server a 1U rackable server
Products shipping through backorders and in stock in July, 2023:
Librem 5 phone
Products planned to arrive within the year:
Librem 16 laptop
Librem 11 tablet
With this investment opportunity we are accepting increments starting at $1000 and allow for easy cart checkout to invest. We invite you to get more information on this investment round including the immediate 5% bonus. Find out how to invest, where we will use the funds, and our current progress in this round at our private investment page at https://puri.sm/ir/convertible-note/.
So, putting that potentially illegally misleading statement to potential investors aside, look at this next bit from their investment page:
Has there been previous investment?
Purism has grown mostly from revenue, however, Purism announced closing $2.5m in notes in December 2019. Purism has raised over $10m in total all under convertible note terms.
As stated above, we will use the investment funds for parts procurement in preparation for large production run of stock, as well as continuing development of all our freedom respecting revolutionary software stack, and for more convergent applications in PureOS for the Librem 5 phone.
Excuse me… does this almost look like some form of Ponzi scheme, if the anecdotes are true? Purism raised $2.1 million from Librem 5 orders. Then they sold this form of “stock” to get more cash in 2019, and 2021, and now 2023. They are openly saying right now that the cash raised will go to ordering parts for a large production run, which will complete orders from 2019. As this community shipping date estimation thread on their own forum shows:
Now, I can’t go on anything more than a hunch. But my hunch is that Purism is using investor funds to subsidize orders, and selling “convertible notes” to do the job. Is that illegal? I am not a lawyer, and at least it’s disclosed if you really go digging, so it probably is legal. But is it shady? Or at least unsustainable? Plus, if I am an investor… how does it feel, knowing your cash is most likely just going to dig them out of a money pit and not actually grow the company otherwise? Is that not just a tiny bit misleading, for a morally superior “Social Purpose Company”?
But then there’s one more problem. That email I got. Once again in the FAQ:
Am I an Accredited Investor?
For US Citizens, this is a good faith requirement, since there is no way for Purism to validate your accredited investor status, by investing you are stating you are an accredited investor that is defined as meeting any one of the following: earned income that exceeded $200,000 (or $300,000 together with a spouse or spousal equivalent) in each of the prior two years, and reasonably expects the same for the current year; OR has a net worth over $1 million, either alone or together with a spouse or spousal equivalent (excluding the value of the person’s primary residence); OR holds in good standing a Series 7, 65 or 82 license; OR any trust, with total assets in excess of $5 million, not formed specifically to purchase the subject securities, whose purchase is directed by a sophisticated person; OR certain entity with total investments in excess of $5 million, not formed to specifically purchase the subject securities; OR any entity in which all of the equity owners are accredited investors.
It’s my understanding that if Purism offers/advertises the investment in a public manner (a “general solicitation” … and I think that this counts as a general solicitation https://puri.sm/ir/convertible-note/ ), they must satisfy Rule 506c or Rule 506b and must take reasonable steps to verify the “accredited investor” status ( https://www.sec.gov/smallbusiness/exemptofferings/rule506c ):
Some requirements of Rule 506c:
all purchasers in the offering are accredited investors
the issuer takes reasonable steps to verify purchasers’ accredited investor status and
While Rule 506b doesn’t require everyone to be an accredited investor, they can only have up to 35 investors (in a calendar year), but Purism “must reasonably believe” the non-accredited investors have “such knowledge and experience in financial and business matters that he is capable of evaluating the merits and risks of the prospective investment”
So, let’s say I could get past all of that. Let’s say I could get past all these red flags and questions, and the fact that the solicitation might be illegal, and the lack of actual checking who the buyers are may also be illegal. The elephant in the room:
What is my note worth?
The note is worth the amount you invested, it is debt owed to you. It also earns 3% annually, and upon conversion will earn an additional 8%, at which point you will be a shareholder of Purism, SPC.
There are a lot better ways to make 3% interest. My bank account with Discover gets 3.75%. I could do an 18-month CD with them to get 4.75%. I will get 8% when my note converts… into stock of the company, if I’m understanding it correctly; so my $1,000 investment in Purism would become worth $1,080 of stock at an unclear valuation. However, if all investments are just going to fill a backlog, how much is the company actually worth? Is my $1,080 of stock going to be calculated based on how much other people invested, resulting in (arguably) a very inflated valuation?
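To put rough numbers on that, here is a back-of-the-envelope calculation. It assumes the 3% accrues with annual compounding and the 8% is applied on top at conversion; the FAQ quoted above doesn’t spell out either detail, so treat this as a reading of it, not a fact:

```python
def note_value(principal: float, years: int, converted: bool) -> float:
    """Value of a convertible note under an assumed reading of the terms:
    3% annual compounding, plus an 8% bonus applied at conversion."""
    value = principal * (1.03 ** years)
    if converted:
        value *= 1.08
    return round(value, 2)

print(note_value(1000, 0, converted=True))   # 1080.0 (immediate conversion)
print(note_value(1000, 2, converted=False))  # 1060.9 (two years, unconverted)
```

Either way, a 4.75% CD over the same horizon beats the unconverted note handily, without the valuation question mark attached.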
Consider this thought experiment (I am not an economist or lawyer, just doing my best to understand). A company named X is worth, normally, $1 million. X sells $5 million worth of product for $2 million by mistake. X sells $3 million worth of stock to cover the gap, and convinces everyone that the company is now worth $4 million because of the prior $1 million valuation plus the $3 million worth of stock sold, even though basically nothing changed about the company’s actual value once the orders are finished. In a free market, that would quickly be discovered, tanking the valuation back to much closer to $1 million and shredding the equity value. Which might be why Purism really doesn’t want you selling your notes:
Can I sell my note?
Not easily. The best way to look at convertible notes is to consider them long term investment in the future growth of a social purpose company you desire to see grow and reap the future benefits from its success. It is possible to transfer (e.g. sell) the note to other parties but that would be done separately and independently by you, notifying Purism of the legal transfer of ownership.
Now, is all of this, combined, illegal? I don’t know. But icky? It definitely feels like it. Directly soliciting people to buy “convertible notes” with a misleading statement about your ludicrous backlog, with no notice in the solicitation that US individuals cannot buy them, and with the open admission that you don’t check whether your buyers are from the US despite what the SEC rules appear to require (almost begging people to ignore the notice if they read it), all looks as sketchy as heck to me.
And so, while this is not financial advice, and I know that saying I am not a financial advisor has very little legal merit, I would advise anyone investing in Purism to view it as the equivalent of a Moody’s C or an S&P D rating. View it as a donation, not an investment.
If you’re going somewhere anonymously, or attending a politically unpopular protest, or visiting a sensitive client, you might want to turn Location Services to Off in your smartphone’s settings. Great – now you can go and do whatever it is without worrying.
Well, that would be true if we lived in an ideal world, but that switch is more of a polite “please don’t” than an actual deterrent. There are many other ways of getting your location, some of which you may not have considered, but I’m going to focus on the biggest one that even privacy-focused people regularly overlook. This will be nothing new for privacy experts, but… it’s your carrier.
Think about it. To join their network, you are literally logging in with your carrier account, which is (most likely) tied to your identity and has your payment method attached. Maybe you were clever and bought prepaid with cash, but that’s another step. Now consider what happens next: whenever your phone communicates with the network, your phone and the cell tower quickly become aware of how long a message takes to travel between them. Say, a few hundred nanoseconds to a few microseconds. Because radio waves travel at a constant speed, it doesn’t take much math to turn that delay into a radius for how far away you are. Add in two or three weaker towers in the area (the ones your phone hears while looking for a better signal), and the carrier has a pretty good idea of where you are.
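To see how little math it takes, here’s a toy sketch (assuming ideal timing and a flat plane; real networks use coarser timing-advance measurements, but the principle is identical):

```python
C = 299_792_458.0  # speed of light in m/s; radio waves travel at ~c

def distance_from_rtt(rtt_seconds):
    # The signal covers the phone-to-tower distance twice per round trip.
    return C * rtt_seconds / 2

def trilaterate(towers):
    """Locate a point from three (x, y, distance) tower measurements on a
    flat plane, by subtracting the circle equations pairwise to get a
    linear system in (x, y)."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = towers
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A ~6.7 microsecond round trip puts the phone about 1 km from the tower:
print(round(distance_from_rtt(6.67e-6)))  # 1000 (meters)
```

With three such radii, the intersection pins you down to a single point; the carrier never needs your GPS.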
Which is also why buying prepaid with cash is overrated. All the carrier has to do is look at where you are between 9PM and 5AM on most days, and they’ll have a pretty good idea of where you live. What’s the point of paying with cash if they can easily find your home address?
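That “where do they sleep” analysis is almost embarrassingly simple. A sketch, using a hypothetical log of (hour, tower) sightings:

```python
from collections import Counter

def likely_home_tower(sightings):
    """sightings: list of (hour_of_day, tower_id) records.
    The tower seen most often overnight is almost certainly 'home'."""
    overnight = Counter(tower for hour, tower in sightings
                        if hour >= 21 or hour < 5)  # 9PM-5AM
    return overnight.most_common(1)[0][0]

# Hypothetical log: tower_A overnight, tower_B during the workday.
log = [(23, "tower_A"), (2, "tower_A"), (4, "tower_A"),
       (13, "tower_B"), (22, "tower_A")]
print(likely_home_tower(log))  # tower_A
```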
This is just one way the carrier can find your location, and there’s nothing you can do about it. If you think flashing GrapheneOS and using only the stock apps makes you immune… no, it doesn’t. Every line of code on your phone could be handwritten by you, but the moment it talks to a cell tower, there’s no privacy.
If you want to learn more ways you may be identified, look into IMSI catchers; and consider that, to support E911, your phone regularly talks to cell towers, even those of other carriers, and even without a SIM card installed. No phone in the US needs a cellular plan to call 911, but that means even a SIM-free phone is still talking to towers.
Better to leave the phone at home. Or, at least in a Faraday cage you can remove it from if you are desperate.
Considering the recent squabble between Pointcrow and Nintendo, almost everyone has heard of the “DMCA Takedown.” The DMCA is a huge (and arguably unconstitutional and 70% stupid) law that has a ton of sections, with Section 512 dealing with takedowns.
However, there’s another section in the DMCA many people don’t know: DMCA Section 1201. It deals with what it calls “Technological Protection Measures.” It’s basically a 90s term for what we would now call Digital Rights Management, or DRM, but a little more widely-applied. The Library of Congress summarizes the section:
The Digital Millennium Copyright Act (“DMCA”), codified in part in 17 U.S.C. § 1201, makes it unlawful to circumvent technological measures used to prevent unauthorized access to copyrighted works, including copyrighted books, movies, video games, and computer software. Section 1201, however, also directs the Librarian of Congress, upon the recommendation of the Register of Copyrights following a rulemaking proceeding, to determine whether the prohibition on circumvention is having, or is likely to have an adverse effect on users’ ability to make noninfringing uses of particular classes of copyrighted works. Upon such a determination, the Librarian may adopt limited temporary exemptions waiving the general prohibition against circumvention for such users for the ensuing three-year period.
So, there you have it, in short: breaking any digital lock / TPM / DRM, without an exemption being created during the rulemaking every three years, is illegal. Even for fair-use cases, like repairing a tractor or jailbreaking your smartphone. DMCA Section 1201 takes precedence over any “Fair Use” claim. This point cannot be overstated: even if everything you do is otherwise legal, and even protected by law as Fair Use, if you cross DMCA Section 1201, it’s illegal.
You might ask: wait a minute, jailbreaking my iPhone is illegal? Well, it used to be, but an exemption was created for jailbreaking smartphones and tablets. However, guess what doesn’t have an exemption yet: video game consoles. Well, they do have one; you can break digital locks, but only to replace a broken disc drive, and only if you put the digital lock back afterwards.
So, believe it or not, modding your Nintendo Switch in any capacity is, under DMCA Section 1201, actually illegal in the United States. And there is historical precedent for Section 1201 enforcement, so this is not just a theoretical issue: RealNetworks lost a lawsuit for Section 1201 violations over its DVD-ripping software, and Psystar went bankrupt partly from violating Section 1201 by bypassing Apple’s lockout to make macOS run on unapproved hardware. Guess which law (among others) Gary Bowser was convicted of violating, earning him a 40-month prison sentence for selling Nintendo Switch modchips. He now owes Nintendo about $14.5 million, in part for violating this law, and will have his wages garnished by about 30% until the debt is paid in full (which, almost certainly, will never happen).
This leaves Pointcrow and game modders like him in a quandary, legally, before even getting into copyright issues, or whether Nintendo’s Terms of Use (which say no reverse engineering) are enforceable. How did he obtain a copy of the game to start modding? There are only two ways:
Piracy (Copyright infringement – illegal)
Jailbreaking his Switch to get a game dump (DMCA Section 1201 – illegal)
So, before even talking about whether he’s violated Nintendo’s copyrights, or violated Nintendo’s Terms of Service that came with the game, he could have committed a crime with up to five years in prison and $500,000 in criminal penalties. This is also why anyone saying, “but he was clearly Fair Use, Nintendo is just using illegal DMCA takedown notices!” doesn’t know what they are talking about – this exact thing is what DMCA takedowns were originally designed for!
You might also be thinking right now, “but wait a minute, what about where it all started, with NES and SNES modding?” Curiously enough, that’s not a Section 1201 violation, because the NES and SNES have no encrypted ROMs or qualifying TPMs / DRM. This is also why you can legally rip a CD with your computer (it has no encryption) but, in most cases, cannot legally rip a DVD, despite its encryption being breakable with just seven lines of Perl.
Here’s yet another theory that could partially explain why Windows 11 supports nothing below Intel 8th Gen: something is really borked in Skylake (Intel 6th Gen), and operating system vendors are eager to get away from it, and, when possible, from its refresh (7th Gen) too.
Consider:
Windows 11 doesn’t support Skylake, but does support a few Kaby Lake processors (i7-7820HQ, some Intel X-series).
There are plausible alternative motives for all of those events (Windows 11 wanting HVCI and MBEC, macOS trying to phase out Intel, Microsoft trying to heavily push Windows 10)… but when you add them all up, and factor in an ex-Intel engineer saying Skylake’s “abnormally bad” QA pushed Apple over the edge, it begins to look like Skylake is something everyone wants to drop as soon as possible. Or, at a minimum, disclaim responsibility for supporting.