I keep waiting for someone to come up with some kind of explanation for this that even sorta makes sense. No, as far as I can tell, companies just work this way.
It’s a historical quirk of the industry. This stuff came around before open source software and the OSI definition were ever a thing.
10BASE5 ethernet was an open standard from the IEEE. If you were implementing it, you were almost certainly an engineer at a hardware manufacturing company that made NICs or hubs or something. If it was $1,000 to purchase the standard, that’s OK, your company buys that as the cost of entering the market. This stuff was well out of reach of amateurs at the time, anyway.
It wasn’t like, say, DECnet, which began as a DEC project for use only in their own systems (but later did open up).
And then you have things like “The Open Group”, which held X11 for a while and still controls the Unix trademark. They are not particularly open by today’s standards, but they were at the time.
The tooling around it needs to be brought up to snuff. It seems like it hasn’t evolved much in the last 20+ years.
I had a small team make an attempt to use it at work. Our conclusion was that it was too clunky. Email plugins would fool you into thinking a message was encrypted when it wasn’t. When it did encrypt, the result wasn’t consistently readable by plugins on the receiving end. The most consistent method was to write a plaintext doc, encrypt it, and attach the encrypted version to the email. Also, key servers are set up by amateurs who maintain them in their spare time, and they aren’t very reliable.
One of the more useful things we could do is have developers sign their git commits. GitHub can verify the signature using a similar setup to SSH keys.
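A minimal sketch of that setup (the key ID and key path below are placeholders, and the SSH variant needs git 2.34+):

```shell
# GPG-based signing: point git at your key ID (placeholder) and sign by default
git config --global user.signingkey ABCD1234
git config --global commit.gpgsign true

# Or, with git >= 2.34, sign with an existing SSH key instead
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign a single commit explicitly
git commit -S -m "signed commit"
```

GitHub shows the commit as “Verified” once the matching public key is uploaded to your account.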
It’s also possible to use TLS in a web of trust way, but the tooling around it doesn’t make it easy.
I hate grammars in anything that don’t support trailing commas. It’s even worse when they’re supported in some contexts and not others, like lists being OK but not function parameters.
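JSON is a handy example of a grammar that rejects trailing commas everywhere; a quick check:

```python
# JSON's grammar forbids trailing commas entirely, so this parse fails.
import json

try:
    json.loads('[1, 2, 3,]')
except json.JSONDecodeError as e:
    print("rejected:", e.msg)
```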
I set up my OPNsense firewall for IPv6 recently with Spectrum as an ISP. I followed this howto from The Other Site:
Even as someone who has a background in networking, I’d have no idea how to figure some of that stuff out on my own (besides reading a whole lot and trying shit that will probably break my network for a weekend). And whatever else you might say about Spectrum, they have one of the saner ways to implement it; no 6to4 or PPPoEv6 or any of that nonsense.
I did set the config for a /54, but Spectrum still gave me a /64. Which you can’t subnet in IPv6. Boo.
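The arithmetic behind that complaint: each LAN wants its own /64 (SLAAC depends on it), so the delegated prefix length decides how many networks you can carve out.

```python
# Free bits between the delegated prefix and the per-LAN /64 boundary
# determine how many /64 networks a delegation can be split into.
for prefix in (56, 60, 64):
    print(f"/{prefix} delegation -> {2 ** (64 - prefix)} /64 subnet(s)")
```

A /64 delegation leaves zero free bits: exactly one network, nothing to split.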
Oh, and I’m not 100% sure if the prefix is static or not. There’s no good reason that it should change, except to make self-hosting more difficult, but I have a feeling I’ll see it change at some point.
So basically, if this is confusing and limiting for power users, how are average home users supposed to do it?
There are some standardization efforts that could make this easier, but ISPs seem to be doing everything they can to make it as painful as possible. Which is to their own detriment: sticking to IPv4 makes their networks more expensive, less reliable, and slower.
S-expressions are basically a direct notation for the AST a compiler would normally generate, and they can be extremely flexible. M-expressions were supposed to be the programming part of Lisp, and S-expressions the data part. Lisp programmers noticed that code is just another kind of data to be manipulated and ended up using only S-expressions.
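The code-is-data point can be sketched in a few lines; this is an illustrative toy, not anything from Lisp itself:

```python
# A tiny S-expression evaluator: programs are just nested lists (data),
# so other programs can build and manipulate them directly.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if not isinstance(expr, list):
        return expr                      # an atom: just a number
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# The Lisp form (+ 1 (* 2 3)) written directly as data:
print(evaluate(["+", 1, ["*", 2, 3]]))  # 7
```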
Logo is arguably a Lisp with M-expressions. But whatever niche Logo had is taken by Python now.
I’d like something akin to XML DOM for config files, but not XML.
The one benefit of binary config (like the Windows Registry) is that you can make a change programmatically without too many hoops. With text files, you have a couple of choices for programmatic changes:
- Hack at the raw text with regexes and hope nothing else matches
- Parse the file, change the value, and re-serialize it, throwing away the user’s comments and formatting
- Use a format-preserving API that round-trips comments and formatting intact
That last one probably exists for very specific formats in very specific languages, but it’s not common. It’s a little more cumbersome to use as a programmer (anyone who has worked with XML DOM will attest to that), but it’s a lot nicer for end users.
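For the XML case, even Python’s stdlib DOM can make a targeted change while round-tripping comments (though minidom won’t preserve every formatting detail); a minimal sketch:

```python
# Programmatic edit via a DOM: parse, mutate one attribute, re-serialize.
# The hand-written comment in the document survives the round trip.
from xml.dom import minidom

doc = minidom.parseString(
    "<config><!-- tuned by hand --><cache size='64'/></config>"
)
doc.getElementsByTagName("cache")[0].setAttribute("size", "128")
print(doc.toxml())
```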
No matter which tool you’re using, this:
```
- |> LEFT JOIN |> FROM foo |> GROUP BY clusterid |> SELECT clusterid, COUNT(*)
+ |> LEFT JOIN |> FROM foobar |> GROUP BY clusterid |> SELECT clusterid, COUNT(*)
  ON cluster.id = foo.clusterid
```
is always less readable than:
```
  |> LEFT JOIN
- |> FROM foo
+ |> FROM foobar
  |> GROUP BY clusterid
  |> SELECT clusterid, COUNT(*)
  ON cluster.id = foo.clusterid
```
And this isn’t even the worst example I’ve seen. That would be a file that had a bug due to duplicated entries in a list, and it became very obvious as soon as I converted it to something akin to the second version.
What about respecting the reader of the diff when there’s a change in the middle?
Possibly unpopular opinion: more languages should embrace Unicode symbols in their syntax with multi-character ASCII equivalents, like Raku did. I set up my vim config to automatically replace the ASCII version with the Unicode one. It wasn’t hard, it makes the code a little more compact, and with good character choices, it stands out in an understandable way.
It’s used that way in Elixir. I don’t find it a problem.
Steve Jobs and Steve Wozniak are the classic example. Jobs had some technical skill, but not a lot. He’s the “ideas guy” that all the other “ideas guys” try to be. I don’t have a lot of respect for the “ideas guy”; Jobs was a manipulative narcissist, and he should not be emulated.
Woz, OTOH, is an absolute genius, and one of the most genuinely nice people you’ll ever meet. Apple made him enough money that he could do whatever he wanted with his life, and what he wanted was to do cool things with computers and pull harmless pranks.
Bill Gates had Steve Ballmer and Paul Allen. That was more of a collaboration. They all had some level of technical and business skill mixed together. It wasn’t quite the complementary skillset we see with Jobs and Woz. A lot of Microsoft’s success was being in the right place at the right time to make the right deal.
It’s a series where a dragon kidnaps a princess, and a plumber from New York must save her. To do so, he must gather mushrooms by punching bricks from below as he jumps, jump on turtles to make them hide in their shells, and dodge fire-breathing plants.
In the most recent 2D incarnation, the fire-breathing plants will sing at you.
The people who made this were on a lot of drugs.
Household income covers a whole family that lives together.
I think we can put a specific maximum on a comfortable western lifestyle. You can certainly argue that a comfortable western lifestyle is already far and away better than what most people on Earth will ever see. The cutoff is somewhat arbitrary, but past it, most of us are going to agree that the money is excessive.
It’s USD 10 million.
Why? Let’s start with the Trinity study:
https://thepoorswiss.com/updated-trinity-study/
The original looked at a standard retirement portfolio and asked how much you can withdraw over a thirty-year retirement. It took market data from 1925 through 1995 (the updated version linked above goes to 2023) and checked every thirty-year window over that period with various withdrawal rates.
What it found is that if you withdraw 4% of the portfolio the first year, and increase that amount by inflation each subsequent year, it’s highly unlikely the portfolio will run out within the thirty-year window. The period covered has market ups and downs, high inflation and low, and the 4% rule holds throughout.
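A sketch of that withdrawal rule under made-up constant assumptions (the study itself replays actual historical windows; the 7% return and 3% inflation here are illustrative only):

```python
# Trinity-style rule: withdraw a fixed % of the STARTING balance in year
# one, then grow the withdrawal by inflation each year thereafter.
def simulate(portfolio, rate=0.04, years=30, ret=0.07, inflation=0.03):
    withdrawal = portfolio * rate            # first-year withdrawal
    for _ in range(years):
        portfolio = (portfolio - withdrawal) * (1 + ret)
        withdrawal *= 1 + inflation          # inflation-adjust each year
        if portfolio <= 0:
            return 0                         # the money ran out
    return portfolio

print(simulate(1_000_000) > 0)   # survives under these assumptions
```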
The updated study above says a 3.5% withdrawal rate had a high chance of lasting 50 years.
Let’s play it ultra safe and put it at 2.5%. With $10M, that gives us $250,000/year to play with, and our rules adjust that for inflation.
[Median household income in Manhattan is $128k](https://www.point2homes.com/US/Neighborhood/NY/Manhattan-Demographics.html). We’re pulling almost twice that. I feel comfortable saying a person can live nicely in any city on this income.
So there you go: $10M. If you want a 100% tax bracket, that’s a good place to put it. Any more money past that is just a game that hurts everyone else.
TLS already has algorithms hardened against quantum computing. The effects of quantum computing on encryption are greatly exaggerated, anyway; the number of qubits needed to break current encryption may be too large to ever be feasible.
Get IPv6 going and stuff like SNI becomes unnecessary.
Encryption everywhere isn’t about the individual content. By making it ubiquitous, it’s harder for bad actors to separate the encrypted data they want from the data they don’t. If only special content is encrypted, then the mere fact that it’s encrypted is a flag for them. Ubiquity also makes encryption much harder to ban: it’s pretty much impossible to outlaw the algorithms in TLS at this point, because too much depends on them.
What, you don’t love downloading a zip file that contains an MSI (which is perfectly capable of compressing its internal data on its own)?
If you’re going to lecture about “maturing”, then maybe don’t start by jumping to conclusions based on the first sentence.
There are downsides for the companies, though. Interviewing new candidates costs money and takes time away from people already on the team. If everyone is switching jobs to get a higher salary, then companies aren’t saving anything in the long run. They also have a major knowledge base walking out the door, and that’s hard to quantify.
It’s a false economy.
If I were to steelman this, it’d be cross-pollination. Old employees get set in their ways and tend to put up with problems; they’ve simply integrated workarounds into their workflow. New people bring in new ideas, point out how broken certain things are, and agitate for change.
This, I think, doesn’t totally sink the idea of the “company man” who sticks around for decades. It means there should be a healthy mix.