So what, are we giving Mozilla a free pass to do anything now? Is the new bar “not quite as shitty as Google”?
You don’t have to install drivers or CUPS on client devices. Linux and Android support IPP out of the box. Just make sure the CUPS instance on the server is multicasting to the LAN.
You may need to install Avahi on the server if it’s not already there (that’s what does the actual multicasting). The printer(s) should then automagically appear in the print dialogs of apps on Linux clients and in the printer service on Android.
On Linux it may take a few seconds for a printer to appear after you turn it on, and it may not appear while it’s off. On Android it shows up anyway, as long as the CUPS server is up.
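If nothing shows up, a quick way to check whether the server is actually advertising anything is to browse for IPP services on the LAN yourself. A minimal sketch using the python-zeroconf package (my choice of tool, not anything CUPS-specific; the 10-second listen window is arbitrary):

    # Browse the LAN for IPP printers advertised over mDNS/DNS-SD.
    # Requires: pip install zeroconf
    import time
    from zeroconf import Zeroconf, ServiceBrowser

    class IppListener:
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            addrs = info.parsed_addresses() if info else []
            print(f"found: {name} at {addrs}")

        def remove_service(self, zc, type_, name):
            print(f"gone: {name}")

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_ipp._tcp.local.", IppListener())
    time.sleep(10)  # listen for announcements for a bit
    zc.close()

If nothing shows up here, the problem is on the server side (CUPS sharing or Avahi), not in the clients.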
From what I understand, OP’s images aren’t the same image, just very similar.
Any PC can do that; it’s called “status after power off” or something like that.
Isn’t it fourth?
Bayesian filters are statistical; they have nothing to do with machine learning.
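For the curious, the classic spam-filter flavor of this is plain word statistics. A toy sketch (the per-word probabilities are invented for illustration; real filters estimate them from counts over a labeled corpus):

    # Toy Bayesian spam score in the spirit of classic spam filters.
    # Probabilities below are made up for illustration.
    SPAM_PROB = {"viagra": 0.99, "free": 0.80, "meeting": 0.05}

    def spam_score(words):
        # Combine per-word probabilities under a naive independence assumption.
        p_spam, p_ham = 1.0, 1.0
        for w in words:
            p = SPAM_PROB.get(w, 0.5)  # unknown words are neutral
            p_spam *= p
            p_ham *= 1.0 - p
        return p_spam / (p_spam + p_ham)

    print(spam_score(["free", "viagra"]))  # close to 1.0 -> spammy
    print(spam_score(["meeting"]))         # close to 0.0 -> ham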
You should consider whether you really want to couple your application super tightly to HTTP.
Will it always be consumed exclusively over a RESTful HTTP API that you control, with exactly one hop to the client, or only through hops that can be trusted never to alter the HTTP metadata significantly? In that case you can afford to make HTTP status codes semantically relevant to your app.
But maybe you need to pass data through multiple different kinds of layers and mechanisms (socket protocols, pub-sub, file storage etc.). In that case you want all your semantics to be independent of any form of transport.
It’s a perfectly fine way of doing things as long as it’s consistent and the spec is clear.
HTTP is a transport layer. You don’t have to use its codes for your application layer. It’s often done that way, but it’s not the only way.
In the example above, the transport layer is saying “OK, I’ve delivered your output”, which is technically correct. It’s not concerned with logical errors inside what it was transporting, just with the delivery itself.
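Concretely, that separation usually looks like an envelope: the transport reports delivery, the body carries the app’s own status. A minimal sketch (the field names are my own choice, not any standard):

    # The transport (HTTP) answers "was it delivered?"; the envelope
    # answers "did the operation succeed?". HTTP 200 either way.
    def make_envelope(ok, data=None, error=None):
        return {"ok": ok, "data": data, "error": error}

    success = make_envelope(True, data={"user_id": 42})
    failure = make_envelope(False, error={"code": "NOT_FOUND",
                                          "message": "no such user"})

The same envelope then travels unchanged over a socket, a message queue, or a file on disk.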
If any client app is blindly parsing the body as JSON without checking (at the very least) the content type and size, it deserves what it gets.
If you want to make it part of your API spec to always return JSON, that’s one thing, but don’t do it to make up for poorly written clients. There’s no end of ways in which clients can fail. Sticking to a clear spec is the only way to preserve your sanity.
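For illustration, here’s the bare minimum a client should check before touching the body, sketched with the requests library (the size cap is an arbitrary number):

    # Parse a response as JSON only after checking content type and size.
    # Requires: pip install requests
    import requests

    def fetch_json(url, max_bytes=1_000_000):
        resp = requests.get(url, timeout=10)
        ctype = resp.headers.get("Content-Type", "")
        if "application/json" not in ctype:
            raise ValueError(f"expected JSON, got {ctype!r}")
        if len(resp.content) > max_bytes:
            raise ValueError("response body too large")
        return resp.json()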
It’s impossible to tell how meaningful Backblaze’s numbers are, because we don’t know the global failure rate for each model they test, so we can’t calculate the statistical significance. Also, there are other factors involved, like the age of the drives and the type of workload they were used for.
Buying more reliable devices can definitely save you time and headaches in the future by making failures less frequent.
That’s a recipe for sorrow. Don’t waste time on “reliability” research, just plan for failure. All HDDs fail. Assume they will and backup or replicate your data.
Any difference you personally experience between the three big brands is meaningless. For any failed HDD you have, there’s going to be another person who swears by the same brand and has had five of their drives running for 10 years without a hitch.
Buy whatever’s cheaper in your area and stop worrying. Your reliability should be assured by backups anyway, not by betting on a single drive. Any drive can fail.
For a home setup you don’t care, because you should have either redundancy or backups (preferably both).
So that typically means buying the cheapest new HDD from one of the established brands (Seagate, Western Digital, Toshiba), in the right size for your needs, that you can afford to buy at least twice (for the aforementioned backups or redundancy), or even thrice, and replacing it as soon as needed.
In other words, there’s no need to speculate on how long an HDD will last; you simply replace it when needed.
Please also note that above 10 TB, consumer HDDs are increasingly being replaced by enterprise models, which run hotter and make more noise.
This is not a new problem; .internal is just a new gimmick, but people have been using .lan and whatnot for ages.
Certificates are a web-specific problem, but there’s more to intranets than HTTPS. All devices on my network get a .lan name, but not all of them run a web app.
As opposed to what, the domain certificate? Which can’t be air-gapped because it needs to be used by services and reverse proxies.
If you mean properly signed certificates (as opposed to self-signed), you’ll need a domain name, and you’ll need your LAN DNS server to resolve a made-up subdomain like lan.domain.com. With that you can get a wildcard Let’s Encrypt certificate for *.lan.domain.com, and all your https://whatever.lan.domain.com URLs will work normally in any browser (for as long as you’re on the LAN).
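Before requesting the certificate it’s worth confirming the LAN DNS side works. A quick sketch with the dnspython package (myhost.lan.example.com is a hypothetical placeholder for your own name):

    # Check that the LAN DNS server resolves a name under the made-up subdomain.
    # Requires: pip install dnspython
    import dns.resolver

    answer = dns.resolver.resolve("myhost.lan.example.com", "A")
    for record in answer:
        print(record.address)  # should print the device's LAN IP

Note that Let’s Encrypt only issues wildcard certificates via the DNS-01 challenge, so the issuance step goes through your public DNS, not your LAN server.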
But denormalized databases are not a new thing. There are engines built on that idea on purpose, in order to be more efficient, like Cassandra. Most data warehousing engines use this “trick”. And of course you can do it with a regular RDBMS too.
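As a toy illustration of doing it in a regular RDBMS, here’s a sketch with Python’s stdlib sqlite3 (schema and data invented for the example): the customer’s name is copied onto every order row, so the read path never needs a join.

    # Deliberate denormalization: duplicate customer_name onto each order
    # so the common read path needs no join (writes must keep copies in sync).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER,
        customer_name TEXT,   -- duplicated on purpose
        total REAL)""")
    db.execute("INSERT INTO orders VALUES (1, 7, 'Alice', 19.99)")
    db.execute("INSERT INTO orders VALUES (2, 7, 'Alice', 5.00)")

    for name, total in db.execute("SELECT customer_name, total FROM orders"):
        print(name, total)  # no join at read time

The trade-off is the usual one: faster reads in exchange for updates that have to touch every copy.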
To some extent all software is disposable. Some places take it to a more ridiculous level than others. If they have money to burn just make sure as much of it as possible ends up in your pocket.
Then why do they offer a separate, distinct DDoS mitigation feature on the enterprise plans? And did you notice they call them “mitigation” and not “protection”? 🙂
Look at the description of each one: the free one “stops illegitimate traffic at the edge”. Meaning they’ll serve from cache; it’s not getting through to your actual site. You can get caching from any CDN service; it doesn’t have to be CF. All CDN services are distributed and will try to keep serving for as long as possible, because their whole purpose is to deal with traffic spikes.
And if you want to know how long CF (or any service) will serve from cache and how far they’ll go for an account (especially a free account), check the terms of service, not the plans. The plans are made to sell to you; the fine print is in the terms.
Anyway, I really don’t understand people’s obsession with DDoS, particularly self-hosting people. The chances of their little website ever being the target of a DDoS are astronomically small. Many of them don’t take proper backups and don’t worry about theft, fire, or power surges, which are far more likely, but go frantic when they hear about features they’ll never use.
Use your common sense. They’re not going to expend any significant resources to keep up a free website.
They have a small amount of capacity available for mitigating DDoS across all free accounts together, while resources last. If you happen to fit in that capacity at any given time, that’s nice; if you don’t, you go down.
Why do you assume they haven’t warned Mozilla in advance?
Also, Mozilla was fully aware that what it was doing was in breach of the GDPR. I find it extremely hard to believe that the makers of Firefox are not fully familiar with it by now.
Last but not least, Mozilla is doing this for financial gain. It’s selling our data to advertisers. Why should we excuse it? It’s a very hostile act.
If Mozilla has hit rock bottom and has been reduced to selling our data to survive, then that’s that. We’ll find another way and another FOSS browser. Accepting it is not an option.