“Open source” is not a license, it’s a description. Things can be free with no license restrictions and still not be “open source”.
A freely available and unencumbered binary (e.g., the model weights) isn’t the same thing as open-source. The source is the data. You can’t rebuild the model without the data, nor can you verify that it wasn’t intentionally biased or crippled.
Did the image get copied onto their servers in a way they had no legal right to? Then they violated copyright. Whatever they do after that isn’t the copyright violation.
And this is obvious because they could easily assemble a dataset with no copyright issues. They could also try to get permission from the copyright holders for many of the other images, but that would be hard and/or costly, and some would refuse. They want to use the extra images but don’t want to get permission, so they just take them, like anyone else who wants an image but doesn’t want to pay for it.
In life, people will frequently say things to you that won’t be the whole truth, but you can figure out what’s actually going on by looking at the context of the situation. This is commonly referred to as “being deceptive” or sometimes just “lying”. Corporate PR and salespeople, the ones who put out this press release, do it regularly.
You don’t need to record the content categories of searches to make a good tool for displaying websites; you need that to predict what users will search for. They’ve already said they want to focus on AI and linked to an example of the system they want to improve: their site recommender, complete with sponsored recommendations that could be sold at a higher price if the Mozilla AI could predict that “people in country X will soon be looking for vacations”.
The example of the “search optimization” they want to improve is Firefox Suggest, which has sponsored results that could be promoted (and cost more) based on predicted interest in recently trending topics in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of thing you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly, predicting waves of infection from the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.
You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as genuinely modifying a trained network. You’re still generally stuck with the original trained capabilities; you’re just reworking the final layer(s) to redirect/tune them towards your problem. You can’t add pet faces to a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it.
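As a rough illustration of why fine-tuning leaves the original capabilities baked in, here’s a minimal LoRA-style sketch, assuming PyTorch; the class name and shapes are mine, not from any particular library, and real LoRA wraps projections inside a pretrained transformer rather than a bare linear layer:

```python
# Minimal sketch of a LoRA-style adapter, assuming PyTorch.
# A standalone linear layer is used just to show the mechanics.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        # The pretrained weights stay frozen: whatever the base
        # learned (or didn't) from its original data is untouchable.
        for p in self.base.parameters():
            p.requires_grad = False
        # Only these two small matrices get trained.
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained output plus a low-rank learned correction.
        return self.base(x) + (x @ self.A) @ self.B
```

Everything the frozen base learned from data you’ve never seen is still in there; you can only layer corrections on top of it.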
In any case, if the inference software is actually open source and all the necessary data is free of any intellectual property encumbrances, it runs without internet access or non-commodity hardware.
Then it’s open source enough to live in my browser.
So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.
What does “open source” mean to you? Just free/noncorporate? Because a “100% open source model” doesn’t really make sense by the traditional definition. The “source” for a model is its data, not the code and not the model itself. Without the data you can’t build the model yourself, can’t modify it, and can’t inspect why it does what it does.
Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, intentionally or unintentionally, do most of the shaping of the resulting system. When it’s published, it’ll just be a Python project with some magic numbers and “put data here”, with no indication of what went into selecting the data or choosing those parameters.
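Purely as a hypothetical sketch of what that looks like (every name and constant below is invented for illustration):

```python
# Hypothetical skeleton of a "published" training repo.
import torch
import torch.nn as nn

# Magic numbers, no rationale given anywhere in the repo.
LEARNING_RATE = 3e-4
HIDDEN_DIM = 512
DROPOUT = 0.1

# The architecture: fully disclosed, and mostly uninteresting.
model = nn.Sequential(
    nn.Linear(HIDDEN_DIM, HIDDEN_DIM),
    nn.ReLU(),
    nn.Dropout(DROPOUT),
    nn.Linear(HIDDEN_DIM, HIDDEN_DIM),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)

# "Put data here" -- the selection criteria, filtering, and sources
# that actually shaped the model are exactly what never ships.
# dataset = load_shards("data/")
```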
Mozilla wants to be an AI company. This is data collection to support that. Telemetry to understand the user browsing experience doesn’t need to be content-categorized.
This isn’t even telemetry, it’s data collection for AI. That they refused to say so lets you know that they think what they’re doing needs to be obfuscated.
Telemetry doesn’t need topic categorization. This is building a dataset for AI.
Inconclusive = pr0n is probably a pretty reliable mapping.
You absolutely do not know what you’re talking about. This is just trivial copyright law, but there’s a weird internet mythology that if you can access something on the net you can take it as long as you don’t share it further. The reason the mass-sharers tended to get prosecuted is because they were easier and more valuable targets, not because the people they were sharing it with weren’t also breaking the law.
Tokenizing and calculating vectors or whatever is not the same thing as distributing copies of said work.
It very much is. You can’t just run a cipher on a copyrighted work and say “it’s not the same, so I didn’t copy it”. Tokenization is reversible to the original text. And “distributing” is separate from violating copyright. It’s not distriburight, it’s copyright. Copying a work without authorization for private use is still violating copyright.
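To make “reversible” concrete, here’s a quick round trip through a real tokenizer, assuming OpenAI’s tiktoken library as the example; any BPE tokenizer behaves the same way:

```python
# Round-trip a string through a real tokenizer, assuming the
# tiktoken library is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Any copyrighted passage survives this round trip intact."
tokens = enc.encode(text)           # just a list of integers...
print(tokens[:8])
print(enc.decode(tokens) == text)   # ...but losslessly reversible: True
```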
It’ll also do the revenue-maximizing sort of layoffs, which are also a really bad thing in a society where basic necessities are tied to employment. The execs will also fuck up a bunch in humorous ways, but that’s nothing more than a comforting distraction from the real and present danger that automation at this level presents to a society built around employment.
What do you think happens to data when it’s scraped? Copying the data is a fundamental requirement for using it in training. These models are trained in big datacenters where the original work is split up, tokenized, and used over and over again.
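Mechanically, “scraping” is nothing more exotic than this (a minimal sketch, assuming the requests library; the URL and filename are placeholders):

```python
# Minimal sketch of scraping, assuming the requests library.
import requests

resp = requests.get("https://example.com/some-copyrighted-page")
# A copy already exists in memory at this point; saving it just
# makes that copy durable for the many training passes that follow.
with open("page-0001.html", "wb") as f:
    f.write(resp.content)
```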
The difference between you training a model and you reading a book (put online by its author in clear text, to avoid the obvious issue of actual piracy for human use) is that reading it on a website is the intention of the copyright holder, and you as a person have a fundamental right to remember things and be inspired. You don’t, however, have a right to copy and use the text for other purposes, whether that’s making a t-shirt with a memorable line, printing it out to give to someone else, or tokenizing it to train a computer algorithm.
Downloading copyrighted stuff from the internet isn’t “surveillance”.
Knowing how nav systems work would make them more likely to find against Google, because an online nav system is trivially updatable. Even if they wanted to be extra cautious, a simple call to the local police or a peek at a satellite image from the preceding 9 years would give confirmation.
This is a ridiculous argument. We set limits on things all the time. That the limit will be arbitrary doesn’t mean there simply cannot be liability. 1 year is fine, 6 months is fine, hell, 1 month is fine. The company’s internal processes will expand or contract to fit legal liability.
A license that requires source. And since then there have been many different licenses, all with the same requirement. Giving someone a binary for free and saying they’re allowed to edit the hex codes and redistribute it doesn’t mean it’s open source. A license to use and modify is necessary but not sufficient for something to be open source. You need to provide the source.