British Government’s open consultation on copyright and artificial intelligence
Seems the British Government has set up an open consultation on copyright and artificial intelligence, to discuss how artwork stored on cloud servers, or processed using cloud-based software, should be treated with regard to being used to train AI (artificial intelligence) machines. Essentially, the Government offers three possible choices:
- Do nothing and leave things as they are.
- Customers opt in. LFITCs (large foreign information technology corporations) can only use their customers’ work if the customer has expressly granted permission.
- Customers opt out. LFITCs can use their customers’ work unless the customer has refused permission.
Frankly, I had always assumed this to be an area where the British Government would pursue a policy of having no policy, preferring instead to allow the courts to sort out the mess. So top marks to our Government for at least trying to protect us this time. However, I fear this is where the good news ends.
First thoughts
Firstly, it seems AI may prove to be a bigger threat to artists than I first thought. Certainly this will likely be the case for artists who don’t have their wits about them. It seems the Government’s preference is for an ‘opt out’: LFITCs may use customers’ work to train their AI, unless the customer specifically says they cannot do so.
Moreover, I am doubtful that anyone in government or in the opposition has the technical knowledge to understand the raft of complex technical issues and the nuances contained therein. Instead I suspect that officials will rely upon advice from consultants, drawn mainly from the same LFITCs that already enjoy significant patronage from the British Government.
Meantime I fear the legislation itself will likely be characteristically British – botched, badly framed, watered-down and generally half-arsed. Consequently, the courts will still have to sort out the mess. And, as we all know, the LFITCs have much deeper pockets than we do. Therefore it is reasonable to expect that most of the subsequent legal challenges are unlikely to be resolved in the public’s favour.
I would add that in my view, legislation of this sort, at this point in time, is far too little, far too late. The Government is attempting to shut the stable door while the proverbial horses are galloping off into the sunset, several km away.
Fait Accompli
Fact is, LFITCs already have our data, masses of it, and it’s already been assimilated, at least to some extent. In fact the rot set in over two decades ago when LFITCs such as Google and Yahoo pulled an immense sleight of hand, conning the public into allowing them to “cache” our images in their “search engines”. Thus the majority of the planet’s pictures are already “in the system”.
It got worse when we allowed proprietary operating systems to send a constant stream of personally identifiable data back to their manufacturers. While they did that, they also conned some folks into renting software rather than buying the licence outright, thus allowing these LFITCs to keep their fingers permanently embedded in our wallets too.
Don’t care
To make matters even worse, I fear most users don’t care much about copyright and artificial intelligence anyway. LFITCs have been pilfering and plundering our data for years in a most outrageous manner, in my view – e.g. Meta. Yet most people seem almost completely unfazed by this. Indeed, I am often accused of being “paranoid” and/or “communist” for even daring to mention it. So people will have their work assimilated into AI without understanding the full implications of this.
If the opt-out method is adopted, and you continue to use LFITCs’ products and services without specifically ticking the appropriate boxes, they will effectively be allowed to steal your work, and there will be nothing you can do about it. The result is that if you trust LFITCs and their self-serving software and leaky “clouds” with your data, they will inevitably find ways to steal, abuse and profit from your misplaced trust. In fact they have been doing so for years, on a truly industrial scale.
I would add, as an aside, that for those of us who do care, there are significantly safer alternatives out there. But sadly most people either don’t have the time or can’t be bothered to learn and deploy them. LFITCs know this, of course, and are profiting from their own customers’ ignorance and inertia.
Something Government actually could do, perhaps
Government is in a position to attach a very important additional condition to the use of AI in this manner: namely, that all “AI knowledge” gathered should be open-sourced and made a matter of public record – so that everybody can use it.
I don’t think I am alone in my aversion to allowing foreign billionaires to steal our intellectual property and make themselves even richer than they are already, at our expense. However, if one removes the profit motive and makes whatever is gathered a matter of public record, then I think much of the current skulduggery would pretty much evaporate. After all, why go to all the effort of stealing something if one has to give it all back after it has been stolen?
More importantly, AI in this context would become a true publicly-owned resource that we can all enjoy. It wouldn’t stop AI being developed. In fact I suspect the lads and lasses at Stable Diffusion would love the idea. But it would go a long way to ensuring AI was used for the good of the many rather than the profit of a few. Granted, I’m struggling to frame that last concept in a manner that yer average politician might actually understand.
Fortunately the consultation doesn’t have to be completed until 2025-02-25. Moreover, the consultation site allows one to save one’s answers and come back to them later. I was quite impressed with this feature, actually. If one is required to give thoughtful responses, one needs time to pause for thought.
Have your say on copyright and artificial intelligence
Remember, this consultation closes 2025-02-25 23:59 UTC.