Sorry, once again (for newcomers to the debate): a smaller and hopefully clearer recap of the core issue and the points up for debate.
Rough picture of matters thus far:
An organization (LAION) provides huge datasets of links (billions) to images (much of it copyrighted material), including metadata describing them (they stay out of the law's reach because they store no images/works directly). Many companies are now involved with these datasets (e.g. Midjourney, DALL-E, Shutterstock, etc.), and the public is also getting involved/included.
The work of many artists is in these link sets (yours probably too), and more gets added/updated frequently. When a company uses these link sets for AI training, the actual images (several million of them) are downloaded into the VRAM of GPU servers ((the core questionable event happens here)) to produce the end product (e.g. Stable Diffusion, DALL-E, Midjourney). Once done, they delete the images from VRAM (evidence gone). Once the end product is solidified, it is very hard to trace back which images were involved (some essential data potentially remains in a single file), and what went in can't come back out (a lot is in already and more keeps going in; the products now have to compete, so they must get better and need more essence).
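To make the mechanics above concrete, here is a minimal sketch (not any vendor's actual pipeline) of why the dataset itself contains no pixels: each row is just a URL plus a caption, and the trainer fetches the image into memory, uses it, and discards it. The row fields, the example URL, and the train_step call are assumptions for illustration only.

```python
import io
import requests
from PIL import Image

# A LAION-style dataset row: a link plus descriptive metadata, never the pixels.
dataset = [
    {"url": "https://example.com/artwork.jpg", "caption": "a painting of a fox"},
    # ...billions more rows like this in the real link sets...
]

def fetch_image(url: str):
    """Download one linked image straight into memory; nothing touches disk."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return Image.open(io.BytesIO(resp.content)).convert("RGB")
    except Exception:
        return None  # dead or blocked links are simply skipped

for row in dataset:
    image = fetch_image(row["url"])
    if image is None:
        continue
    # train_step(image, row["caption"])  # hypothetical: the weights absorb the image
    del image  # afterwards only the model weights remain; the downloaded copy is gone
```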
The companies involved use loopholes in copyright law, under the pretense of scientific use, to commit the largest-scale "art theft" ever witnessed (to build billion-dollar tech companies/products)?
Most of us seem OK with it because some parts of the product are thrown to the public for free?
The companies will probably go full throttle (3D next) if they notice public discourse is fine with this type of default opt-in, as long as they give away some parts of the product for free?
The general public will probably commit the same type of unlicensed art theft to customize their own AI instances if there is no common ground about these use cases?
Are we soon looking at artistic cannibalism on a never-before-seen scale?
Hardly anyone cares, and that's fine?
What can we do if we care?
Just a few informed artists updating their custom licenses: probably little to no effect?
CGtrader globally changing its license: maybe more effect?
Advocating for a separate license, category, and repository for AI training content, and prohibiting the use of everything outside of it: a better option?
Some type of pixel tag set while uploading work (via a browser extension, or a tool for publisher platforms) that the automated AI image crawlers then have to respect? (A rough sketch of such a crawler-side check follows below.)
Opt-out is set by default, opt-in is the option?
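To make those last two points concrete, here is a minimal sketch of how an opt-out-by-default check could look on the crawler side. The "noai" directive mirrors a convention some platforms already emit in robots meta tags and X-Robots-Tag headers; the "ai-train-ok" opt-in directive is entirely made up here to illustrate the default-opt-out idea, and none of this is anyone's actual crawler code.

```python
import requests
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives found in <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and (attr.get("name") or "").lower() == "robots":
            for d in (attr.get("content") or "").split(","):
                self.directives.add(d.strip().lower())

def may_train_on(page_url: str) -> bool:
    """Opt-out by default: a page is eligible for dataset inclusion only if
    it carries the (made-up) explicit opt-in directive 'ai-train-ok'."""
    resp = requests.get(page_url, timeout=10)
    # Header-level opt-out, mirroring the X-Robots-Tag: noai convention.
    if "noai" in resp.headers.get("X-Robots-Tag", "").lower():
        return False
    parser = RobotsMetaParser()
    parser.feed(resp.text)
    if "noai" in parser.directives:
        return False  # explicit opt-out tag on the page
    return "ai-train-ok" in parser.directives  # no explicit opt-in -> skip

# Example: print(may_train_on("https://example.com/gallery/piece-123"))
```

The design point is the last line: the absence of any directive means the work is skipped, so silence no longer counts as consent.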