TL;DR:
- Kickstarter, the crowdfunding platform, introduces a new policy to address concerns surrounding generative AI projects.
- AI tools like Stable Diffusion and ChatGPT, trained on publicly available content, have raised issues of crediting and compensation for original creators.
- Kickstarter’s policy mandates disclosure of details on AI content usage, distinguishing between original and AI-generated components.
- New projects involving AI tech must reveal training data sources and implement safeguards for content creators.
- The policy aims to ensure transparency, trust, and proper credit for artists’ work, going into effect on August 29.
Main AI News:
Kickstarter, the renowned crowdfunding platform, is taking on the challenges posed by generative AI technology. As AI increasingly permeates the mainstream, Kickstarter has faced the difficult task of striking a balance among all parties involved in the ongoing debate.
In the world of generative AI, tools like Stable Diffusion and ChatGPT have become instrumental in creating remarkable art and text. However, the use of publicly available images and text from the web for training purposes has given rise to a contentious issue. Creators of these original works often remain unrecognized, uncompensated, and left with no say in whether their content is used for AI training.
While the proponents of these AI tools argue that the fair use doctrine safeguards their actions, content creators beg to differ, especially when AI-generated content or the AI tools themselves turn into profit-generating endeavors. To establish a clear stance, Kickstarter has taken a decisive step by announcing a new policy.
Starting August 29, Kickstarter will mandate that projects employing AI tools to generate images, text, music, speech, or audio must disclose comprehensive “relevant details” on their project pages. This disclosure will encompass the specific utilization of AI content in their work, distinguishing between wholly original components and those created using AI tools.
Moreover, Kickstarter will require new projects centered around AI tech, tools, and software to disclose the sources of their training data. This disclosure will include insights into how these sources handle consent and credit. Additionally, project owners must implement their own “safeguards,” such as opt-out or opt-in mechanisms for content creators.
Kickstarter’s road to this policy has not been without obstacles. The platform drew an early line when it banned Unstable Diffusion, a group seeking funding for a generative AI art project lacking safety filters. That move aimed to protect users from potentially harmful or explicit content, including pornography, and to safeguard specific communities from exploitation.
The journey towards Kickstarter’s policy commenced in December, when the platform said it was reevaluating whether using media from external sources in AI training data amounts to copying or mimicking an artist’s work. Since then, Kickstarter has worked through these questions, striving to arrive at a policy that balances the interests of artists and AI developers.
Central to Kickstarter’s mission is the promotion of transparency. Project submissions on Kickstarter will now undergo closer scrutiny, with questions about whether AI tools were used to generate artwork and whether the project’s primary focus is developing AI technology. Crucially, the platform demands explicit confirmation of consent from the owners of original works that contribute to the AI-generated portions of a project.
Conclusion:
Kickstarter’s policy on AI transparency represents a significant step for the market, setting a precedent for ethical AI practices. By requiring clear disclosure and proper credit for original creators, Kickstarter fosters trust within its community and promotes accountability in the broader AI industry. As other platforms and companies observe this move, the market is likely to see an increased focus on transparency and responsible use of AI technologies.