- Figma retracted its “Make Designs” AI tool due to user reports of designs resembling Apple’s weather app.
- Users raised concerns about potential legal exposure and speculated that the AI had been trained on Apple’s designs.
- CEO Dylan Field denied that Figma’s AI models were trained on external app designs.
- VP of Product Design Noah Levin stated that new components added before the Config event were not adequately vetted.
- Figma has removed the problematic assets and disabled the feature, planning to implement a new quality assurance process.
- The tool launched in limited beta at Config and was pulled after Apple-like mockups appeared online.
- Figma’s other AI tools remain available, and users have until August 15 to decide on data usage for future models.
Main AI News:
Figma recently retracted its “Make Designs” generative AI tool after users noted that it produced designs for a weather app strikingly similar to Apple’s own weather app. The resemblance raised concerns about potential legal exposure and prompted speculation that Figma had trained the tool on Apple’s designs. CEO Dylan Field promptly denied that the tool was trained on Figma content or any other app’s designs. Figma has since issued a more comprehensive statement via its blog.
In the statement, Figma’s VP of Product Design, Noah Levin, explained that while the design systems underlying Make Designs were rigorously reviewed during development and the private beta, new components and screens added shortly before the Config event did not receive the same scrutiny. “Some of these assets resembled real-world applications, and appeared in the output when certain prompts were used,” Levin noted.
Upon discovering the issue, Figma promptly removed the problematic assets from the design system and disabled the feature. Levin also mentioned that the company is implementing a more robust quality assurance process before reinstating Make Designs, though no timeline has been provided.
Originally launched in a limited beta during the Config event, Make Designs was quickly pulled after mockups resembling Apple’s designs surfaced on social media. Field admitted to pushing the team to meet the Config deadline but denied that the AI models used—OpenAI’s GPT-4o and Amazon’s Titan Image Generator G1—were trained on external design content.
Levin’s blog post further detailed the design systems used by the tool. He explained that Figma created two extensive design systems—one for mobile and one for desktop—comprising hundreds of components and examples. These were fed into the model along with user prompts to generate designs, which were then completed by Amazon Titan’s diffusion model. Despite this mishap, Figma’s other AI tools, including one for generating design text, remain available. Users have until August 15 to decide whether to allow Figma to use their data for future model training.
Conclusion:
The incident with Figma’s AI tool highlights the critical importance of rigorous vetting before deploying generative AI features. Output so closely resembling an established app like Apple’s not only creates potential legal risk but also underscores the need for transparency and robust quality assurance in AI development. For the market, the episode is a reminder of the delicate balance between rapid innovation and intellectual property concerns, and of the need for clear guidelines and thorough oversight in the AI industry.