In a staggering example of the pitfalls of AI in design, Figma’s new AI tool, Make Designs, has been temporarily pulled from use. The tool, intended to help designers quickly mock up app designs using generative AI, instead generated designs strikingly similar to Apple’s iOS weather app. This incident has sparked widespread criticism and raised serious questions about the ethics and effectiveness of AI in the creative industry.
The Incident
Days after the celebrated rollout of Figma's AI features at Config 2024 (26–28 June 2024), Figma’s CEO, Dylan Field, today addressed the controversy on X (formerly Twitter), admitting that the Make Designs feature was producing near-replicas of Apple’s weather app. He attributed the issue to his push for the team to meet deadlines and to the use of off-the-shelf AI models alongside a bespoke design system. However, this explanation does little to mitigate the fact that the tool’s output was embarrassingly unoriginal.
(1) As we shared at Config last week – as well as on our blog, our website, and many other touchpoints – the Make Design feature is not trained on Figma content, community files or app designs. In other words, the accusations around data training in this tweet are false. https://t.co/jlfmroPPhm
— Dylan Field (@zoink) July 2, 2024
Andy Allen, CEO of Not Boring Software, first identified the issue and warned other designers to be cautious. His discovery highlighted a fundamental problem with AI-generated designs: they often lack originality and can inadvertently mimic existing work, leading to potential legal issues.
Figma’s CTO, Kris Rasmussen, could not confirm whether Apple’s designs were part of the training data, but he insisted that Figma did not train the AI models itself. Instead, the company relied on models from OpenAI and Amazon’s Titan Image Generator G1, combined with a bespoke design system.
This approach raises significant ethical concerns. If Figma didn’t train the models yet they still produced designs strikingly similar to Apple’s, it suggests that OpenAI’s or Amazon’s models may have been trained on such data. This highlights a critical flaw in using AI for design: you can never be sure where the AI’s “inspiration” comes from, making it a dubious source of originality and a potential vector for plagiarism.
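To make that concern concrete, here is a minimal, purely hypothetical TypeScript sketch of what an “off-the-shelf model plus bespoke design system” pipeline can look like. None of this is Figma’s actual code; the component names, prompt, and layout format are invented for illustration. The structural point is what matters: the design system constrains which components may appear, but the layout decisions come back from a third-party model whose training data the calling application cannot inspect.

```typescript
// Illustrative sketch only — NOT Figma's pipeline. It shows the general shape of
// "off-the-shelf model + bespoke design system": the component catalogue is ours,
// but the arrangement is produced by a third-party model (here, OpenAI's Chat
// Completions REST API) whose training data we have no visibility into.

// A hypothetical bespoke design system: a fixed catalogue of allowed components.
const designSystem = {
  components: ["StatusBar", "HeroTemperature", "HourlyStrip", "DailyList", "Card"],
};

interface LayoutNode {
  component: string;          // expected to be one of designSystem.components
  children?: LayoutNode[];
}

async function draftLayout(brief: string, apiKey: string): Promise<LayoutNode> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            `Lay out a mobile screen using ONLY these components: ` +
            `${designSystem.components.join(", ")}. Respond with a JSON object.`,
        },
        { role: "user", content: brief },
      ],
      response_format: { type: "json_object" },
    }),
  });

  const data = await res.json();
  // Whatever arrangement comes back reflects the model's training data —
  // the caller cannot audit where that "inspiration" originated.
  return JSON.parse(data.choices[0].message.content) as LayoutNode;
}

// A brief like "weather app home screen" may well come back arranged exactly
// like the most common weather app the model has seen during training.
draftLayout("weather app home screen", process.env.OPENAI_API_KEY ?? "")
  .then((layout) => console.log(JSON.stringify(layout, null, 2)));
```

In a setup like this, swapping in a different design system changes the building blocks but not the source of the layout decisions, which is why provenance of the underlying model’s training data is the real question.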
Figma Plagiarism: Undermining the Role of UX Designers
Figma’s reliance on AI tools also undermines the role of human designers. The company’s vision of AI-generated designs, marketed as a way to quickly draft and explore ideas, ultimately diminishes the value of thoughtful, researched, and original design work. It’s a troubling trend where companies push AI as a cost-cutting measure, sidelining skilled professionals who demand fair wages and contribute significantly to the creative process.
The AI tools, as demonstrated by Figma’s Make Designs feature, are essentially remixing existing designs rather than creating anything genuinely new. This leads to a pertinent question: What’s the point of using AI in design if it only results in derivative, uninspired work that requires significant modification to avoid legal issues?
The incident with Figma’s Make Designs feature underscores the broader issue of AI’s unreliability. When AI systems, like Google Gemini or ChatGPT, warn users to verify their outputs because they might be incorrect or even dangerous, it undermines the whole premise of AI as a time-saving tool. If designers must double-check and extensively modify AI-generated designs, the purported efficiency gains are lost.
Moreover, the ethical implications of AI in design are profound. The use of AI models trained on potentially unethically sourced data raises questions about the integrity of the designs produced. By pushing AI tools that replicate existing designs, companies like Figma risk legal repercussions and erode trust in the design community.
A Call for Accountability
Figma’s response to the controversy has been to disable the Make Designs feature temporarily. Rasmussen indicated that they are reviewing the bespoke design system to ensure greater variability and quality. However, this reactive approach is insufficient. The company needs to implement more rigorous QA processes and commit to transparency in its AI development practices.
This situation serves as a critical lesson for the UX design community. As designers, we must be vigilant about the tools we use and the ethical implications of AI. While AI has the potential to revolutionize design workflows, it must be implemented with care, transparency, and a commitment to originality and quality.
Conclusion
Figma’s experience highlights the significant challenges and ethical concerns associated with integrating AI into design. The design community must hold companies accountable for the tools they produce and ensure that AI enhances rather than undermines the creative process. As we navigate the evolving landscape of AI in design, let’s prioritize originality, integrity, and the invaluable contributions of human designers.