Only a few days ago, the character Tilly Norwood burst onto the entertainment scene. Created by the AI-talent studio Xicoia (a division of Particle6), Tilly is being presented as a synthetic “actress” with an IMDb page and social media presence. Her debut has provoked fierce backlash from SAG-AFTRA and other performers, who argue that Tilly undermines human actors and exposes significant gaps in protections for identity, publicity, and likeness rights. For attorneys, creators, performers, and rights holders, Tilly’s arrival is a clarion call: the law around name, image, and likeness (NIL) is going to have to evolve… fast.
What Are the Core NIL Issues Here?
The Tilly Norwood debut forces us to revisit foundational questions about what it means to “own” one’s persona in an AI world. One concern is that the AI model behind Tilly may have been trained on images or performances of real actors without consent or compensation. If true, this could support claims of misappropriation of likeness or violation of publicity rights. Another unresolved question is what counts as “use” of NIL in the AI context. Traditional law is geared toward unauthorized use of a person’s actual name, photo, or voice. But what about generating synthetic video that resembles a real person, or combining features from many real people into a composite? This blurs the boundary between inspiration and imitation.
Another complication is jurisdictional inconsistency. NIL laws in the U.S. are largely state-based and vary widely. Some states recognize post-mortem rights, others do not. Some explicitly cover digital replicas, while others are silent. The rise of AI-created personas like Tilly highlights how fragile this patchwork is. At the same time, some jurisdictions are experimenting with statutory innovation. Tennessee’s ELVIS Act (Ensuring Likeness, Voice, and Image Security) protects against unauthorized AI voice and likeness simulations. Montana has advanced legislation (HB 514) to give residents explicit rights over NIL in the context of AI fakes. California recently passed bills (AB 1836 and AB 2602) limiting use of digital replicas of performers, including protections that extend to deceased actors’ estates. These laws represent early attempts to grapple with the unique risks posed by synthetic identity use.
For studios and performers alike, contracts will need to become more precise. Agreements will increasingly require clauses addressing whether AI derivatives are permitted, how royalties apply to synthetic performances, what audit rights exist for verifying AI training data, and whether indemnities will be provided if an AI “actor” is accused of copying a real person. Beyond commercial rights, there are also moral rights and attribution questions. If an AI persona like Tilly gives a performance, who gets credit? What happens if a likeness is distorted or misused in a way that a real performer could object to? These questions are not yet clearly addressed by NIL law.
Key Implications and Risks
The biggest fear in the industry, justifiably, is the displacement of human performers (recall that this concern was part of the rationale for the SAG-AFTRA strike in 2023). If synthetic actors like Tilly become cost-effective substitutes, especially for background or supporting roles, they could significantly weaken the bargaining power of human performers. SAG-AFTRA has already warned that AI actors may “put actors out of work” and has called Tilly’s debut a direct threat to human-centered creativity.
Studios and AI companies also face litigation exposure. If an AI persona too closely mirrors a real person, producers could face claims of misappropriation of likeness, violations of publicity rights, breach of contract, or false endorsement. At the same time, the legal uncertainty may create a chilling effect. Some producers may avoid experimenting with AI actors altogether, while others may forge ahead and invite lawsuits. The lack of clarity increases risk for both sides.
The situation also points to the need for industry guardrails. Disclosure rules requiring studios to inform audiences when an “actor” is AI-generated could help prevent deception. Safe harbor or licensing regimes, similar to music sampling, may emerge as ways to monetize and regulate AI likeness use. Transparency about training data provenance is another key reform, as performers cannot defend their rights if they do not know whether their likeness was used in an AI system.
What Tilly Reveals About the Gaps in Today’s NIL Law
The Tilly Norwood phenomenon underscores that the law lags behind innovation. Existing NIL frameworks were not designed for synthetic personas. The traditional defenses of “transformation” or “parody” are difficult for producers or other defendants to apply when a composite AI actor could be built from dozens of human likenesses. Remedies under current law may also be inadequate, as litigation is expensive and often only famous performers can realistically enforce their rights. This imbalance leaves lesser-known actors vulnerable.
There are also free speech and artistic expression considerations. Any new regulation must strike a balance between protecting personal identity and allowing creative experimentation. Internationally, the challenges compound. France, for example, has strict image rights that could make an AI persona like Tilly far more vulnerable to a successful lawsuit than in the United States. Because synthetic personas are inherently global, conflicting regimes will make compliance especially complex for studios.
What Grellas Shah Should Watch and Recommend
From a practical perspective, clients should be advised to take proactive steps. Contracts must clearly address whether AI derivatives of talent are allowed, how NIL rights extend into digital domains, and what remedies exist for unauthorized uses. Clients should monitor rapidly evolving legislation in states like Tennessee, Montana, and California, and anticipate federal action as pressure builds for harmonization. Industry standards will likely emerge around watermarking, disclosure, and provenance tracking, and studios or performers who lead in adopting these standards will have a reputational advantage. Finally, risk audits should become routine before deploying synthetic personas. A “persona risk audit” can help determine whether an AI likeness overlaps with real individuals, whether the training process is legally defensible, and whether additional clearances are required.
Conclusion
Tilly Norwood is not just a novelty; she is a legal stress test for NIL law. Her debut forces courts, legislatures, and practitioners to confront questions about the boundaries of identity rights in the age of AI. Whether through statutory reform, contract drafting, or litigation, the framework for protecting human likenesses must evolve quickly. The stakes are high: for performers’ livelihoods, for the integrity of storytelling, and for the future relationship between humans and synthetic talent.