When AI ‘Clones’ Code and Drafts Legal Memos: What You Really Risk With Copyright and Privilege

From Malus.sh to “Clean Room” Cloning: Why Developers Are Worried

Malus.sh is an AI tool that claims it can “liberate” software from existing open source licenses by recreating it from scratch. It is inspired by the older “clean room” method, in which one team documents how a system works and a separate, insulated team writes fresh code; Malus.sh instead uses code‑generating AI to mimic a program’s functions without copying its original source. Its website even boasts of “liberation from open source license obligations” and “legally distinct” code. While the project has a tongue‑in‑cheek tone, it is a real product with paying customers, which alarms many developers. For rights holders, the fear is simple: if AI‑generated code can closely replicate their work while sidestepping license terms, the economic and community incentives behind open source may be undermined. For small businesses and freelancers, this highlights a broader reality of AI copyright issues: just because an AI outputs something doesn’t mean the underlying rights are clear or uncontested.

Attorney–Client Privilege, Work Product and What AI Changes

Attorney–client privilege protects confidential communications between a client and a lawyer made for the purpose of legal advice. The work product doctrine protects materials prepared by or for a lawyer in anticipation of litigation, especially documents reflecting legal strategy. Generative AI complicates both. In one US case, a court held that documents a defendant created using a public AI chatbot were not privileged: they were not communications with a licensed lawyer, were not confidential because the chatbot’s policy allowed data collection and disclosure, and the chatbot itself disclaimed giving legal advice. The same court also refused work product protection because the defendant’s counsel had not directed him to use the AI tool. Another court, however, allowed a self‑represented litigant to claim work product protection over documents drafted with an AI chatbot, emphasising that such protection is only waived when materials are exposed to an adversary, not merely by using a tool.

What Ordinary Users and SMEs Risk by Feeding Sensitive Material to AI

For consumers, freelancers and small businesses, the practical risks are less abstract. When you paste contracts, NDAs, source code or internal reports into a public AI tool, you may be sharing them with a third party that logs, stores and analyses that data. If the tool’s terms allow disclosure to service providers or regulators, you cannot realistically treat those communications as confidential. That can weaken arguments that something is protected by attorney–client privilege or by the work product doctrine, even if you later share the AI output with your lawyer. On the copyright side, using AI‑generated code or contract language does not erase your legal responsibilities; you can still be accused of infringing others’ rights if what you deploy is too similar to protected material. The Malus.sh controversy underlines that AI’s ability to imitate does not automatically make the resulting code safe from license or copyright disputes.

Practical Do’s and Don’ts for Malaysians Using Generative AI

To reduce the legal and privacy risks of generative AI, treat public chatbots like posting to a semi‑public forum. Do not paste: signed contracts and NDAs, detailed legal advice from your lawyer, confidential HR or financial data, or proprietary source code that is not already public. If you must work with sensitive material, insist on enterprise or on‑premises solutions where the provider contractually commits not to use your data for training and offers clear confidentiality terms. When seeking legal help, use AI only for generic explanations or drafting starting points, then have a qualified lawyer review and customise the result. Do not rely on AI as your “lawyer” or assume communications with a bot are privileged. For coding, use AI to explore ideas or boilerplate patterns, but keep robust version control, code reviews and license checks so you can demonstrate independent development if questions arise.
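For the license‑check step, even a crude automated scan can flag obvious problems before AI‑generated code ships. The sketch below is illustrative only (the marker patterns and function name are assumptions, not any real scanner’s API): it searches a project tree for tell‑tale license text such as GPL or Apache notices. A real audit should pair a dedicated license scanner with human review.

```python
import re
from pathlib import Path

# Illustrative marker patterns for a few common open source licenses.
# A real scanner matches against the full SPDX license texts, not snippets.
LICENSE_MARKERS = {
    "GPL": re.compile(r"GNU General Public License", re.IGNORECASE),
    "Apache-2.0": re.compile(r"Apache License,? Version 2\.0", re.IGNORECASE),
    "MIT": re.compile(r"MIT License", re.IGNORECASE),
}

def scan_for_license_markers(root: str) -> dict:
    """Return {file path: [license names]} for files containing known markers."""
    hits = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        found = [name for name, pattern in LICENSE_MARKERS.items()
                 if pattern.search(text)]
        if found:
            hits[str(path)] = found
    return hits
```

Running a check like this on every AI‑assisted commit, alongside version control history, helps build the paper trail of independent development the paragraph above recommends.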

What Future Regulation Might Mean for AI Generated Code and Legal Docs

Courts are just beginning to address questions of attorney–client privilege and copyright in the AI context. Early decisions show that judges will look closely at how AI tools are used: public versus controlled environments, the involvement of counsel, and the fine print of privacy policies. Over time, regulators may demand clearer disclosure when AI models are trained on copyrighted works and could impose obligations on providers to support audit trails, making it easier to trace how AI‑generated code was produced. For legal documents, professional rules are likely to emphasise that lawyers remain responsible for checking AI outputs and preserving client confidentiality, regardless of what tools they use. For Malaysian users, the practical lesson is forward‑looking: assume that your AI practices today could be scrutinised later. Building habits of data minimisation, contractual safeguards and human legal review now will make it easier to adapt as enforcement and regulation tighten.
