AI Transparency Policy

Our commitment to responsible, transparent, and ethical use of artificial intelligence.

Last updated: 02/03/2026 | Effective date: 02/03/2026

1. Introduction

Mowly uses artificial intelligence to enhance the capabilities of legal professionals. We believe that transparency about how our AI systems work, what data they use, and what their limitations are is essential — particularly when serving the legal profession, where accuracy and reliability are paramount. This AI Transparency Policy describes our approach to AI development and deployment, our commitments regarding data use, and how we comply with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689).

2. AI Systems We Use

Mowly employs several categories of AI systems to power our services:

  • Large Language Models (LLMs): We use state-of-the-art language models from leading providers (including Google Gemini and OpenAI via secure API access) for text analysis, summarization, translation, document drafting, and legal research assistance
  • Retrieval-Augmented Generation (RAG): Our legal research features combine AI language models with real-time retrieval from legal databases to ground responses in actual legal sources, reducing hallucination risk
  • Document processing AI: Specialized models for document parsing, structure recognition, clause identification, and legal document classification
  • Speech-to-text: AI models for transcribing audio recordings of legal proceedings and depositions into structured text
  • Pseudonymization engine: AI-powered identification and replacement of personal data in legal documents to support GDPR compliance
  • Embedding models: Vector embedding models for semantic search across legal databases and document repositories
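The Retrieval-Augmented Generation pattern mentioned above can be sketched in a few lines. This is an illustrative toy, not Mowly's implementation: the keyword-overlap retriever, the sample corpus, and the prompt template are all hypothetical stand-ins (a production system would use vector embeddings and an LLM API).

```python
# Minimal RAG sketch (illustrative only): retrieve the most relevant
# sources, then build a prompt that grounds the model in those sources
# and requires citations -- which is what reduces hallucination risk.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Score each source by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, sources: list[tuple[str, str]]) -> str:
    """Assemble a grounded prompt that the sources can be verified against."""
    context = "\n".join(f"[{cite}] {text}" for cite, text in sources)
    return (
        "Answer using ONLY the sources below, and cite them by id.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical mini-corpus of legal sources, keyed by citation.
corpus = {
    "Case A-1/23": "notice periods for employment contracts are three months",
    "Statute 42": "data controllers must notify breaches within 72 hours",
}
query = "What is the notice period for an employment contract?"
sources = retrieve(query, corpus)
prompt = build_prompt(query, sources)
```

Because the prompt carries explicit citation ids, every statement in the model's answer can be traced back to a verifiable source, which is the core of the source-attribution measures described in Section 4.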

3. Data Training Practices

We are unequivocal about our data training practices: Mowly does not use your data to train AI models. Your documents, queries, AI interactions, and outputs are never used to train, fine-tune, or improve any AI model — neither our own nor those of our AI providers. We have contractual agreements with all AI providers that explicitly prohibit the use of our customers' data for model training. Your data is processed solely for the purpose of delivering the requested service and is handled in accordance with our Privacy Policy and Data Processing Agreement.

4. Transparency Measures

We implement the following transparency measures in our AI systems:

  • Source attribution: When our AI provides legal research results, it includes citations to specific legal sources (case law, legislation, doctrine) that can be independently verified
  • Confidence indicators: Where technically feasible, we provide indicators of the AI system's confidence level in its outputs to help users assess reliability
  • AI disclosure: All AI-generated content within the platform is clearly identified as such. Users are never misled about whether content was generated by AI or by a human
  • Model information: We disclose which AI provider and model family is being used for each feature, while protecting specific implementation details as trade secrets
  • Limitation warnings: Our platform includes clear warnings about the limitations of AI output, particularly regarding the need for professional verification
  • Processing transparency: Users can understand how their data flows through our systems and which AI services are involved in processing their requests

5. Human Oversight

We design our AI systems with human oversight as a core principle:

  • Human-in-the-loop: Mowly is designed as an AI assistant, not an AI replacement. All features augment human professional judgment rather than replace it
  • No autonomous decisions: Our AI systems do not make autonomous legal decisions. All AI output is presented as a suggestion or draft that requires human review and approval
  • Override capability: Users always have the ability to override, edit, reject, or disregard AI output. The platform never forces reliance on AI-generated content
  • Feedback mechanisms: Users can report inaccurate or problematic AI output through built-in feedback mechanisms, which are reviewed by our team to improve system quality
  • Professional responsibility: We reinforce throughout the platform that legal professionals retain full professional responsibility for their work product, regardless of whether AI tools were used in its preparation

6. Accuracy and Quality

We take the following measures to ensure AI output quality:

  • RAG architecture: Our legal research features use Retrieval-Augmented Generation to ground AI responses in actual legal sources, significantly reducing the risk of hallucinated citations or fabricated case law
  • Source verification: Where possible, AI-provided legal references are cross-referenced against our verified legal databases before being presented to users
  • Regular evaluation: We regularly evaluate our AI systems' accuracy through benchmark testing against known legal questions and expert review of outputs
  • Continuous improvement: We continuously update our systems to improve accuracy, incorporating feedback from users and advances in AI technology
  • Known limitations: We are transparent about known limitations, including that AI may not reflect the very latest legal developments, may not cover all jurisdictions equally, and may produce errors in complex or nuanced legal analysis

7. Bias Mitigation

We are committed to identifying and mitigating bias in our AI systems:

  • Awareness: We acknowledge that AI systems can reflect biases present in their training data and take active steps to identify and address such biases in the context of legal applications
  • Diverse testing: We test our AI outputs across diverse legal scenarios, jurisdictions, and use cases to identify potential biases in responses
  • Legal neutrality: Our AI systems are designed to provide objective legal information and analysis without bias toward particular legal positions, parties, or outcomes
  • Monitoring: We monitor AI outputs for patterns that may indicate systematic bias and take corrective action when identified
  • User reporting: Users can report perceived bias in AI outputs, which is reviewed by our team and used to improve system fairness

8. EU AI Act Compliance

Mowly is committed to full compliance with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689):

  • Risk classification: We have assessed our AI systems under the EU AI Act risk classification framework. Our legal research and document analysis tools are classified as limited-risk AI systems subject to transparency obligations
  • Transparency obligations: In compliance with Article 50 of the EU AI Act, we clearly inform users when they are interacting with an AI system and when content has been generated by AI
  • Record keeping: We maintain records of our AI systems' design, development, and deployment decisions as required by the EU AI Act
  • Quality management: We implement quality management systems for our AI features that include testing, validation, and monitoring procedures aligned with EU AI Act requirements
  • Human oversight measures: Our platform is designed with the human oversight measures required by the EU AI Act, ensuring that AI systems can be effectively overseen by human operators
  • Ongoing compliance: We actively monitor developments in EU AI Act implementation, including guidance from the AI Office and national authorities, to ensure continued compliance as the regulation is fully implemented

9. AI Providers and Sub-processors

We use the following categories of AI providers:

  • Google (Gemini models): Used for text generation, analysis, and reasoning tasks. Processing occurs through secure enterprise APIs with contractual guarantees regarding data security and non-use for training
  • OpenAI (GPT models): Used for text generation and analysis tasks. Accessed through enterprise API with data processing agreements prohibiting use of inputs for model training
  • Specialized model providers: We may use specialized models for specific tasks such as speech recognition, document parsing, or embedding generation. All providers are vetted for security and GDPR compliance
  • Data routing: Users may choose their preferred AI provider where the platform offers this option. We select default providers based on quality, reliability, security, and EU data residency capabilities
  • Provider changes: We may change or add AI providers as technology evolves. Material changes to providers that affect data processing will be communicated in accordance with our DPA notification obligations

10. Your Rights Regarding AI Processing

As a user of our AI-powered services, you have the following rights:

  • Right to information: You have the right to understand how AI is used in processing your data, including which AI systems are involved and what their capabilities and limitations are
  • Right to human review: You have the right to request human review of any AI-generated output before it is used in a professional context
  • Right to opt out: Where technically feasible, you can choose to use alternative non-AI features for specific tasks
  • Right to explanation: You can request an explanation of how specific AI output was generated, including the sources and reasoning involved
  • Right to object: Under GDPR Article 21, you have the right to object to AI processing of your personal data on grounds relating to your particular situation

11. Continuous Improvement

We are committed to continuously improving our AI practices:

  • Research monitoring: We actively monitor advances in AI safety, fairness, and transparency research and incorporate relevant findings into our practices
  • Regulatory engagement: We engage with regulatory developments including the EU AI Act implementation, GDPR guidance on AI, and professional legal regulation of AI use
  • Industry collaboration: We participate in industry initiatives and working groups focused on responsible AI use in the legal sector
  • Policy updates: This AI Transparency Policy is reviewed and updated regularly to reflect changes in our AI systems, regulatory requirements, and best practices

12. Contact Us

If you have questions about our AI systems, practices, or this policy, please contact us:

  • AI inquiries: ai@mowly.ai
  • Data Protection Officer: dpo@mowly.ai
  • General support: support@mowly.ai

We are committed to providing transparent and timely responses to all inquiries about our AI practices.