Digital Services Act - Transparency Report

Published: February 26, 2025

Updated: N/A

This report is provided in accordance with our obligations under the Digital Services Act (DSA).

Article 15 (1) (a) || Orders Received from Member States’ Authorities ||

No orders were received from Member State authorities.

Article 15 (1) (b) || Notices Submitted in Accordance with Article 16 ||

Number of Notices Submitted:

  • 120,065
    • Note: Due to the possibility of multiple notices for a single item of information, the number of actions taken may be lower than the number of notices.

Categorization Breakdown:

  • Alleged Intellectual Property Infringement:
    • 142
  • Alleged Animal Cruelty:
    • 1
  • Alleged Misleading and Deceptive:
    • 148,561
  • Alleged Offensive and Harmful:
    • 10
  • Alleged Content Impersonation and Misrepresentation:
    • 1
  • Alleged Promotional and Solicitation:
    • 1
  • Alleged Violation of Community Guidelines:
    • 5
  • Allegation Not Specified:
    • 15
  • Alleged Other (Not Categorized):
    • 9

Number Submitted by Trusted Flaggers:

  • 0

Action Taken on Basis of Law:

  • 25

Action Taken on Basis of Terms and Conditions:

  • 417

Number Processed Using Automated Means:

  • 0

Median Time to Action:

  • 100 hours
    • This figure excludes complaints that were deemed not actionable.
    • This figure also excludes complaints affected by processing issues, which had an independent median resolution time of 601 hours.

Article 15 (1) (c) || Content Moderation Engaged in at Own Initiative ||

Content Moderation Engaged in at Unity’s Own Initiative:

Unity proactively and reactively moderates content across its services. For detailed information about our moderation activities for specific services, please visit our Content Transparency Center.

Use of Automated Tools:

We utilize automated tools as part of our content moderation process. Further details about this usage can be found below and at our Content Transparency Center.

Measures Taken to Provide Training and Assistance:

Unity provides comprehensive training and support to our content moderation teams. This includes:

  • Training: Live and recorded sessions, playbooks, and other resources.
  • Support: Clear escalation paths, including access to legal counsel and Digital Services Act specialists.

Measures Affecting Information Availability, Visibility, Accessibility, and Provision:

  • Action Taken on Basis of Law:
    • 0
  • Action Taken on Basis of Terms and Conditions:
    • 2,600,080
      • Note: Due to the possibility of multiple detections for a single item of information, the number of actions taken may be lower than the number of detections.
  • Detected by Automation:
    • 667,547
  • Detected by User Report:
    • 18,712
  • Detected by Moderators:
    • 2,520,438
  • Detected by Other Method:
    • 1,221
  • Content Removed:
    • 294,645
  • Content Not Permitted:
    • 2,940
  • Content Modified (or Modification Requested):
    • 2,294,736
  • User Suspended:
    • 21
  • User Banned:
    • 7,738

Article 15 (1) (d) || Complaints Received Through Internal Complaint-handling Systems ||

Online Platforms

  • Number of Complaints
    • 4
  • Basis of Complaints
    • Insufficient information was provided to determine the basis.
  • Median Time to Close
    • 134 hours
  • Number of Instances Decision Reversed
    • 0

Other Services

  • Number of Appeals
    • 615
  • Number of Instances Decision Reversed
    • 97

Article 15 (1) (e) || Automated Means for Content Moderation ||

  • Qualitative Description & Specification of Purpose
    • We utilize automated tools as part of our content moderation process. Further details about this usage, including its description and purpose, can be found at our Content Transparency Center.
  • Measures Solely Taken by Automated Means
    • Number of Measures
      • 856,035
        • Note: This figure includes automated processes that both analyze information and take action affecting its availability, visibility, accessibility, and provision. For example, this includes automated reviews that determine submitted information does not violate our Terms or applicable laws.
    • Rate of Accuracy:
      • 93.10%
    • Indicators of Accuracy
      • Several key indicators are used to evaluate how effectively our automated systems detect and manage violative information. These indicators inform our accuracy and error rate assessments for processes that rely solely on automation (an illustrative calculation relating these indicators is sketched at the end of this section). The primary indicators include:
        • True Positive Rate (TPR): This metric measures how effectively the system correctly identifies and prevents violative content. A higher true positive rate reflects strong performance in catching harmful or non-compliant information.
        • Accuracy: Accuracy represents the overall correctness of the system's predictions. It accounts for both the correctly flagged violative content and the correctly allowed non-violative content. A higher accuracy score indicates consistent, reliable performance.
        • Precision: Precision measures the correctness of the system’s flagged content (i.e., how often flagged items are truly violative). High precision indicates fewer false alarms and ensures greater confidence in the system's decisions.
        • False Positive Rate (FPR): This metric reflects how often the system incorrectly flags non-violative content as violative. A lower false positive rate is crucial to minimizing unnecessary disruptions for users and ensuring legitimate content remains unaffected.
        • False Negative Rate (FNR): This measures how often the system fails to identify and block violative content. A lower false negative rate demonstrates better system effectiveness in ensuring harmful content is promptly addressed.
    • Possible Rate of Error
      • 6.90%
  • Measures Partially Taken by Automated Means
    • Number of Measures
      • 871,999
    • Indicators of Accuracy
      • Several key indicators are used to evaluate how effectively our automated systems detect and manage violative information. These indicators inform our accuracy and error rate assessments for processes that rely partially on automation (the illustrative sketch at the end of this section applies to these indicators as well). The primary indicators include:
        • False Positive Rate (FPR): This metric assesses how often the system incorrectly flags legitimate content as violative. A lower false positive rate is desirable, as it indicates fewer legitimate posts are mistakenly identified as problematic.
        • Sensitivity: Sensitivity measures the system's ability to accurately detect actual violative information. Higher sensitivity reflects stronger performance in identifying content that violates policies.
        • Specificity: This metric evaluates how well the system recognizes non-violative content as legitimate. High specificity means the system effectively avoids misclassifying lawful content as violative.
        • Precision: Precision measures the accuracy of the system’s positive predictions (e.g., flagging something as spam or toxic). Higher precision indicates that most flagged content indeed violates the rules, reducing false alarms.
        • False Negative Rate (FNR): This metric indicates how often the system fails to detect or block violative information. A lower false negative rate translates to better performance in preventing harmful content from being overlooked.
    • Rate of Accuracy:
      • 99.37%
    • Possible Rate of Error
      • 0.63%
  • Safeguards Applied
    • We utilize automated tools as part of our content moderation process. Further details about this usage, including safeguards, can be found at our Content Transparency Center.
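
For context on how the indicators above relate to one another, the following is a minimal, illustrative sketch in Python of deriving such metrics from confusion-matrix counts. The function name and the placeholder counts are hypothetical and do not represent Unity's actual tooling or the figures reported in this document; the possible rate of error is simply the complement of the rate of accuracy.

  # Illustrative only: placeholder counts, not figures from this report.
  def moderation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
      """Derive the indicators described above from confusion-matrix counts."""
      total = tp + fp + tn + fn
      return {
          "accuracy": (tp + tn) / total,            # overall correctness
          "precision": tp / (tp + fp),              # correctness of flagged items
          "true_positive_rate": tp / (tp + fn),     # sensitivity / recall
          "specificity": tn / (tn + fp),            # non-violative items correctly allowed
          "false_positive_rate": fp / (fp + tn),    # legitimate items wrongly flagged
          "false_negative_rate": fn / (fn + tp),    # violative items missed
      }

  metrics = moderation_metrics(tp=900, fp=40, tn=1000, fn=60)
  error_rate = 1 - metrics["accuracy"]              # "possible rate of error" as the complement of accuracy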

Article 24 (1) (a) || Out-of-court Disputes ||

No disputes were submitted to the out-of-court dispute settlement bodies.

Article 24 (1) (b) || Suspensions imposed pursuant to Article 23 ||

No such suspensions were imposed pursuant to Article 23.

Report Notes

  • Scope of Jurisdictions: This report includes global metrics, covering both EU and non-EU jurisdictions, due to technical and product limitations. However, detailed violation data for some self-initiated actions are currently limited to the EU.
  • Reporting Period: February 17, 2024 – December 31, 2024
  • Exclusions: Due to data limitations, this report excludes:
    • One product's self-initiated actions before October 1, 2024. This issue has been resolved.