Procedures for content moderation

1. Prohibited content

Detailed information about what content is not allowed can be found in our terms of service under the heading “Creating an ad”.

2. Automated screening

We use automated algorithms, including word analysis, to proactively screen ads and ensure they comply with our rules.
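
As a simplified illustration, a word-analysis check of this kind can be sketched as follows. The stop words and function name below are illustrative assumptions, not our actual rule set:

```python
import re

# Illustrative stop words only; the production rule set is broader.
STOP_WORDS = {"counterfeit", "replica", "weapon"}

def find_stop_words(ad_text: str) -> set[str]:
    """Return the stop words that appear in an ad's text."""
    tokens = set(re.findall(r"[a-z]+", ad_text.lower()))
    return tokens & STOP_WORDS

# Any match routes the ad to manual review.
if find_stop_words("Genuine replica watch for sale"):
    print("Ad flagged for manual review")
```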

3. Human screening

Our customer service team performs manual spot checks on a sample of ads to ensure that the ads meet our quality and safety standards.
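
In its simplest form, such a spot check can be modeled as drawing a random subset of recent ads for manual review; the sample size below is an illustrative assumption:

```python
import random

def pick_review_sample(ad_ids: list[int], sample_size: int = 25) -> list[int]:
    """Draw a random sample of ads for manual quality and safety review."""
    return random.sample(ad_ids, min(sample_size, len(ad_ids)))

# Example: spot-check 25 of today's 1,000 new ads.
review_queue = pick_review_sample(list(range(1, 1001)))
```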

4. Screening via user reports

Users can report content that does not comply with our guidelines. When a report is received, we document the complaint and take any necessary action, which may include removing the ad or closing the user's account. We always inform the ad owner via email when an ad is removed.
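
The sketch below shows one simplified way such a report could be documented and acted on; the data structure, field names, and in-memory log are illustrative stand-ins for the process described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Report:
    ad_id: int
    reason: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    action: str = "pending"  # e.g. "ad_removed", "account_closed", "dismissed"

complaint_log: list[Report] = []  # stands in for a real complaint database

def handle_report(report: Report, violates_guidelines: bool) -> None:
    """Document the complaint, then take any necessary action."""
    complaint_log.append(report)
    if violates_guidelines:
        report.action = "ad_removed"
        # The ad owner is always informed by email when an ad is removed.
        print(f"Ad {report.ad_id} removed; owner notified by email")

handle_report(Report(ad_id=42, reason="suspected counterfeit"), violates_guidelines=True)
```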

5. Handling of ad deletions

Customer service evaluates the content of an ad based on inquiries and stop-word matches. If an ad violates our rules, it is deleted and the seller is informed of the reason via email. Repeated violations may lead to the closure of the seller's account.
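
One simplified way to model how repeated violations lead to account closure is a per-seller strike count, as sketched below; the three-strike threshold is an illustrative assumption, not our actual policy:

```python
from collections import defaultdict

STRIKE_LIMIT = 3  # illustrative threshold only
strikes: defaultdict[int, int] = defaultdict(int)

def record_violation(seller_id: int) -> str:
    """Record a rule violation and decide the consequence."""
    strikes[seller_id] += 1
    if strikes[seller_id] >= STRIKE_LIMIT:
        return "account_closed"
    return "ad_deleted_seller_emailed"

print(record_violation(seller_id=7))  # first offence: ad deleted, seller emailed
```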


Information to customers about our content moderation

1. How and why is content moderated?

Our customer service team receives and responds to inquiries, and ads are automatically flagged through stop words. Violations lead to the deletion of ads in order to maintain a safe platform.

2. Basis and process for moderation

Ads are reviewed both manually and automatically, drawing on historical data. This data helps us identify fraud risks.
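
Conceptually, historical signals can be combined into a simple risk score that decides whether an ad is routed to manual review. The features, weights, and threshold below are illustrative assumptions only:

```python
def fraud_risk_score(prior_violations: int, account_age_days: int,
                     reported_before: bool) -> float:
    """Combine historical signals into a rough risk score between 0 and 1."""
    score = min(prior_violations * 0.3, 0.6)       # past violations weigh heavily
    score += 0.2 if account_age_days < 7 else 0.0  # very new accounts are riskier
    score += 0.2 if reported_before else 0.0
    return min(score, 1.0)

# Ads above an (illustrative) threshold are routed to manual review.
needs_review = fraud_risk_score(prior_violations=1, account_age_days=3,
                                reported_before=False) >= 0.5
```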

3. Illegal material and our response

If an ad violates our rules, we inform the seller of the violation via email and document the incident in the user's account log to prevent future issues.


This approach ensures clear communication and transparency with our customers regarding the handling of ads and user activity on our platform.