This template shows how to perform client-side and server-side moderation of text.
Client-side, text is moderated using the Text Toxicity Classifier model from TensorFlow.js. If the user tries to publish a toxic message to the guestbook, a message pops up reminding them to be nice.
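As a rough sketch of that client-side check, assuming the `@tensorflow-models/toxicity` package; the `0.9` threshold and the `publishMessage` helper are illustrative, not part of this sample:

```js
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

// Load the model once; 0.9 is an illustrative confidence threshold.
const modelPromise = toxicity.load(0.9);

async function checkAndPublish(text) {
  const model = await modelPromise;
  const predictions = await model.classify([text]);
  // Treat the message as toxic if any label matches above the threshold.
  const isToxic = predictions.some((p) => p.results[0].match === true);
  if (isToxic) {
    alert('Please be nice to other users!');
    return;
  }
  publishMessage(text); // illustrative: the app's own publish logic
}
```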
Server-side, text is moderated once it is published to the Firebase Realtime Database, using a Cloud Function that is triggered by the write to the database.
See `functions/index.js` for the moderation code. The dependencies are listed in `functions/package.json`.
Users anonymously add a message - an object with a `text` attribute - to the `/messages` list:
```
/functions-project-12345
    /messages
        /key-123456
            text: "This is my first message!"
        /key-123457
            text: "IN THIS MESSAGE I AM SHOUTING!!!"
```
The function triggers every time a message is added. If the message is deemed toxic, then it is deleted.
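The actual implementation lives in `functions/index.js`; as a sketch, the trigger has roughly this shape (the exported name and the `isToxic` helper are placeholders for the sample's real moderation logic):

```js
const functions = require('firebase-functions');

// Fires every time a new message is added under /messages.
exports.moderator = functions.database
  .ref('/messages/{messageId}')
  .onCreate(async (snapshot) => {
    const message = snapshot.val();
    // isToxic() stands in for the sample's actual toxicity check.
    if (message && (await isToxic(message.text))) {
      // Remove messages that were deemed toxic.
      return snapshot.ref.remove();
    }
    return null;
  });
```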
The security rules only allow users to create messages but not edit them afterwards.
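A minimal sketch of Realtime Database rules with that behavior; the sample's actual rules may differ:

```json
{
  "rules": {
    "messages": {
      ".read": true,
      "$messageId": {
        // A write is allowed only if no data exists yet at this location,
        // so a message can be created but never edited or deleted by users.
        ".write": "!data.exists()"
      }
    }
  }
}
```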
This sample comes with a Cloud Function and a web-based UI for testing the function. To configure it:

- Create a Firebase Project using the Firebase Console.
- Clone or download this repo and open the `text-moderation` directory.
- You must have the Firebase CLI installed. If you don't have it, install it with `npm install -g firebase-tools` and then configure it with `firebase login`.
- Configure the CLI locally by using `firebase use --add` and selecting your project from the list.
- Install dependencies locally by running: `cd functions; npm install; cd -`
- Deploy your project using `firebase deploy`.
- Open the app using `firebase open hosting:site`; this will open a browser.
- Open the app and add messages to the message board. Write some good and bad messages to verify that toxic text is moderated.