What is this?

This is a Twitter live-stream of hateful/abusive/bigoted replies to a small number of accounts on Indian Twitter, as detected by a machine learning algorithm.
This is a work in progress. There will inevitably be false positives here, especially when the algorithm encounters new trends or topics it has not been trained on.

What's the purpose?

I hope people will report the replies that are abusive or created with the intent to harass others.

Does reporting help?

Abusive and hateful tweets often get deleted by Twitter when reported. The user may also receive a warning.
Repeat offenders can have their account features limited or even suspended.

Why isn't the username of the person shown?

The goal is to report a tweet based on its content alone, without being influenced by the author's username, profile picture, bio, stereotypes, etc.

Why does it only track replies to a few accounts?

The data used to train this algorithm consists mostly of replies to social activists who receive a lot of harassment. This limits the kinds of abuse the algorithm can detect.

On what basis were these accounts chosen?

At this time, I simply add accounts whenever I come across ones on Twitter that seem to receive a disproportionate amount of hateful replies. So the selection is more or less arbitrary.

What's the tech behind this?

It's only a handful of simple Python scripts using the NLTK library. The 'intelligence' in the algorithm comes from me manually labeling some 30,000 tweets as abusive or non-abusive. That manually labeled data is also why it can handle Hindi with relative ease.
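A minimal sketch of how such a classifier can be trained with NLTK. This is not the project's actual pipeline: the example tweets, labels, and bag-of-words features below are illustrative assumptions.

```python
from nltk.classify import NaiveBayesClassifier
from nltk.tokenize import wordpunct_tokenize

def features(tweet):
    # Simple bag-of-words features: each lowercased token becomes
    # a boolean feature. The real system may use richer features.
    return {token.lower(): True for token in wordpunct_tokenize(tweet)}

# Hypothetical hand-labeled examples; the real training set is
# roughly 30,000 manually labeled tweets.
labeled = [
    ("you are a disgrace and should be silenced", "abusive"),
    ("great thread, thanks for sharing", "not_abusive"),
    ("nobody wants you here, get lost", "abusive"),
    ("interesting point, I had not considered that", "not_abusive"),
]

train_set = [(features(text), label) for text, label in labeled]
classifier = NaiveBayesClassifier.train(train_set)

# Classify a new reply.
prediction = classifier.classify(features("thanks for the helpful reply"))
print(prediction)
```

Because the features are just the tokens in each tweet, training on Hindi-language examples lets the same code classify Hindi replies with no special handling.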

How can I contribute?

By reporting these abusive accounts whenever you can spare the time.
The website and algorithm are incredibly cheap to run at this time, so monetary contributions are not necessary.

Nothing new is appearing on the page.

It is quite possible that the algorithm has stalled. Please check again after a while.