Publicly challenging micro-aggressions online is an exhausting and stressful responsibility, particularly if you are a member of a marginalised group. In an effort to reduce the pressure on these groups to take on such a responsibility, online community leaders are beginning to use bots in their community communication channels. From correcting ableist language in Slack channels to explaining the meaning of "cis" for the millionth time on Twitter, automated responders are taking on the draining work of improving community members' knowledge and vocabulary.
This talk will examine the use of bots in online communities, analyse why they are generally successful, and explore the technical challenges involved in building them. In addition to drawing on examples from open-source communities and Twitter, I'll discuss their potential role in further anti-harassment measures and other ideas for expanding their use across gaming and technology. By the end of the talk, attendees should not only understand how these bots work but also feel inspired to create their own bots for improving humanity, both in their own communities and beyond.