Entrenched Misogyny

Summary
Today, the feminisation of AI assistants has become a visible feature of everyday technology, with more than 8 billion AI voice assistants in use worldwide, most of which default to female voices and personas. This design choice has coincided with troubling patterns of abuse: studies estimate that between 10 per cent and 50 per cent of human–AI interactions involve verbal harassment, while a 2023 experiment found that 18 per cent of interactions with female-embodied agents focused on sexual content, compared with 10 per cent for male agents and just 2 per cent for non-gendered robots. The scale can be stark: Brazil’s Bradesco bank reported 95,000 sexually harassing messages directed at its feminised chatbot in a single year. Despite these figures, regulation remains limited and often fails to treat gender stereotyping as a serious risk, raising concerns that such patterns of misogyny may become increasingly normalised as AI systems grow more embedded in daily life.
Application
The outcomes of AI depend largely on the prejudices of those who shape and use it, since the technology reflects human inputs, values and intentions. When bias or hostility guides its use, those flaws surface in its behaviour and outputs. Ultimately, the limitations of AI mirror the imperfections of the human heart behind it.