28.11.2025

Health risks in AI: a new form of violence? The case of obstetric violence, racism and the key role of data

by Anastasia Karagianni


Introduction

Digital technologies, such as mobile health apps, electronic health records and AI-driven decision support tools, hold significant promise for enhancing access to information and improving data collection in obstetrics. However, because these technologies rely on input data and algorithmic models – many of which are shaped by biased or incomplete datasets – substantial data gaps remain. These gaps hinder efforts to improve the quality of care and outcomes for diverse patient populations, particularly those who are often marginalised, such as Black women, transgender and LGBTQIA+ individuals, and survivors of violence. Addressing these inequities is critical not only for advancing health outcomes, but also for challenging and reducing gender-based discrimination embedded in digital health systems. We are currently at a pivotal moment when it comes to our ability to take action; failing to do so could have long-term consequences for the accuracy of diagnoses, the effectiveness of treatments and the equity of care delivery in obstetrics.

According to multiple sources, the integration of so-called artificial intelligence (AI) into obstetrics may yield substantial opportunities for advancing maternal and foetal health care, particularly by improving early detection, enhancing diagnostic accuracy and enabling more personalised treatment strategies. AI-driven algorithms can analyse ultrasound images and foetal heart rate patterns with greater accuracy, aiding in the early identification of complications such as preeclampsia, gestational diabetes and foetal distress. Additionally, AI-powered remote monitoring systems improve pregnancy surveillance, enabling timely medical interventions. Machine learning models may facilitate personalised care by predicting risks and optimising labour management, thereby reducing maternal and infant mortality rates. AI may also streamline administrative processes, allowing health-care providers to focus on patient-centred care while improving resource allocation in hospitals. Furthermore, AI-powered telemedicine expands access to obstetric care in underserved regions, and its applications in research accelerate the development of innovative treatments. Collectively, these advancements may contribute to safer pregnancies, more efficient health-care systems and improved maternal and neonatal outcomes.

In one study, for example, machine-learning classification attained 94 per cent sensitivity, 91 per cent specificity and an area under the curve of 99 per cent. Compared with obstetrician and midwife predictions and with methods reported in earlier studies, machine learning greatly enhanced efficiency in distinguishing caesarean sections from normal vaginal deliveries using foetal heart rate data. A recent study on women with ovarian endometriosis showed that an AI algorithm using MRI images significantly improved diagnostic accuracy. In another example, the application of AI in the triage of women with acute abdominal pain produced accurate models for rapid assessment and triage.
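For readers unfamiliar with these metrics, the following minimal sketch shows how sensitivity and specificity are computed from predictions and true outcomes. The numbers here are invented for illustration and are not data from any of the studies mentioned above.

```python
# Illustrative only: hypothetical prediction/label pairs, not study data.
# 1 = caesarean section, 0 = normal vaginal delivery.
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
predictions = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)

sensitivity = tp / (tp + fn)  # share of true caesarean cases the model catches
specificity = tn / (tn + fp)  # share of normal deliveries correctly ruled out

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

A model with 94 per cent sensitivity misses roughly 6 in 100 true caesarean cases; with 91 per cent specificity, it falsely flags roughly 9 in 100 normal deliveries. Which of these errors is more costly depends on the clinical context, which is precisely why such figures cannot be read in isolation.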

While AI in obstetrics offers numerous benefits, it also presents several challenges and limitations. One major concern is the risk of algorithmic bias, as AI models are often trained on datasets that may not fully represent diverse populations, potentially leading to disparities in care. Additionally, overreliance on AI could reduce the development of clinical intuition and decision-making skills among health-care professionals, particularly if AI-generated recommendations are followed without critical evaluation. Data privacy and security risks are also significant, as AI systems require large amounts of sensitive (health) data, raising concerns about patients’ consent, data breaches and unauthorised access. Furthermore, AI's inability to account for complex, individualised cases may lead to misdiagnoses or inappropriate treatment recommendations, as not all pregnancies follow predictable patterns. Last but not least, legal and ethical uncertainties surrounding AI decision-making, particularly in high-stakes scenarios such as labour and delivery, raise questions about liability and accountability when errors occur. Addressing these challenges is crucial to ensuring that AI enhances rather than compromises obstetric care. These challenges not only impact the quality and equity of obstetric care but may also create bottlenecks in achieving key EU priorities for 2021–2027, such as promoting health equity, advancing digital innovation and safeguarding fundamental rights. To stay on course, the EU must ensure that AI integration in health care is both inclusive and ethically grounded.

Risk modelling in health care: systemic violence perpetuating obstetric violence and racism

Risk modelling refers to the use of statistical and computational techniques to estimate the likelihood of specific outcomes, such as complications during pregnancy or childbirth. It has been used in obstetrics for several decades to support clinical decision-making. The integration of machine learning and AI in recent years has enhanced its sophistication. Discussing risk modelling is essential, as it plays a critical role in predicting and preventing adverse events, guiding interventions and, ultimately, improving maternal and foetal health outcomes. But while it is intended to improve patient outcomes, risk modelling in health care may inadvertently perpetuate systemic violence, particularly in obstetrics, where it may reinforce medical racism and obstetric violence. Algorithmic biases embedded in AI-driven risk assessments often arise from the use of historical data that disproportionately reflects racial disparities, leading to skewed risk stratifications that systematically disadvantage marginalised communities. For example, Black and Indigenous women are more likely to be classified as »high-risk« based on population-level data rather than individualised assessments. This may result in over-medicalisation, unnecessary interventions and a lack of autonomy in their care. This reliance on flawed predictive models may exacerbate obstetric violence by being used to justify coercive practices such as forced caesarean sections, denial of pain relief or dismissive attitudes towards patients’ concerns. Furthermore, risk modelling often prioritises institutional efficiency and liability reduction over patient-centred care, reinforcing a medical system that privileges control over informed consent. As a result, racialised individuals experience disproportionate rates of maternal mortality, traumatic birth experiences and diminished trust in health-care providers.
Addressing these issues requires a critical reassessment of how risk modelling is developed and applied, ensuring that AI and predictive analytics do not perpetuate the very inequities they aim to resolve.
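The mechanism described above, a population-level group variable overriding individualised assessment, can be made concrete with a toy logistic risk model. Everything here is invented for illustration: the features, weights and the "group flag" are hypothetical, not drawn from any real clinical system.

```python
import math

# Hypothetical coefficients for a toy logistic risk model. The "group_flag"
# stands in for a population-level variable (e.g. a racial category) that
# historical data has correlated with adverse outcomes.
WEIGHTS = {"intercept": -3.0, "age_over_35": 0.8,
           "prior_csection": 1.2, "group_flag": 1.5}

def risk_score(age_over_35: int, prior_csection: int, group_flag: int) -> float:
    """Probability the toy model assigns to the 'high-risk' label."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["age_over_35"] * age_over_35
         + WEIGHTS["prior_csection"] * prior_csection
         + WEIGHTS["group_flag"] * group_flag)
    return 1 / (1 + math.exp(-z))

# Two patients with identical clinical features; only the group flag differs.
patient_a = risk_score(age_over_35=0, prior_csection=0, group_flag=0)
patient_b = risk_score(age_over_35=0, prior_csection=0, group_flag=1)
print(f"patient A: {patient_a:.2f}, patient B: {patient_b:.2f}")
```

Because the group coefficient was learned from historical, population-level disparities, patient B receives a markedly higher risk score than patient A despite identical clinical features. A real system is far more complex, but the structural point is the same: group membership in the training data becomes an individual "risk" at prediction time.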

Obstetric violence is a form of systemic violence deeply embedded in health-care systems. It reflects structural and cultural inequities that disproportionately impact marginalised groups. It manifests in dismissive attitudes, coercive practices, lack of informed consent and disrespectful treatment during pregnancy, childbirth and postpartum care. Rooted in power dynamics, gender discrimination and hierarchical medical practices, systemic obstetric violence prioritises institutional efficiency or provider authority over patient autonomy. Marginalised groups, such as women of colour, low-income individuals and persons with diverse gender identities, are particularly vulnerable, exacerbating disparities and structural racism in maternal health outcomes.

To sum up, AI in obstetrics has the potential to improve maternal and foetal health care, but it may also introduce significant biases that disproportionately affect transgender and non-binary patients. Most obstetric AI models are trained on cisnormative datasets that assume a binary understanding of pregnancy, failing to account for the unique medical and social needs of transgender individuals. As a result, these systems may misclassify or exclude trans patients, leading to inadequate risk assessments and inappropriate clinical recommendations. Additionally, AI-driven electronic health records and diagnostic tools often lack inclusive language and options for gender-diverse identities, reinforcing systemic barriers to equitable care. These biases contribute to disparities in maternal health outcomes, as transgender patients may experience misgendering and denial of care or may be reluctant to seek medical attention due to prior experiences of discrimination. Addressing these challenges requires inclusive data collection, bias mitigation strategies in AI development, and training for health-care providers to ensure that obstetric AI enhances, rather than exacerbates, health-care inequities for transgender patients.

How can we fix this broken system? Tackling this systemic issue demands bold, multi-layered reforms, starting with individual accountability, extending to institutional policies and challenging the broader cultural norms that enable mistreatment and neglect. Digital technologies can be powerful tools in this transformation: they can improve data collection, help identify patterns of mistreatment and give patients greater agency in their care. Their implementation, however, must avoid reinforcing existing inequities or introducing new forms of control. To truly drive change, these tools must be designed and implemented with equity at their core, ensuring that they dismantle, rather than deepen, existing disparities.

In obstetric AI, statistical models are essential for training algorithms, ensuring data-driven predictions about pregnancy risks, labour progression and neonatal outcomes. However, the accuracy of these models depends on the quality and inclusivity of the data used, as biased or incomplete datasets can reinforce health-care disparities, particularly for marginalised groups, such as Black women and transgender patients.

Statistics plays a crucial role in preventing risks and addressing historical violence in obstetrics by providing evidence-based insights into maternal and foetal health and helping to identify patterns of disparities in health-care delivery. Through statistical analysis, health-care providers can better predict and manage risks, such as preeclampsia, gestational diabetes or foetal distress, leading to timely interventions and improved patient outcomes. Moreover, by examining historical data, statistics can uncover systemic patterns of obstetric violence, such as coercive medical practices, racial discrimination and gender bias, which disproportionately affect marginalised groups, including Black and Indigenous women and transgender people. Through careful analysis, statisticians can identify areas in which health-care systems have failed to provide equitable care, enabling policymakers and clinicians to design interventions that address these gaps and promote justice in maternal health care. Furthermore, by incorporating inclusive and diverse datasets, statistics can help to ensure that risk modelling algorithms in obstetrics are accurate and fair, preventing biased medical decisions and reducing the likelihood of harm.
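One of the simplest statistical tools for surfacing the disparities described above is a subgroup audit: computing outcome rates separately for each patient group and comparing them. The sketch below uses invented numbers purely for illustration; a real audit would draw on actual care records and test whether gaps are statistically and clinically meaningful.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, had_adverse_outcome).
# Numbers are invented for illustration, not real patient data.
records = [("A", 1), ("A", 0), ("A", 0), ("A", 0),
           ("B", 1), ("B", 1), ("B", 0), ("B", 0)]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [adverse, total]
for group, adverse in records:
    counts[group][0] += adverse
    counts[group][1] += 1

rates = {g: adverse / total for g, (adverse, total) in counts.items()}
for group, rate in sorted(rates.items()):
    print(f"group {group}: adverse-outcome rate {rate:.0%}")
```

A persistent gap between subgroups, here 25 per cent versus 50 per cent, is exactly the kind of signal that should trigger scrutiny of both the care being delivered and the data feeding any risk model trained on it.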

To conclude, the deployment of AI in obstetrics presents significant opportunities for improving maternal and foetal care, but it may also introduce a number of key risks. One primary concern is bias in data, as AI systems are often trained on datasets that may not adequately represent diverse populations, leading to inaccurate risk assessments and perpetuating health-care disparities, especially for marginalised groups such as Black, Indigenous and transgender patients. Data privacy and security are also critical, as AI systems require large amounts of sensitive medical data, making them vulnerable to breaches or unauthorised access. The lack of transparency in many AI models, particularly those using deep learning, limits clinicians' ability to understand and explain AI-driven decisions, which could undermine trust and lead to overreliance on automated recommendations. Furthermore, limited generalisability means that AI tools trained on specific populations may not perform well across different regions, ethnic groups, or health-care settings. Ethical and legal issues may arise concerning accountability when AI errors occur, particularly in high-stakes scenarios such as labour and delivery. Addressing these risks requires rigorous testing, inclusive datasets, transparent algorithms and ongoing collaboration between health-care professionals and technology developers to ensure that AI enhances obstetric care equitably and safely.

Anastasia Karagianni is a Doctoral Researcher at the LSTS Department of the Law and Criminology Faculty of VUB and a former FARI Scholar. Her academic research focuses on the "Divergencies of gender discrimination in AI". Beyond her academic work, Anastasia is a digital rights activist: she is a co-founder of DATAWO, a civil society organisation based in Greece advocating for gender equality in the digital era, and founder of @femme_group_BrusselsGR. She was a MozFest Ambassador in 2023 and a Mozilla Awardee for the project "A Feminist Dictionary in AI" of the Trustworthy Artificial Intelligence working group.

Technology, Employment and Wellbeing is an FES blog that offers original insights on the ways new technologies impact the world of work. The blog brings together views from tech practitioners, academic researchers, trade union representatives and policymakers.
