AI Helps Reveal Hidden Police Bias and Restore Trust
Introduction
Police departments across the country face increasing scrutiny and criticism from the public and the media over allegations of racial profiling, excessive force, and other forms of misconduct. These accusations erode the trust and legitimacy of the police, which in turn affects their ability to prevent and solve crimes, maintain order, and protect the rights and safety of the citizens. One of the key challenges for police reform is to ensure that police contacts with the public are fair, respectful, and transparent, and that any potential bias or discrimination is identified and addressed.
One of the ways to achieve this goal is to require police officers to document and report every contact they have with the public, regardless of whether it results in a citation, an arrest, or no action. This would provide a comprehensive and accurate data set that can be used to monitor and evaluate the patterns and outcomes of police interactions with different groups of people, such as by race, gender, age, or sexual orientation. Such data can help identify and correct any disparities or problems in police practices, as well as inform and improve police training, policies, and accountability. However, this approach also faces significant challenges and resistance from both the police and the public.
One of the main obstacles is the practical difficulty and burden of documenting and reporting every police contact. This would require a substantial amount of time, effort, and resources from police officers, who already deal with a heavy workload and complex situations. Moreover, the documentation and reporting process may be prone to errors, omissions, or inconsistencies, as officers may have to rely on their memory, judgment, or perception to record the details of each contact. Another challenge lies in the legal and ethical issues that may arise from the collection and use of sensitive personal information, such as race, gender, or sexual orientation, which may not be readily observable or verifiable by the officers. This may expose officers to accusations of bias, discrimination, or harassment, as well as violate the privacy and dignity of the individuals they encounter.
This white paper proposes a simple and innovative solution to overcome these challenges and build trust in policing: using artificial intelligence (AI) to document and analyze police contacts with the public. AI is a branch of computer science that aims to create machines or systems that can perform tasks that normally require human intelligence, such as perception, reasoning, learning, decision making, and communication. AI has been widely used and applied in various fields and domains, such as health care, education, business, entertainment, and security. In this paper, I will explore how AI can be used to enhance and automate the documentation and analysis of police contacts with the public, by leveraging the existing and emerging technologies of body-worn cameras, natural language processing, computer vision, and machine learning. I will also discuss the benefits and challenges of using AI for this purpose, as well as the implications and recommendations for policy and practice.
How AI Can Document and Analyze Police Contacts
Body-worn cameras (BWCs) are small devices that can be attached to the uniform or equipment of police officers, and that can record audio and video of their interactions with the public. BWCs have been increasingly adopted and implemented by police departments across the country, as a way to enhance transparency, accountability, and evidence collection in policing. According to a 2016 survey by the Bureau of Justice Statistics[1], about half of the general-purpose law enforcement agencies in the United States had acquired BWCs, and about a third had fully deployed them to all patrol officers. BWCs have been shown to have positive effects on reducing the use of force, citizen complaints, and litigation against the police, as well as improving the behavior and attitudes of both the officers and the public.
However, BWCs also pose significant challenges and limitations for the documentation and analysis of police contacts. One of the main challenges is the sheer volume and complexity of the data that BWCs generate. According to a 2018 report by the Police Executive Research Forum[2], a single BWC can produce about 1.5 hours of video per shift, which translates to about 18,000 hours of video per year for a police department with 100 officers. This amount of data is overwhelming and costly to store, manage, and review, and requires substantial human labor and expertise to process and analyze. Moreover, the data that BWCs provide may not be sufficient or reliable enough to capture the full context and details of each contact, such as the location, time, duration, reason, outcome, or demographic characteristics of the individuals involved. These information gaps may hinder the ability to conduct a comprehensive and accurate assessment of the patterns and trends of police contacts, and to identify and address any issues or problems in police practices.
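The scale of this data problem can be illustrated with a rough calculation. The per-shift figure comes from the PERF report cited above; the number of recorded shifts per year and the video bitrate used below are illustrative assumptions, not figures from the report:

```python
# Back-of-envelope estimate of annual BWC footage and storage for a department.
# The 1.5 hours/shift figure is from the PERF report cited above; the shift
# count and bitrate are illustrative assumptions for this sketch only.

HOURS_PER_SHIFT = 1.5        # average recorded video per officer per shift (PERF)
SHIFTS_PER_YEAR = 120        # assumed recorded shifts per officer per year
OFFICERS = 100               # department size used in the report's example
GB_PER_HOUR = 1.0            # assumed ~1 GB per hour of BWC-quality video

annual_hours = HOURS_PER_SHIFT * SHIFTS_PER_YEAR * OFFICERS
annual_storage_gb = annual_hours * GB_PER_HOUR

print(f"Annual footage: {annual_hours:,.0f} hours")      # → Annual footage: 18,000 hours
print(f"Approx. storage: {annual_storage_gb / 1000:.0f} TB")  # → Approx. storage: 18 TB
```

Even under these conservative assumptions, a mid-sized department accumulates on the order of 18 terabytes of footage a year, which is why manual review of more than a small sample is impractical.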
This is where AI can play a crucial role in enhancing and automating the documentation and analysis of police contacts. AI can be used to extract, classify, and summarize the relevant information from the BWC footage, using a combination of natural language processing (NLP), computer vision (CV), and machine learning (ML) techniques. NLP is a subfield of AI that deals with the analysis and generation of natural language, such as speech or text[3]. CV is a subfield of AI that deals with the analysis and understanding of visual information, such as images or videos[4]. ML is a subfield of AI that deals with the creation and application of algorithms that can learn from data and improve their performance over time[5]. By applying these techniques to the BWC footage, AI can provide a more efficient and effective way to document and analyze police contacts, as well as to generate useful insights and feedback for police reform and improvement.
The following are some examples of how AI can document and analyze police contacts using BWC footage:
AI can use NLP to transcribe the speech and dialogue of the officers and the public, and to extract key information and keywords from the conversation, such as the reason for the contact, its outcome, or whether consent was given. AI can also use NLP to detect the tone, sentiment, and emotion of the speakers, and to identify any indicators of aggression, hostility, or distress[6].
AI can use CV to recognize and annotate the faces, objects, and actions in the video, and to infer the demographic characteristics of the individuals, such as their race, gender, age, or sexual orientation. AI can also use CV to estimate the distance, angle, and movement of the camera and the subjects, and to determine the level of visibility, lighting, and noise in the environment[7].
AI can use ML to classify and label the type and category of the contact, such as a traffic stop, a pedestrian stop, a call for service, or a use of force incident. AI can also use ML to compare and correlate the data from different sources and modalities, such as the audio, video, GPS, or metadata, and to identify any anomalies, discrepancies, or patterns in the data[8].
AI can use NLP, CV, and ML to generate a summary and a report of the contact, highlighting the main facts, events, and outcomes of the interaction, as well as any issues, concerns, or recommendations for improvement. AI can also use NLP, CV, and ML to provide feedback and guidance to the officers, such as by alerting them to any potential risks, violations, or best practices, or by suggesting that they take certain actions, such as de-escalating, explaining, or apologizing[9].
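The flow described in the examples above can be sketched in code. The sketch below takes an already-transcribed contact, classifies the contact type, flags possible distress language for human review, and emits a structured record. The keyword rules and distress terms are toy stand-ins of my own invention; a real deployment would use trained speech-to-text, NLP, and ML models rather than string matching:

```python
# Minimal illustrative pipeline: transcript -> classification -> structured record.
# Keyword lists are toy stand-ins for trained models, chosen only to show the flow.
from dataclasses import dataclass, asdict

CONTACT_KEYWORDS = {
    "traffic stop": ["license", "registration", "speeding", "pulled you over"],
    "pedestrian stop": ["walking", "sidewalk", "stopped you on the street"],
    "call for service": ["you called", "dispatch", "reported"],
}

DISTRESS_TERMS = ["scared", "help me", "hurt", "can't breathe"]

@dataclass
class ContactRecord:
    contact_type: str
    distress_flags: list
    word_count: int

def analyze_transcript(transcript: str) -> ContactRecord:
    text = transcript.lower()
    # Classify by which category's keywords appear most often (toy classifier).
    scores = {ctype: sum(text.count(kw) for kw in kws)
              for ctype, kws in CONTACT_KEYWORDS.items()}
    contact_type = max(scores, key=scores.get) if any(scores.values()) else "unknown"
    # Flag possible distress indicators for human review rather than auto-judging.
    flags = [term for term in DISTRESS_TERMS if term in text]
    return ContactRecord(contact_type, flags, len(text.split()))

record = analyze_transcript(
    "Good evening, I pulled you over for speeding. License and registration, please."
)
print(asdict(record))
# → {'contact_type': 'traffic stop', 'distress_flags': [], 'word_count': 12}
```

The design choice worth noting is that the system emits a structured record and flags for human review rather than a final judgment, which matches the accountability concerns discussed later in this paper.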
Benefits and Challenges of Using AI for Police Contacts
Using AI to document and analyze police contacts with the public can offer several benefits and advantages for both the police and the public, such as:
AI can reduce the workload and burden of the officers, by automating and streamlining the documentation and reporting process, and by saving them time, effort, and resources. This can allow the officers to focus more on their core duties and responsibilities, and to improve their performance and productivity[10][11].
AI can improve the quality and accuracy of the data, by providing a more comprehensive and consistent capture and analysis of the information, and by minimizing the errors, omissions, or biases that may occur in human documentation and reporting. This can enhance the validity and reliability of the data, and the credibility and trustworthiness of the police[12].
AI can increase the transparency and accountability of the police, by providing a more objective and verifiable record and evaluation of the police contacts, and by making the data more accessible and understandable to the public and the media. This can foster a more open and honest dialogue and communication between the police and the public, and a more informed and evidence-based decision making and policy making process[13].
AI can facilitate the monitoring and evaluation of the police, by providing a more efficient and effective way to measure and assess the patterns and outcomes of the police contacts, and by generating useful insights and feedback for police reform and improvement. This can help identify and address any disparities or problems in police practices, as well as inform and improve police training, policies, and accountability[14].
AI can enhance the fairness and respect of the police, by providing a more neutral and impartial documentation and analysis of the police contacts, and by avoiding the collection and use of sensitive personal information that may violate the privacy and dignity of the individuals. This can reduce the potential for bias, discrimination, or harassment in policing, and improve the relationship and trust between the police and the public[15].
However, using AI to document and analyze police contacts with the public also poses several challenges and risks that need to be carefully considered and addressed, such as:
AI may not be able to capture and understand the full context and complexity of the police contacts, as it may lack the human intuition, judgment, or common sense that are required to interpret and explain the situations and behaviors of the officers and the public. AI may also make mistakes or errors in the data extraction, classification, or summarization, or generate inaccurate or misleading results or recommendations[16].
AI may not be able to guarantee the security and privacy of the data, as it may be vulnerable to hacking, tampering, or misuse by unauthorized or malicious actors, who may access, alter, or delete the data, or use it for harmful or illegal purposes[17]. AI may also raise ethical and legal issues regarding the consent, ownership, and control of the data, and the rights and responsibilities of the data providers, users, and subjects[18].
AI may not be able to ensure the accountability and oversight of the police, as it may create a lack of clarity or transparency in the documentation and analysis process, or a lack of responsibility or liability for the data quality or outcomes[19]. AI may also create a dependency or reliance on the technology, or a loss of human agency or autonomy in the policing process[20].
AI may not be able to improve the trust and legitimacy of the police, as it may face resistance or skepticism from the police or the public, who may not trust or accept the technology, or its results or recommendations[21]. AI may also create or exacerbate the social and economic inequalities or disparities in the access, use, or impact of the technology, or its benefits or harms[22].
Implications and Recommendations for Policy and Practice
Using AI to document and analyze police contacts with the public is a promising and innovative solution that can build trust in policing, by enhancing and automating the documentation and analysis process, and by providing useful insights and feedback for police reform and improvement[23][24]. However, this solution also requires careful and responsible implementation and regulation, to ensure that it is effective, ethical, and equitable, and that it respects the rights and interests of both the police and the public[25][26]. Therefore, I propose the following implications and recommendations for policy and practice, based on the best practices and guidelines from the literature and the field:
Policy makers and practitioners should conduct a thorough and comprehensive assessment of the needs, goals, and expectations of the police and the public, and the feasibility, suitability, and acceptability of the AI solution, before adopting and deploying it[27][28]. They should also involve and engage the relevant stakeholders, such as the officers, the community members, the civil rights groups, the researchers, and the technology providers, in the design, development, and evaluation of the AI solution, and ensure that it is aligned with the mission, vision, and values of the police and the public[29][30].
Policy makers and practitioners should establish and enforce clear and consistent standards and protocols for the collection, storage, management, and analysis of the data, and the generation, dissemination, and use of the results and recommendations, of the AI solution[31][32]. They should also ensure that the data and the technology are secure, reliable, and accurate, and that they comply with the relevant ethical and legal principles and regulations, such as the privacy, consent, ownership, and control of the data, and the accountability, oversight, and auditability of the technology[33][34][35].
Policy makers and practitioners should provide and promote adequate and appropriate training and education for the officers and the public, on the purpose, function, and operation of the AI solution, and the benefits, challenges, and risks of using it[36][37]. They should also ensure that the officers and the public are aware and informed of their rights and responsibilities, and their roles and expectations, in relation to the AI solution, and that they have the opportunity and the means to provide and receive feedback and guidance, and to express and address any concerns or complaints[38][39].
Policy makers and practitioners should monitor and evaluate the performance and impact of the AI solution, and the satisfaction and perception of the officers and the public, on a regular and ongoing basis, and use the data and the feedback to improve and adjust the AI solution, and to inform and update the policies and practices[40][41]. They should also communicate and share the results and the lessons learned from the AI solution, and the successes and challenges of using it, with the officers and the public, and with other stakeholders and partners, and seek and incorporate their input and suggestions[42].
Conclusion
This white paper has explored how AI can build trust in policing by documenting and analyzing police contacts with the public, using the existing and emerging technologies of body-worn cameras, natural language processing, computer vision, and machine learning. I have discussed how AI can provide more efficient and effective ways to document and analyze police contacts, and to generate useful insights and feedback for police reform and improvement. I have also discussed the benefits and challenges of using AI for this purpose, and the implications and recommendations for policy and practice. I hope that this white paper will inspire and inform policy makers and practitioners, as well as researchers and technology providers, who are interested and involved in this topic, and that it will contribute to the advancement and innovation of policing and public safety.
About the author
FPI Fellow Philip Lukens is a retired police chief and a policing consultant who writes extensively about policing and artificial intelligence, as well as many other police-related issues. Click here to read his full bio. To read more of his commentary on AI in policing, visit his Substack.
Footnotes
[1] Hyland, S. S. (2018). Body-worn cameras in law enforcement agencies, 2016. Bureau of Justice Statistics. Retrieved from https://bjs.ojp.gov/content/pub/pdf/bwclea16.pdf
[2] Police Executive Research Forum. (2018). Cost and benefits of body-worn camera deployments. Retrieved from https://www.policeforum.org/assets/BWCCostBenefit.pdf
[3] IBM. (n.d.). What is natural language processing? Retrieved from https://www.ibm.com/topics/natural-language-processing
[4] IBM. (n.d.). What is computer vision? Retrieved from https://www.ibm.com/topics/computer-vision
[5] IBM. (n.d.). What is machine learning? Retrieved from https://www.ibm.com/topics/machine-learning
[6] IBM. (n.d.). What is natural language processing? Retrieved from https://www.ibm.com/topics/natural-language-processing
[7] IBM. (n.d.). What is computer vision? Retrieved from https://www.ibm.com/topics/computer-vision
[8] Expert System Team. (2017, March 16). What is machine learning? A definition. Retrieved from https://www.coursera.org/articles/what-is-machine-learning
[9] Kulkarni, A. (2019, October 24). How AI can help police departments analyze body camera footage. Retrieved from https://mindy-support.com/news-post/ai-can-help-police-departments-analyze-body-camera-videos/
[10] I-Team. (2020, October 29). I-Team: How St. Louis area departments are using AI in policing. KSDK. Retrieved from https://www.ksdk.com/article/news/investigations/how-st-louis-area-police-are-using-ai/63-eff17256-e2e1-4457-b253-1cf55eca1842
[11] Hsiung, C., & Chen, F. (2023, September 20). Exploring AI for law enforcement: Insight from an emerging tech expert. Police Chief Online. https://www.policechiefmagazine.org/exploring-ai-law-enforcement-interview/?ref=7df1c3f89c49b17e821c7e16ca2995d4
[12] Campbell, B. (2023, January 24). How AI can help solve crimes faster. Police 1. https://www.police1.com/police-products/police-technology/articles/how-ai-can-help-law-enforcement-agencies-solve-crimes-faster-xg3GdkdLJnzcXVQ3/
[13] Hsiung, C., & Chen, F. (2023, September 20). Exploring AI for law enforcement: Insight from an emerging tech expert. Police Chief Online. Retrieved from https://www.policechiefmagazine.org/exploring-ai-law-enforcement-interview/
[14] Hsiung, C., & Chen, F. (2023, September 20). Exploring AI for law enforcement: Insight from an emerging tech expert. Police Chief Online. Retrieved from https://www.policechiefmagazine.org/exploring-ai-law-enforcement-interview/
[15] Jacques, P. (2023, November 29). Introduction of AI a ‘significant step forward’ in force contact management. Police Professional. Retrieved from https://policeprofessional.com/news/introduction-of-ai-a-significant-step-forward-in-force-contact-management/
[16] Norris, D. (2019, June 12). Artificial intelligence and community-police relations. Police Chief Magazine. Retrieved from https://www.policechiefmagazine.org/ai-community-police-relations/?ref=65e6976ce736e32b298cbf0bfebb23c7
[17] WGU. (2021). How AI is affecting information privacy and data. Retrieved from https://www.wgu.edu/blog/how-ai-affecting-information-privacy-data2109.html
[18] Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J. C., Malhotra, N., Lui, V., & Gibson, J. (2021). Artificial intelligence for good health: a scoping review of the ethics literature. Retrieved from https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00577-8
[19] U.S. Government Accountability Office. (2021). Artificial intelligence: An accountability framework for federal agencies and other entities. Retrieved from https://www.gao.gov/products/gao-21-519sp
[20] Millwood, L. (2023). Facing our fears: AI and the role of humans in AI decision making. Retrieved from https://www.forbes.com/sites/forbesbusinesscouncil/2023/11/01/facing-our-fears-ai-and-the-role-of-humans-in-ai-decision-making/?sh=87f02e69c260
[21] Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency, 279-288. Retrieved from https://arxiv.org/abs/1811.01439
[22] Dignum, V. (2021). Responsible artificial intelligence: how to develop and use AI in a responsible way. Retrieved from https://link.springer.com/book/10.1007/978-3-030-30371-6
[23] Zoufal, D., Coxon, S., Lewin, J., Wijsman, O., & Allen, C. (2023). Advancing policing through AI: Insights from the global law enforcement community. Retrieved from https://www.police1.com/iacp/articles/advancing-policing-through-ai-insights-from-the-global-law-enforcement-community-3SzYuRViccy8vwQ3/
[24] Jacques, P. (2023, November 29). Introduction of AI a ‘significant step forward’ in force contact management. Police Professional. Retrieved from https://policeprofessional.com/news/introduction-of-ai-a-significant-step-forward-in-force-contact-management/
[25] National Police Chiefs’ Council. (2023). Covenant for using artificial intelligence (AI) in policing. Retrieved from https://science.police.uk/site/assets/files/4682/ai_principles_1_1_1.pdf
[26] Westendorf, T. (2022). Artificial intelligence and policing in Australia. Australian Strategic Policy Institute. Retrieved from https://www.aspi.org.au/report/ai_policing_australia
[27] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[28] Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency, 279-288. Retrieved from https://arxiv.org/abs/1811.01439
[29] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[30] Russell, C., Kusner, M., Loftus, J., & Silva, R. (2018). When worlds collide: integrating different counterfactual assumptions in fairness. Advances in Neural Information Processing Systems, 6414-6423. Retrieved from https://www.semanticscholar.org/paper/When-Worlds-Collide%3A-Integrating-Different-in-Russell-Kusner/c076c74179067422610917886c6566b2504e52e2
[31] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[32] Westendorf, T. (2022). Artificial intelligence and policing in Australia. Australian Strategic Policy Institute. Retrieved from https://www.aspi.org.au/report/ai_policing_australia
[33] Privacy Act 1988 (Cth). Retrieved from https://www.legislation.gov.au/C2004A03712/2014-03-12/text
[34] Australian Human Rights Commission Act 1986 (Cth). Retrieved from http://classic.austlii.edu.au/au/legis/cth/consol_act/ahrca1986373/
[35] Department of Industry, Science and Resources. (2019). Australia’s artificial intelligence ethics framework. Retrieved from https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework
[36] Hajkowicz, S., Karimi, S., Wark, T., Chen, C., Evans, M., Rens, N., Dawson, D., Charlton, A., & Brennan, T. (2019). Artificial intelligence: Solving problems, growing the economy and improving our quality of life. CSIRO. Retrieved from https://www.semanticscholar.org/paper/Artificial-Intelligence%3A-solving-problems%2C-growing-Hajkowicz-Karimi/f78ae938f6c5852b774fee81d38c6d654dda6dea
[37] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[38] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[39] Wachter, S., Mittelstadt, B., & Russell, C. (2020). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Artificial Intelligence and Law, 28(4), 741-767. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547922
[40] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
[41] Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency, 279-288. Retrieved from https://arxiv.org/abs/1811.01439
[42] INTERPOL. (2023, June). Toolkit for Responsible AI Innovation in Law Enforcement. Retrieved from https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit