It’s obvious we are experiencing exponential growth in artificial intelligence (AI). Just a few short years ago, discussions among police leaders about the use of AI in policing were framed around beliefs about AI many years in the future. Today, policing leaders must be concerned about AI now. Not a few years in the future. Now. The speed at which AI is advancing is breathtaking. A couple of years ago, for instance, who had heard of ChatGPT? Now its implications are an omnipresent issue for most of our society. There are predictions that in the very near future (as in the next couple of years) we will witness the creation of true artificial general intelligence – algorithms with the ability to learn and create all by themselves. This is truly when science fiction becomes reality.

Like almost every other sector of our economy, information systems and data vendors are searching for ways to increase their profitability with AI. This is also true for many developers of public safety-focused products. Before long, the inclusion of AI in police information products will be the norm, not the exception. As such, police leaders must develop an increased knowledge base about the nature of AI and the questions and concerns they should be raising – if for no other reason than because their key stakeholders are already beginning to do so.

For a variety of reasons, police leaders stand at a crossroads between tradition and innovation. Whether it be reputational needs, practical improvements, financial constraints, or professional interests in advancing their agencies, these leaders are rapidly being drawn into the AI world, with its promise of revolutionizing policing. As this occurs, fundamental questions regarding ethics, transparency, and community trust are being raised. As police leaders consider embarking on this transformative AI journey, they should weigh several key factors to ensure the responsible implementation of AI.

The decisions around using AI in policing are multi-faceted and span a wide range of topics. Here are a few of the more prominent:

Focus on Outcomes

Achieving equitable technology in policing begins with being clear about what outcomes the police want to realize by using AI. Police leaders must question their assumptions about who will be affected by AI implementation, what past biases might seep into training data, and so on. Only by actively seeking out bias can we address it. Like so many other aspects of policing, leadership introspection is key to creating fair and equitable public safety. A key question leaders should also be asking themselves is: “Just because we can use AI to achieve our desired outcome, should we?” This is one of the core principles in the responsible use of AI. Keeping in mind the adage “just because we can do something doesn’t mean we should” is a hallmark of a mature, wise, and ethically grounded leader.

Trust, Confidence, and AI

Any decision to implement AI in policing should be deliberate and transparent. This is central to building or maintaining the public’s trust and confidence in the police. AI, however, often leaves us guessing; its inner workings remain hidden, leaving police leaders uncertain how best to implement it in their departments, much less explain it to the public. Its mysterious allure makes for both promise and peril when contemplating AI integration. We must delve deeper into this topic to understand why transparency should not be an afterthought but an absolute imperative in the coming age of algorithmic policing.

Imagine an AI system designed to predict crime hotspots or identify suspects. It processes vast amounts of data, crunches numbers, and makes recommendations without ever providing an explanation for its decisions. This is the so-called “black box” problem. Its opaqueness presents us with the challenge of understanding its conclusions, which in turn creates problems for leaders trying to explain it to the people they serve.

Effective policing relies heavily on trust between communities and the police. As AI makes behind-the-scenes decision-making more opaque, that trust can become even more tenuous. Police leaders must be prepared to offer enough explanation to illuminate the black boxes shaping decisions in their communities. Community trust in the police is predicated, in part, on transparency and clarity of decisions and actions.

As more decision-making is delegated to AI, the urgency of transparency will increase. When AI-driven public safety decisions impact people’s lives, or their sense of safety, trust hangs in the balance unless people understand how the technology operates. They are sure to ask, “Can we trust how you use AI without a rudimentary understanding of its workings?” Police leaders had better be prepared to answer this and other AI-related questions or risk being perceived as uninformed at best.

Police leaders face an ethical balancing act in relation to AI technology. Its benefits can enhance public safety. Soon we will have the ability to “feed” unsolved murder cases to algorithms, for instance, and watch the AI make previously unknowable connections that will be key to solving horrific crimes and bringing closure to the loved ones of victims. And the police are already using AI-enabled surveillance tools to help them stop or solve other acts of violence or extremism. Generally, the public will probably support such uses – but only if the police and their AI tech can be trusted. Refining models, creating accessible explanations, and encouraging community dialogue all help build trust and responsible AI practices.

Police leaders must communicate openly about AI adoption to build public trust in its purpose, benefits, and safeguards. And they must be “community-led” in their implementation of the technology to enhance community trust and confidence.

Opening the Black Box

Explaining the more granular aspects of AI to the public is not an easy task. First, leaders must understand enough about the technology to explain it. By gaining this knowledge, and sharing it, they demonstrate transparency and show that they intend to use the technology in a responsible manner.

There are efforts underway to bring transparency to AI decision processes. Models exist that attempt to demystify these processes by offering human observers insight into them. Such models make the nonlinear and complex more understandable but have their limitations. Currently, most observers remain unclear as to what exactly happens inside the AI black boxes. This is extremely troublesome when, at times, it is the actual developers of the technology who admit they don’t completely understand the very thing they have created. If they don’t understand it, how are police leaders and the public supposed to grasp it well enough to make informed, ethically based decisions about its implementation? Clearly, this is a problem without any easy answers. Perhaps we need AIs to understand the AIs!
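To make the idea of these transparency efforts concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance, applied to a purely hypothetical risk-scoring classifier. Everything below (the feature names, the synthetic data, the model choice) is an illustrative assumption, not any vendor’s actual system:

```python
# A minimal, illustrative sketch of one explainability technique:
# permutation feature importance via scikit-learn. The features and
# data below are synthetic stand-ins, not a real policing system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: each row an incident, each column
# a feature a vendor model might use (all synthetic here).
feature_names = ["prior_incidents", "time_of_day",
                 "calls_for_service", "population_density"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Output like this does not fully open the black box, but it gives a leader a defensible, plain-language starting point, such as: “the model relies most heavily on prior incidents and calls for service.”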

It is incumbent on police leaders contemplating the acquisition of advanced AI technology to push their vendors for adequate explanations of how their AI systems make the decisions they do. Only then can the police convince the public they’re being responsible with this very powerful technology.

Looking through a philosophical lens sheds light on why transparency matters. Trust doesn’t emerge out of blind faith; rather, it comes from experience and understanding. Once we comprehend how AI operates, we can gauge its trustworthiness more accurately. However, one important caveat remains: AI doesn’t operate alone but instead resides within complex sociotechnical systems (interconnected networks of humans, algorithms, and institutions) that all play into its operation. So yeah, it’s complicated.

Imagine a police department adopting AI. Officers interact with it, citizens observe its impact, and policymakers establish boundaries for it. Trust flows from this interplay. The police can build on that trust by increasing the trustworthiness of their entire system through ethical guidelines, accountability measures, and community engagement, thereby improving trust in their AI systems.

Transparency Needn't Be All or Nothing

There doesn't need to be an either/or choice when it comes to transparency. Finding the appropriate balance is key. The police collect sensitive information about people’s lives, and the law, as well as ethical principles, requires a substantial degree of security for information held by the police. In the court of public opinion this is especially true. On the other hand, by sharing enough information to build trust without jeopardizing privacy or security, policymakers, technologists, and citizens can work together to develop AI models with accessible explanations that promote discussion. Approached this way, the near future of AI looks bright!

Transparency is at the core of responsible AI implementation. When police departments implement AI systems, their citizens deserve to know how decisions are being made; trust grows when people can examine and comprehend the algorithms that impact their safety and wellbeing. But citizens also deserve a high level of privacy where police information is concerned. Wise police leaders know how to balance these interests and continually work on their personal knowledge base regarding rapidly evolving technology like AI to stay abreast of these dynamics.

The Mitigation of Unintended Bias: Navigating Choppy Waters

As I’ve alluded to, AI offers the great promise of efficiency, accuracy, and data-driven decision making. Yet under its surface lies an unyielding challenge: bias. Police leaders integrating AI into their departments must consider both bias mitigation and equity when making this choice. Progressive police leaders spend considerable time contemplating bias in policing. Now is the time for them to explore the topic of AI bias, to better understand its significance, and to devise effective strategies for navigating the ethical dilemmas it raises.

Artificial intelligence algorithms, like mirrors reflecting society, may unwittingly perpetuate biases through various channels. These include:

  • Business Processes: When current processes - be they deployment plans, crime control strategies, arrest policies, or resource allocation decisions - contain biases, AI inherits those biases as part of its model for augmenting or replacing them; the biases remain and can worsen over time. Leaders must ensure these processes are examined closely for bias.

  • Foundational Assumptions: When creating AI systems, developer assumptions regarding its goals, users, and context of use matter greatly - biased assumptions could lead to biased outcomes. It is important for police leaders to ask their AI vendors pointed questions about the assumptions they used when developing the technology they’re trying to sell.

  • Training Data: AI learns from historical data, so if that data exhibits systemic biases such as racial profiling, AI may perpetuate them. AI model architecture may also introduce bias if sensitive variables such as age, race, or gender influence predictions. Again, asking tough questions of developers before buying their products (and auditing the underlying data, as sketched just below) is the “due diligence” most people expect the police to engage in before they buy AI.
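As a concrete illustration of that due diligence, here is a minimal sketch of a pre-acquisition training-data audit: comparing outcome base rates across demographic groups in historical records before that data is ever used to train a model. The file name, column names, and the 10-point review threshold are all hypothetical placeholders:

```python
# A minimal sketch of a pre-acquisition training-data audit: compare
# outcome base rates across demographic groups in historical records.
# The file name and column names are hypothetical placeholders.
import pandas as pd

records = pd.read_csv("historical_stops.csv")  # hypothetical data export

# Rate at which stops led to arrest, broken out by recorded group.
rates = records.groupby("demographic_group")["led_to_arrest"].mean()
print(rates)

# Flag any group whose rate diverges sharply from the overall rate;
# large gaps warrant scrutiny before this data ever trains a model.
overall = records["led_to_arrest"].mean()
for group, rate in rates.items():
    if abs(rate - overall) > 0.10:  # 10-point threshold is illustrative
        print(f"Review: {group} rate {rate:.2f} vs overall {overall:.2f}")
```

A gap surfaced this way is not proof of bias on its own, but it is exactly the kind of finding leaders should put in front of a vendor before signing a contract.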

Before embarking on AI acquisition, police leaders must ask themselves a fundamental question: “Is AI even appropriate in this setting?” Leaders should avoid the temptation to engage in “technosolutionism” – the belief that technology alone is the solution – by carefully scrutinizing assumptions about a system’s goals, its impact on people, and its social context. Implicit biases, like historic sexism or racism, may worsen without explicit consideration being given to these effects.

Strategies to Combat Bias and Increase Fairness

As indicated, AI can inadvertently pick up on biases present in its training data. To help minimize bias in the AI systems they implement, policing leaders can employ the following strategies:

  • Use Fairness Metrics and Audits: Evaluate whether an AI system treats different groups equally with regard to false positives and false negatives. Regularly assess the system for bias issues, such as deviations from demographic parity (which strives for equal acceptance rates between groups), and adjust accordingly; a minimal sketch of such an audit follows this list.

  • Ethics and Accountability: Police agencies enjoying a strong and healthy relationship with the communities they protect are clear about the necessity of ethical alignment throughout the organization. Their values and ethics align with every aspect of the agency’s work, and this should include AI deployment. The use of AI must align with departmental values, ethics, and legal regulations. Successful police leaders include ethics in every step of the AI procurement or development process by engaging multiple stakeholders, including community and elected representatives.
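To ground the first bullet, here is a minimal sketch of such a fairness audit. It simply computes false positive and false negative rates per group, which is the comparison a demographic-parity-style review begins with. The group labels, ground-truth labels, and model predictions below are all synthetic assumptions for illustration:

```python
# A minimal sketch of the fairness audit described above: compare false
# positive and false negative rates across groups. All arrays are
# synthetic stand-ins for a real system's predictions and ground truth.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Report FPR and FNR per group; large gaps signal possible bias."""
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        fpr = np.mean(p[t == 0] == 1) if np.any(t == 0) else float("nan")
        fnr = np.mean(p[t == 1] == 0) if np.any(t == 1) else float("nan")
        print(f"group {g}: false positive rate {fpr:.2f}, "
              f"false negative rate {fnr:.2f}")

# Synthetic example: two groups, hypothetical labels and model outputs,
# where the model is deliberately made noisier on group B's negatives.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=500)
y_true = rng.integers(0, 2, size=500)
y_pred = np.where((groups == "B") & (y_true == 0),
                  rng.integers(0, 2, size=500),
                  y_true)
group_error_rates(y_true, y_pred, groups)
```

Run on real audit data, a persistent gap in false positive rates between groups is the kind of finding that should trigger model adjustment and community disclosure.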

One method for ensuring widespread acceptance of AI by the police is through the adoption of a community-led policing mindset. In this framework, the police view community sentiment as a critical piece of any substantive initiative they undertake. This is foundational to the “co-production” of public safety and a fundamental underpinning of community policing philosophy.

One way policing can facilitate public acceptance and buy-in of its use of AI is by establishing community “technology advisory boards” or committees to help oversee AI deployment and provide input into critical decisions. These bodies can address ethical concerns and offer guidance from a community perspective. In addition, this transforms a unilateral police decision into a collaborative, collective “we made the decision together” one. Any police chief or sheriff who has had to stand alone in front of a bank of cameras at a press conference, explaining why he or she made a controversial decision or why a policing failure occurred, can attest to the value of community members standing alongside them to explain, collaboratively and collectively, why the decision was made.

Balancing Interests

As police chiefs venture into the use of AI, they face an uphill struggle to balance effectiveness, openness, and confidentiality. AI holds great promise to improve policing, but that promise must not come at the cost of public trust or the violation of individual privacy rights. AI is all about data. Sharing sensitive data can compromise individual privacy, so pursuing openness while protecting that data is a balancing act police leaders must perform. Even though open data sharing increases transparency, it comes with risks. In addition to balancing their own use of sensitive data, leaders must ensure their data is safe from malicious actors who intend to exploit “open data” meant to further police transparency. Admittedly, this is a tough challenge, especially for smaller jurisdictions lacking adequate IT support. Nonetheless, it is a challenge leaders must face head on if they intend to use AI to enhance community safety.

Police leaders should incorporate privacy considerations early in the design of AI systems to respect individual rights and maintain trust between those systems and the people they affect. Privacy-by-design practices help guarantee this outcome. Furthermore, leaders must practice good “data governance.” To protect sensitive data, such as victim or witness statements or confidential records, clear policies and procedures must be put in place regarding data collection, storage, and usage. As straightforward as this seems, the rapidly changing AI field-of-play will make it much more complicated in the future.

Conclusion: Charting a Responsible Course

For most police leaders, implementing AI in policing is very much like navigating uncharted waters. They must command their ships carefully while considering the needs and rights of their community members as well as the benefits of technological advances. By being transparent with their processes, and engaging the public, they can harness AI's potential while also upholding public trust and maintaining equity for all.

Police leaders traversing the AI landscape must remember that finding balance is not a static achievement; rather, it requires constant vigilance, adaptability, transparency, and privacy commitments from all components of the community safety equation. Moving forward involves refining models, providing accessible explanations, and opening dialogue between all relevant parties.

In the future, as policing works toward safer and more equitable public safety practices, AI should be used as a force for good – an enabler that promotes justice over bias. By being intentional and introspective, police leaders, working with elected leadership and the communities they serve, can create an AI policing future where the technology serves to advance policing that is effective, empathetic, and just.

 
