

AI Ethics: Questions We Should Be Asking

Ethics • May 13, 2025 11:10:36 AM • Written by: Kelly Kranz

Earlier this week, a lively discussion with a colleague centered on AI-generated content. They showed how they had fully automated their social media presence, while questions about authenticity and potential bias lingered. That exchange grew into deeper reflection on the responsible use of these powerful tools, and it sparked the idea for this newsletter. AI is not merely changing what's possible; it's prompting us to revisit basic questions about fairness, openness, and human control. Let's explore AI ethics, examining key concepts, obstacles, and approaches that prioritize openness, equity, and security.

The Foundation of AI Ethics

AI ethics combines philosophical concepts, computer science methods, and legal structures to guide AI development and application.

Six major themes define this field:

  1. Human Control and Supervision: Ensuring people maintain authority and can step in when needed. This principle recognizes that AI systems should augment human capabilities rather than replace human judgment entirely. When AI makes suggestions or takes actions, humans must have the ability to override these decisions, especially in high-stakes situations involving health, safety, or significant financial implications. For example, in medical diagnosis systems, doctors should always have the final say in treatment decisions, with AI serving as a supportive tool that offers possibilities based on data analysis. Similarly, in autonomous vehicles, human override capabilities remain essential safety features.

  2. Security: Stopping AI systems from causing damage. This encompasses both physical safety and protection against misuse. AI systems must be designed with robust safeguards against exploitation, whether through adversarial attacks, data poisoning, or other manipulation techniques. Security also extends to psychological safety. AI systems that interact with humans should be designed to avoid causing distress or harm through their communications. This becomes particularly important in applications like mental health chatbots or AI companions for vulnerable populations.

  3. Personal Data Protection: Safeguarding individual information and maintaining confidentiality. AI systems often require vast amounts of data to function effectively, but this creates significant privacy concerns. Ethical AI development demands clear policies about data collection, storage, usage, and deletion. Privacy-preserving techniques such as federated learning (where models are trained across multiple devices without exchanging the underlying data) and differential privacy (adding carefully calibrated noise to datasets to protect individual records) represent promising approaches to balancing functionality with privacy protection. A brief sketch of the differential-privacy idea appears after this list.

  4. Openness: Making AI decision processes clear and easy to comprehend. Black-box AI systems that cannot explain their reasoning undermine trust and accountability. Users should understand, at an appropriate level of detail, how AI systems reach their conclusions. This doesn't necessarily mean exposing proprietary algorithms or overwhelming users with technical details. Rather, it involves providing meaningful explanations tailored to different stakeholders' needs. A doctor using an AI diagnostic tool needs different explanations than a patient receiving an AI-influenced diagnosis.

  5. Equity: Ensuring AI systems avoid discrimination against any groups or individuals. AI systems can perpetuate or even amplify existing social biases if not carefully designed and monitored. Equity requires ongoing vigilance throughout the AI lifecycle, from problem formulation and data collection to deployment and evaluation. This principle extends beyond merely avoiding harm to actively promoting fairness. In some contexts, this might mean designing systems that help correct historical inequities rather than simply maintaining the status quo.

  6. Responsibility: Holding creators and users answerable for AI system actions. As AI becomes more autonomous, questions of liability become increasingly complex. Who bears responsibility when an AI system causes harm—the developer, the deployer, the user, or some combination? Ethical frameworks must establish clear lines of accountability while recognizing the distributed nature of modern AI development. This might involve new legal structures, insurance mechanisms, or certification processes that help allocate responsibility appropriately.
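
To make the differential-privacy idea from principle 3 a little more concrete, here is a minimal Python sketch of the classic Laplace mechanism. It is an illustration rather than a production recipe: the function name, the query, and the epsilon value are assumptions chosen for the example.

```python
import numpy as np

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy version of a numeric query result.

    The Laplace mechanism adds noise scaled to sensitivity / epsilon, so any
    single person's record can only shift the published number by a bounded,
    plausibly deniable amount.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative query: how many users opted in to a feature.
# A count changes by at most 1 when one record is added or removed, so its
# sensitivity is 1. Smaller epsilon means stronger privacy and more noise.
true_count = 1284
private_count = laplace_release(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.1f}")
```

The same tradeoff runs through every privacy-preserving technique: the stronger the guarantee, the less precise the released answer.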

Confronting Bias

A major challenge in AI ethics involves bias. AI algorithms can continue biases found in their training data, resulting in unfair outcomes.

Here's how bias appears and what can be done:

  • Data-Related Bias: Training data may contain biases that reflect in AI decisions. These biases often stem from historical inequities captured in the data. For instance, hiring algorithms trained on past hiring decisions may perpetuate gender or racial discrimination if those patterns existed in the training data. 
    Addressing data bias requires careful curation of training datasets, with attention to representation across relevant demographic groups. It also demands ongoing monitoring of system outputs for signs of disparate impact. Some organizations now employ specialized bias audits before deploying AI systems in sensitive domains. Techniques such as balanced datasets, reweighting examples from underrepresented groups (see the sketch after this list), and adversarial debiasing can help mitigate these issues. The challenge lies in identifying which biases matter in a given context and determining appropriate fairness metrics.
  • Personal Bias: People have unconscious biases affecting how AI systems are designed and used. These biases influence everything from which problems get addressed with AI to how success is defined and measured. Diverse development teams help catch these issues earlier in the process. When teams include people with varied backgrounds, experiences, and perspectives, they're more likely to notice potential problems that might otherwise go unexamined. Organizations should establish processes for questioning assumptions throughout the AI development lifecycle. This might include structured reviews where team members explicitly consider who might be harmed or excluded by a system, or how the system might perform differently across various communities.
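
The reweighting technique mentioned above can be as simple as giving examples from underrepresented groups proportionally more influence during training. The sketch below assumes tabular data in pandas and uses the common inverse-frequency ("balanced") heuristic; the column names and data are made up for illustration.

```python
import pandas as pd

def inverse_frequency_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Per-example weights so each group contributes equally to training,
    regardless of how many rows it has in the dataset."""
    counts = df[group_col].value_counts()
    n_rows, n_groups = len(df), len(counts)
    return df[group_col].map(lambda g: n_rows / (n_groups * counts[g]))

# Hypothetical training set in which group B is heavily underrepresented.
train = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10,
                      "label": [1, 0] * 50})
train["sample_weight"] = inverse_frequency_weights(train, "group")
print(train.groupby("group")["sample_weight"].mean())
# Group B rows receive roughly 9x the weight of group A rows; most estimators
# accept these values through a sample_weight argument to fit().
```

Reweighting is only one lever: it does not correct labels that encode past discrimination, which is why audits of a system's outputs remain necessary.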

The Value of Openness

Openness builds confidence in AI systems.

Here's why it matters and how to achieve it:

  • Understandable Decision Processes: Openness involves explaining the logic behind AI outputs, making decision processes accessible and clear. This goes beyond technical transparency to focus on meaningful explanations that help users understand what factors influenced a particular outcome. For high-stakes decisions, such as loan approvals or medical recommendations, users deserve to know which factors weighed most heavily in the AI's analysis. This information allows them to contest errors or provide additional context that the system might have missed. Explanation methods vary based on the AI technique used. For rule-based systems, the logic can be directly presented. For complex neural networks, techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) can highlight which features most influenced a specific decision. A brief SHAP-based sketch appears after this list.

  • Data Origins: Ensuring openness in data sources by documenting where data comes from, how it was gathered, and any processing steps helps identify and reduce potential biases. This practice, sometimes called "data provenance," creates an audit trail that supports accountability. Data documentation should include information about sampling methods, inclusion and exclusion criteria, known limitations, and potential gaps. This context helps users understand when a system might be operating outside its area of competence. Some organizations now use "datasheets for datasets" or "model cards" that standardize this documentation, making it easier to compare different AI systems and assess their suitability for specific applications. An abridged example of such a record also appears after this list.
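
For the per-decision explanations described above, here is a minimal sketch using the open-source shap package with a scikit-learn model. The dataset and model are stand-ins for a real decision system, and exact APIs and output shapes can vary between shap versions, so treat this as a starting point rather than a recipe.

```python
# pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Train a simple model on a public dataset as a stand-in for a deployed system.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features, supporting
# per-decision explanations rather than only global feature importance.
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:1])  # explain a single prediction

# Rank the features that pushed this one prediction up or down the most.
contributions = sorted(zip(X.columns, explanation.values[0]),
                       key=lambda pair: abs(pair[1]), reverse=True)
for feature, value in contributions[:5]:
    print(f"{feature}: {value:+.2f}")
```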
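
And here is a rough sketch of the kind of information a model card or datasheet records. The fields and values are hypothetical; published templates such as "Model Cards for Model Reporting" and "Datasheets for Datasets" cover considerably more ground.

```python
# A hypothetical, heavily abridged model-card-style record.
model_card = {
    "model_name": "loan-prescreening-model-v2",  # hypothetical system
    "intended_use": "Pre-screening consumer loan applications for human review",
    "out_of_scope_uses": ["Final credit decisions without human sign-off"],
    "training_data": {
        "source": "Internal applications, 2019-2023",
        "sampling": "All completed applications; withdrawn applications excluded",
        "known_gaps": ["Few applicants under 21", "No data from newly entered markets"],
    },
    "evaluation": {
        "metrics": ["AUC", "approval-rate gap by demographic group"],
        "disaggregated_results_available": True,
    },
    "contact": "ml-governance@example.com",
}
```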

Practical Implementation Challenges

Moving from ethical principles to practice presents numerous challenges:

  • Competing Values: Ethical principles sometimes conflict with each other or with business objectives. For example, maximizing accuracy might require more comprehensive data collection, potentially compromising privacy. Similarly, making systems more explainable might reduce their performance or expose proprietary information. Organizations need frameworks for making these tradeoffs explicitly and consistently. This might involve establishing an AI ethics committee with diverse expertise, creating decision rubrics that weigh different considerations, or adopting formal impact assessment processes.

  • Global Variations: Ethical norms and legal requirements vary across cultures and jurisdictions. What's considered appropriate data use in one country might violate privacy laws in another. Similarly, values around autonomy, community welfare, and authority differ across cultures. Companies operating globally must navigate these differences while maintaining core ethical commitments. This might involve customizing certain aspects of AI systems for different markets while establishing minimum ethical standards that apply everywhere.

  • Measurement Challenges: Many ethical considerations resist simple quantification. How do you measure concepts like dignity, autonomy, or fairness? Without clear metrics, organizations may default to what's easily measurable, like accuracy or efficiency, potentially neglecting important ethical dimensions. Developing better assessment tools represents an active area of research. This includes creating standardized benchmarks for testing bias, establishing audit methodologies for algorithmic systems, and designing user studies that capture subjective experiences with AI. A small example of one such fairness check appears after this list.
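
As one example of the measurement problem, here is a minimal check of demographic parity, just one of many competing fairness metrics. The audit data and column names are illustrative assumptions, not drawn from any real system.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Gap between the highest and lowest positive-decision rates across groups.
    A value near 0 means the system selects all groups at similar rates."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: binary approvals recorded per applicant group.
log = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 62 + [0] * 38 + [1] * 45 + [0] * 55,
})
gap = demographic_parity_gap(log, "group", "approved")
print(f"approval-rate gap between groups: {gap:.2f}")  # 0.62 - 0.45 = 0.17
```

Choosing which metric to track is itself an ethical decision: demographic parity, equalized odds, and calibration can pull in different directions on the same data.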

Building Ethical AI Literacy

As AI becomes more pervasive, ethical AI literacy grows increasingly important for everyone, not just specialists:

  • For Developers: Technical professionals need training in ethical reasoning and awareness of the social contexts in which their systems will operate. This might include coursework in philosophy and social science alongside technical subjects.
  • For Organizations: Companies deploying AI need processes for ethical risk assessment and mitigation throughout the product lifecycle. This includes establishing clear governance structures, documentation requirements, and escalation paths for ethical concerns.
  • For Users: The general public needs enough understanding to make informed choices about AI systems they encounter. This includes knowing what questions to ask about data usage, understanding basic concepts like algorithmic bias, and recognizing when to seek human alternatives.

Educational institutions, professional organizations, and public agencies all have roles to play in building this literacy. Some companies now offer ethics training for their AI teams, while others publish accessible guides to help users understand how their AI systems work.

Why This Matters

AI ethics directly influences the technology that increasingly drives our economy, healthcare, education, and social connections. When ethical AI development becomes a priority, we create systems that respect human autonomy, protect privacy, and treat all people fairly. These choices determine whether AI will enhance human potential or entrench existing inequalities.

The good news is that thoughtful application of ethical principles creates better AI. Systems designed with openness earn greater trust and adoption. Diverse, representative data yields more accurate and fair results. And AI that respects human control complements rather than replaces human judgment.

By adopting ethical AI development now, we establish a foundation for technology that boosts human capabilities while reflecting our shared values. The AI future isn't fixed—we can shape it into a powerful force for progress that benefits everyone.

As we continue this conversation about AI ethics, let's remember that these aren't abstract philosophical questions but practical design choices that shape how technology integrates into our lives. By asking the right questions early and often, we can guide AI development toward outcomes that align with our deepest values and aspirations.

