Ethical Cultures Consulting Group



About us

Our mission is to help organizations create more ethical cultures

Our mission is to assist organizations in fostering ethical cultures by providing the tools, guidance, and support necessary to uphold integrity and ethical standards. We believe that a strong ethical foundation is crucial for long-term success and trust within any organization. By promoting transparency, accountability, and ethical behavior, we aim to create environments where employees feel valued and empowered to make ethical decisions. Our commitment is to help organizations navigate ethical challenges and build a culture that prioritizes doing the right thing, ultimately leading to a more positive and sustainable impact on society.

 

4 Lenses to Create a Shared Understanding of Responsible AI

 

Toward a shared understanding of responsible AI

 

 

 

Responsible behavior is generally promoted, whether in a broad society or a smaller social group, through one of two approaches: (1) demand compliance with rules, policies, laws, and other standards that have been set, and are maintained, by some authority; or (2) encourage individuals to recognize and ultimately advocate for shared values, thereby promoting individuals' efforts to sustain an ethical culture. Research (and, for many of us, personal experience) tells us that a combination of these two approaches is ultimately necessary. However, laws and other prescribed standards take time, and often precedent, to be established. This is especially true when it comes to encouraging responsible behavior in the use of technology.


The rate of technological development far outpaces the creation of formal structures to regulate its appropriate use. Essentially, we "can" do things before we know whether we "ought" to be doing them. Significant effort, in organizations across a variety of industries, is currently focused on promoting responsible use of AI through the creation of policy. Disregarding the role of guidelines and policies would certainly be a mistake. However, assuming that policies alone will keep us, collectively, moving toward a shared understanding of the responsible development and use of AI is also a mistake. A values-based approach to setting a culture of responsibility to guide the development and use of technology is critical.

 



Culture may be understood, most broadly, as a system of shared beliefs, values, and practices. AI is becoming an increasingly predominant artifact in our culture, causing disruption not only in how we complete tasks, but also in how we individually and collectively understand our social existence and live our core values. The tensions created within our belief and value systems are connected with the evidence of felt "AI anxiety" and "Fear of Becoming Obsolete" (FOBO). Indeed, grappling with any ethical issue can at times feel overwhelming and unsolvable, usually because a single perceived issue is, in fact, a tangle of multiple related issues. This is true of, and helps explain our struggle with, the problem of responsible AI. Breaking complex issues down into smaller, more manageable components enables the focused attention that drives efficient action toward resolution. We must apply this "detangling" technique as we seek to develop a shared understanding of responsible AI. Progress toward a culture of responsible AI is achieved by examining it through four key lenses:

 

1. AI and our humanity 

 

Perhaps the most ambiguous dimension of responsible AI, owing to the challenge of understanding not only AI but also the concept of humanity itself, this lens requires attention to issues of human flourishing. For example, questions we need to ask and answer include 'What is the purpose of human work, and how does AI change it?' and 'Will interactions with an LLM affect my ability to interact respectfully with my human colleagues?' While this line of questioning might trigger (un)welcome flashbacks to a college philosophy class, don't let its level of abstraction fool you. Our collective efforts to understand these issues answer the question of why we are (or are not) using AI in particular ways, which Simon Sinek has convinced many of us is the first step toward inspiring others to act.

 

2. AI and appropriate use 

 

Social norms, market pressures, and political mechanisms will continuously refine what constitutes appropriate use in our culture writ large, but each organization or social unit needs to clarify the unique challenges and opportunities in applying its core values to its specific development and use of AI. The appropriateness of using AI is guided by the ends we seek to achieve; thus, the tension between what we can do and what we ought to do will play out differently for different organizations, across different industries, and under the influence of different social environments. For example, it may be entirely appropriate, effective, and efficient to use AI to generate a written communication to a particular stakeholder group. However, if the purpose of the communication is to strengthen personal relationships with the recipients, the discovery that AI was relied upon to create the message could end up harming those existing relationships.

 

3. AI and responsibility for use

 

Trust is a cornerstone of our ability to form social relationships and engage in economic exchanges. My perception of the trustworthiness of another is influenced by a variety of objective factors, which scholars Frances Frei and Anne Morriss have skillfully explained in this HBR article. We can go further, however, and recognize that when I am assessing the trustworthiness of someone, that person's use of tools has an impact on my perceptions of their competence (would you trust a carpenter who doesn't know how to use a hammer?), of whether this 'other' cares about me (has an experience with an automated phone tree ever left you questioning an organization's commitment to customer service?), and of whether I am interacting with the 'real' you (has the potential use of filters to alter images posted on social media ever led you to question reality?). Whether someone uses AI as a tool, how it is used, and who takes responsibility for (not) using AI all factor into our ability to trust those with whom we interact. Without addressing issues such as transparency of use, our human relationships and the benefits of social cooperation are endangered.

 

4. AI and stakeholder impact 

 

Even with the intent to use AI appropriately, its use can have unintended consequences with significant ethical implications. Organizations are increasingly incorporating stakeholder analyses into their decision-making processes, and the growing presence of AI elevates the urgency of this technique. What types of value are being created for different stakeholders when you use AI? How does, or might, AI extract value from some stakeholders in order to create value for others? Will there be a deskilling of employees? How will access to AI tools impact the success or failure of distinct populations? What is the impact of AI on our natural environment? To properly address these questions, we must work together and ensure stakeholders are represented throughout the development and use of AI. This means that not only do we need to be engaging with each other, but we also need to create a culture, driven in part by formal processes, that normalizes discussions on the responsible use of AI.

 

Strengthen your organization’s culture


Ethical cultures are grounded in stable core values, but we must learn to apply these values in new ways as unique challenges and opportunities arise; AI is certainly the occasion for both. While we need to strive for a more global perspective on responsible AI, start building a shared understanding in your own organization. If you are in an executive role, you have the power to influence formal processes that strengthen your culture. Regardless of your title, you have the potential to serve as a role model. Start by relying on the four lenses described above to normalize asking these questions in discussions about AI. Remember: innovation is not contrary to ethics; in fact, ethical leadership often requires an entrepreneurial mindset. Innovation without a commitment to ethics, however, places humanity in a precarious position.

Create Shared Value

Contact us to find out how we can help your organization build a more ethical culture, creating shared value and accomplishing your mission.

Our team

Dr. Michelle Darnell has over 20 years of experience in higher education, teaching and researching topics related to ethical cultures. She has taught at institutions including Purdue University, the University of Florida, and Penn State University.

Michelle Darnell

Founder