How to build an AI ethics team at your organization
So you’re working on AI systems and are interested in Responsible AI? Have you run into challenges in making this a reality? Many articles mention transitioning from principles to practice but fall flat when you try to implement their advice. So what’s missing? Here are some ideas that I think will help you take the first step in making Responsible AI a reality.
Get leadership buy-in
Yes, this is important! Why? Well, different units within your organization have different incentives and goals that they are working towards. Achieving Responsible AI in practice requires coordination across different units. The leadership team can help provide a unifying mandate to bring together different units to achieve this goal.
Moreover, leadership can act as a central point of dissemination for the “why” behind pursuing Responsible AI at your organization. They have the authority to create policy and drive change at scale, which makes on-the-ground work easier and more effective. Especially in cases where you face reluctance from colleagues, a clear message from leadership provides a North Star for everyone.
Finally, leadership plays an essential role in providing you with necessary resources and “air-cover” to experiment with tools and techniques as we (the research and practitioner community) figure out practical solutions to some very complex challenges in the field of AI ethics.
Set up feedback mechanisms
As a complementary point to the above recommendation, we should also make it easy for on-the-ground practitioners to provide feedback on the tools, techniques, and processes that work well and those that don’t. This is critical when you have a large organization with many teams working on very different product and service offerings. The guidelines and mandates coming top-down can suffer from a lack of context and nuance, which only gets clearer closer to the place of operation.
Effective feedback mechanisms have two qualities: feedback is easy to file, and there is transparency about which pieces of feedback have been acted upon. Many organizations fail on the second aspect, without which the entire exercise of soliciting feedback becomes fruitless. It also disincentivizes employees from sharing feedback in the first place and erodes their trust in the process. Sharing the results of feedback that has been acted upon (often visible through changes in the tools, techniques, and processes) and, more importantly, explaining why other feedback has not been acted upon will earn higher levels of trust from employees across the organization.
Empower people to make decisions
Often, those closest to the problems and building solutions to address those problems have highly contextual insights. We can leverage these insights by empowering those people to make decisions. This empowerment is important because it ensures that people feel a greater sense of ownership in the solutions that they are building. They become more capable of solving problems that really matter to their users and customers.
A hierarchical organization can help align different teams towards a common vision. Still, when that is complemented with a bottom-up approach of generating solutions and empowering staff to act on those solutions, we increase the likelihood of achieving our Responsible AI goals.
A practical approach to building up this empowerment is to start by allocating small amounts of direct responsibility for amending product and service offerings, and to increase the scope of that responsibility over time as people demonstrate aptitude and skill for it. Moreover, proactively supplementing this on-the-job experience with training programs that promote decision-making around Responsible AI will make this approach successful.
Align with organization values
One of the core sources of dissonance occurs when AI ethics is framed in a way that is disconnected from the organization’s mission and values. Drawing a clear connection between them helps boost uptake and lets you leverage other evaluative instruments (like performance reviews) and policies within the organization.
It also helps establish Responsible AI as a key function of every person’s job role in the organization. That, in turn, enables organic integration of these responsibilities into existing job roles and makes it easier to create new roles tasked with implementing AI ethics within the organization.
Make RAI the norm rather than the exception
Just as Microsoft has invested years of effort in tooling and processes to make accessibility a first-class citizen in their products and services, making Responsible AI the norm rather than the exception should be our North Star.
If, through these investments, we can make implementing these ideas the default and easy action, then not only will we get higher traction, but we will also disincentivize developers from doing anything other than the “right” thing. Yes, that last part is a bit aspirational, but it isn’t unrealistic!
I’d love to hear from you if any of these ideas resonated with you and if you have tried any of them within your organizations.