This is a guest post from David Ryan Polgar, founder of All Tech Is Human.
The responsible technologist may be the new data scientist. But to make that happen, we need to focus on building the Responsible AI pipeline. That means aligning three things: industry needs, educational offerings, and individuals' interest in and understanding of the nascent Responsible AI field. These three parts need to operate in tandem, with education shaped by industry's Responsible AI needs and individuals gaining a clearer picture of the field and how to enter it.
There is a good reason why working in Responsible AI will grow in popularity. Just as the rise of the data scientist correlated with companies “wrestling with information that comes in varieties and volumes never encountered before,” companies are hiring responsible technologists now that they are wrestling with the thorny societal impacts of artificial intelligence. The growing influence of AI on our daily lives and on society at large necessitates a more thoughtful approach to its development and deployment.
New films like Coded Bias (which features Cathy O'Neil) are ringing the warning bell that our individual civil liberties, and our democracy, are increasingly defined by our AI technologies, and companies are aware that their responsibility is growing alongside the growing power of their technology. At the same time, it has become apparent to the tech industry that the next wave of responsible technologists cannot consist solely of technical problem-solvers. Although AI gets filed under the category of Tech, it cannot be divorced from its effects on power, equity, and fairness. That requires a new type of thinker.
A recently launched AI Ethics Job Board shows how companies both large and small are responding with roles that were unheard of just a few years ago:
- Responsible Innovation Manager, Civil Rights (Facebook)
- Bias and Fairness Engineer, Applied AI (OpenAI)
- Senior Director, Responsible AI Engineering (Microsoft)
- Data Research Scientist, Algorithmic Responsibility (Spotify)
- Director, ML Ethics Transparency and Accountability (Twitter)
- Manager, AI Risks and Ethics, Data Risk Analytics (Deloitte)
For a lot of individuals, especially the Generation Zers who are now entering the workforce and looking to align their careers with their values, the roles listed above sound incredibly attractive. I know this because I typically get a few CVs sent my way each week from a diverse range of individuals looking to break into the Responsible AI space. Likewise, I often hear from industry players looking to hire for these roles. These two groups often don’t know how to find each other, which is an impediment to growing the Responsible AI pipeline and something that needs to change.
As the founder of All Tech Is Human, an organization building the responsible tech pipeline, I operate as connective tissue of sorts between the next generation of responsible technologists, the industry hiring for these roles, and the universities responding to the need with new tech ethics curricula, programs, and cross-sector exposure.
What I’ve learned from toggling between industry, academia, and the individuals looking to be part of the Responsible AI pipeline is that these three groups are often not in sync. One reason we are seeing an uptick in projects trying to “operationalize AI ethics” is that a good deal of translation is often needed to match the tremendous academic work on AI ethics with the specific needs of industry. Likewise, a successful Responsible AI pipeline depends on a greater flow of information among industry, academia, and interested individuals, one that perpetually informs and adjusts each group. All three are dependent on one another.
So, how can we make this happen? I like to think of the Responsible AI pipeline in terms of stages. Today, we are in the toddler stage, with some basic building blocks that still need to be put in place before we can progress. Here are three areas to focus on:
1. Nomenclature
For individuals to be attracted to a position (as happened with the rise of the data scientist), there need to be agreed-upon titles. We are still in the see-what-sticks phase, but over the next year we will begin to notice which job titles and job descriptions (often originating at major companies) are being replicated and which are falling out of favor.
2. Multiple Discipline Exposure and Cross-Sector Experience
Individuals working in the Responsible AI space often have, as Yoav Schlesinger (Principal, Ethical AI Practice at Salesforce) puts it, a “delightfully non-linear career.” Cross-sector experience, along with exposure beyond the traditional computer science degree, is increasingly seen as an advantage in the Responsible AI space. How can education evolve outside the confines of a traditional major?
3. Building Entry Level Responsible AI Roles and Offering Initial Experience to Interested Parties
Right now, many Responsible AI roles are quite senior. To grow the pipeline, entry-level positions will need to be created, and college students, grad students, and other interested parties will need to understand the pathways into them. There is often a catch-22 with these entry-level positions: candidates struggle to find the initial experience in the Responsible AI space they need to build up their resumes. New organizations such as the Algorithmic Justice League and the AI Now Institute, along with stalwarts in the space such as Data & Society and Mozilla, are planting the seeds for the Responsible AI field with fellowships and other forms of early experience.
AI is a powerful technology that alters the human experience. That is a big deal, so it is incredibly important that the AI we develop and deploy is built responsibly, with consideration for how it impacts our individual civil liberties and the communities it touches. As the work grows in importance, it becomes equally important that the right people are involved in the process. By chipping away at the hurdles slowing down the Responsible AI pipeline, we can start building the tech future that we deserve.
AI is only as good as the people behind it.