SOCIO
A toolkit for engaging with complex more-than-human systems before and while designing and developing AI. The project involved working with AI researchers and managers and has been published on the isITethical? website.
Project Duration:
July 2022 – December 2022 (6 months)
My role:
01. Summary
02. Secondary Research

Kate Crawford and Vladan Joler (Anatomy of an AI System, 2018) decomposed complex AI systems into three main categories: material resources, human labor, and data. The map reveals a key difference between artificial intelligence systems and other forms of consumer technology: they rely on the ingestion, analysis, and optimization of vast amounts of human-generated images, texts, and videos. Once the machines ingest a certain amount of information, they start to label humans.
The value-added model is a common example of this kind of labeling, often used for evaluation, and promoted as effective because it relies on scoring systems. As Weapons of Math Destruction (O'Neil, 2017) recounts, some schools in Washington, D.C. implemented the IMPACT evaluation to weed out low-performing teachers. Nevertheless, some teachers who received high reviews from students and parents were dismissed by the algorithm simply because they did not pass the "evaluation of effectiveness" in teaching math and language. Data is not the only way to define a "good teacher". When we allow algorithms to make decisions for us, how can we make sure they do not marginalize innocent users?
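To make the failure mode concrete, here is a minimal, entirely hypothetical sketch (not the actual IMPACT model, whose scoring is far more elaborate): a single value-added score decides who is kept, and strong human reviews of the same teacher never enter the decision.

```python
# Hypothetical sketch: a hard score cutoff that ignores human ratings.
# Field names and the cutoff value are illustrative assumptions.
def evaluate(teachers, score_cutoff=0.5):
    """Keep only teachers whose value-added score clears the cutoff,
    regardless of how students and parents rated them."""
    return [t["name"] for t in teachers if t["value_added_score"] >= score_cutoff]

teachers = [
    {"name": "A", "value_added_score": 0.8, "human_rating": 3.1},
    {"name": "B", "value_added_score": 0.3, "human_rating": 4.9},  # well-loved, still cut
]

kept = evaluate(teachers)
# Teacher B is dismissed despite a 4.9/5 human rating: the algorithm
# only "sees" the one metric it was told to optimize.
```

The point of the sketch is that the harm is structural, not a bug: any signal left out of the scoring function simply cannot protect the people it describes.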
Although AI is difficult to understand, it is still important to drill down to the roots of how these systems work in order to create more transparent products.

Trustworthy AI guideline
To reduce the unintended consequences of AI systems and remain mindful of varied situations, it is necessary to consider ethical values as part of reflecting on a project. In the EU, companies are asked to prepare an ethics package, following the EU Trustworthy AI guidelines, before publishing an AI project. My target users are based in the UK; therefore, I took the ethics principles from the EU (digital-strategy.ec.europa.eu, 2021) and the frameworks from isITethical? as references and organized all the values into the four columns proposed by the AI Ethics Lab (2021): autonomy, non-maleficence, beneficence, and justice. However, the terms remain vague, and it is hard to generate discussion around them.
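The sorting exercise described above can be sketched as a simple lookup structure. The mapping below is my own illustrative grouping, not an official taxonomy from the EU guidelines or the AI Ethics Lab; only the four column names come from the source.

```python
# Illustrative grouping of guideline values into the AI Ethics Lab's
# four principle columns. The value lists are assumptions for the demo.
PRINCIPLES = {
    "autonomy": ["human agency", "informed consent", "privacy"],
    "non-maleficence": ["safety", "security", "avoiding harm"],
    "beneficence": ["societal well-being", "sustainability"],
    "justice": ["fairness", "non-discrimination", "accountability"],
}

def column_of(value):
    """Return the principle column a given value is filed under,
    or None if the team has not yet placed it."""
    for principle, values in PRINCIPLES.items():
        if value in values:
            return principle
    return None
```

Even this toy version surfaces the problem noted above: terms like "fairness" only acquire meaning once a team argues about which column they belong in and why.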
Ethical practices
- IDEO has developed a deck of cards to lead people to think about users and how AI can make things better for humans. They use the term "augmented intelligence" instead of "artificial intelligence" to emphasize the distinction: data science is a tool to help us build a smarter world, but humans should remain the architects.
- IBM claims that the "socio" part of AI systems matters because we rely on algorithms to make decisions, so the people who feed data to the machine are essential. Since research shows that more than 80% of AI engineers are young white men (Picchi, 2019), reflecting on the culture of the team and reducing the possibility of both unintentional and intentional bias is necessary.

03. Interview
From interviews with AI researchers and software engineers, I realized that the biggest problem in the AI industry is that we have not put ethics into practice enough. It is not that companies fail to recognize the importance of ethics in AI, but a question of how much they value it. A widespread problem is that team members have limited time on a project, so they assume ethics is not their domain and that someone else is supposed to take on the job. In addition, there is a lack of real cases showing that ethics can make an obvious positive impact on a project, and none of the companies had really created their own ethical framework for evaluation. The last point is the most important but also the most difficult: a common disciplinary language. An AI project team might include designers, engineers, AI researchers, project managers, and others, who all use quite different disciplinary languages, sometimes even different native languages. How can we create a space for them to have a conversation about ethics with each other without language barriers?

04. Findings
- Beena Ammanath (2022), head of the Deloitte AI Institute, argues that tech innovation is growing faster than lawmaking, so self-regulation becomes necessary.
- Ethics is a word that can sound intimidating, so some companies assume that talking about ethics might go against business interests. For example, in recent years Google fired top AI researchers who examined the downsides of its products (Vincent, 2021). The case shows that companies are not yet ready to face issues that do not align with business KPIs, and that they will be more cautious whenever ethical issues are raised.
- There is no right answer in an ethical conversation; instead, it is a method to help us reflect on team culture and a product's values.
- How can we provide tech workers with a unique way to "feel the AI machine"?
From the secondary research, I understood that:
- Most of the time, ethical conversation in the private sector becomes a one-time practice outsourced to external organizations.
- There is a lack of documentation and continuity for the conversation.
- We need a mechanism for teams to make trade-offs and figure out the priority of ethical values in their product.
- "Not my domain" should not be an excuse for skipping the ethical conversation.

It is believed that AI is the big wave for the coming generation. As a result, it should not be a slogan or an advertisement merely for getting customers' attention. Instead, it is necessary to think through the downsides AI could bring in order to reduce or predict potential harm. For the next stage, I will define the "How might we" questions and analyze where the service should be implemented.
HOW MIGHT WE
- How might we generate lighter and easier conversation via common disciplinary language?
- How might we break down ethical values and reduce the level of complexity?
- How can we provide a unique way of feeling the AI machine?
05. Service map
Service proposition
Value of the service
A successful service will be <span>creating a common disciplinary language for designers and engineers on the AI team to think broadly and deeply about the experiences users may have while using the product.</span> After going through the service, the team will have a way to document and develop further ethical discussion around users during the project, reducing and preventing harm from AI systems.
06. Audit Research




- Judgment Call is a game designed by Microsoft. It follows ethical frameworks to help developers reflect on a project before development begins.
- isITethical? uses the Values at Play process to design a game that generates discussion of ethics in AI. The game makes this process available to others and invites them to participate in and complete the discussion.
- The Tarot Cards of Tech are a series of cards designed for creators to consider the impact of technology more fully. They aim to provide an innovative way to shift from "move fast and break things" to "slow down and ask the right questions".
- The Thing from the Future is an imagination game that challenges players to collaboratively and competitively describe objects from a range of alternative futures.
Insights:
- Many games implement the idea of role play to let players immerse themselves in a situation and think outside of reality.
- The methodology of design fiction has been widely used to predict potential issues that might arise in the future.
- Imagination and creation are important parts of the process for triggering players to think creatively.
Inspiration:
- I want to develop a toolkit for my prototype, because most of the existing games take longer to play and their content is a bit overwhelming.
- Use varied materials to increase engagement, for example painting, acting, and making things.
- Ideate the prototypes around humans: who creates the algorithm, and who the product's users are. It is important to keep humans in the loop.
07. Methodologies & Theoretical frameworks
- Ethics through design
- Research through design fiction
- Value sensitive design
Through my design, I would like users to experience:
- Unlearning Oppression
- Value at Play
- Transdisciplinary collaboration
08. Ideation
- Common disciplinary language
- Role play
- Break down the ethical values
- Engagement
- Keep human in loop
- Collaboration

A playful, thought-provoking tool that helps AI teams facilitate ethical conversation before product development begins. It allows players to immerse themselves in the different situations users may face while interacting with an AI product. The aim of the game is to collaborate and create a unique ethical framework for the AI project.

1. Create project brief
In this step, all the participants talk through the project the team is currently working on and make sure everyone is on the same page. After answering the "what, when, how, and why" questions, participants can decide to drill down to the project's roots even further by answering a secondary "why" question.
2. Brainstorm stakeholder map
All the participants come together and brainstorm the potential users the product might have. Players need to consider direct, indirect, and marginalised users. This helps the team consider target audiences in a much broader way.


3. Get ready to create persona!
For the next step, we are going to create personas. In this stage, we need to make sure every player gets each of the listed tools.
4. Create personas
Draw the persona cards based on who you are (user group and personal characteristics), what your values are, and your experience of using the product.


5. Scale of the impact
What is the possible impact of each persona's experience? In this step, all the players come together and discuss the scale of the impact of their experiences.
6. Create team’s framework
It is believed that all team members should stand for the same values. In this stage, players come together and pick 3 to 5 values for the team as well as for their project.

Partners





- Value cards have too many limitations, because one term could have various definitions in different projects. It is better to keep the conversation open and give participants the freedom to define their own words.
- Test it with a group of people less inclined to think about ethical issues. Besides that, ask people how to frame the values and in which contexts they would use the toolkit.
- Use the result as something people can come back to at different points in their project, to double-check whether it still applies to them or whether they want to change it. They could also play the game at various stages.
- The toolkit is clear to understand, and the conversation around ethics is easy to engage with.
- I would like to introduce the toolkit into workshops for more people to try it out so let me know when it will be published!