The Concept of Human-Centricity in Sociological Studies of Artificial Intelligence

Document Type: Research Article

Authors

1 Department of Anthropology, Faculty of Humanities and Social Sciences, University of Mazandaran, Babolsar, Iran

2 Department of Sociology, Faculty of Social Sciences, University of Tehran, Tehran, Iran

Abstract

Today, we are witnessing a wide range of concerns about the endangered position of the human being in relation to technologies based on artificial intelligence. The field of human-centric artificial intelligence emerged in response to these concerns, reflecting the need for human involvement in the development of such systems. The present study is an attempt to encourage and strengthen the convergence of two potential research forces, sociology and artificial intelligence, in order to establish the concept of human-centricity.
This investigation was conducted as a systematic review of the existing literature. Sampling followed the snowball strategy, and the initial set of articles was selected in accordance with the PRISMA guidelines. The data were analyzed using the Double Diamond design framework.
The findings were categorized into eight areas: human-centric artificial intelligence; problems of human-centric AI in the production of social science and sociology; interdisciplinary research on the concept of human-centricity in sociological studies of AI; the position of human-centric AI in the methodology of the social sciences; principles and regulations governing the explainability of AI; the significance of the human-centricity concept in how actors understand and perceive AI; AI and human interaction; and specific fields of study oriented toward human-centric AI.
The systematic review identified two problems related to artificial intelligence and three problems related to humans (in the general sense). The coexistence of these problems, together with the differing perspectives the reviewed articles adopt toward them, has given rise to 22 distinct approaches to addressing them.
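The sampling and screening procedure described in the abstract (snowball expansion of a seed set, with PRISMA-style eligibility screening applied to each round of candidates) can be sketched in outline. The citation graph, seed set, and inclusion test below are hypothetical stand-ins for the actual corpus and eligibility criteria, intended only to illustrate the iterative logic.

```python
def snowball_sample(seeds, neighbours, include, max_rounds=3):
    """Iteratively expand a seed set of papers through their citation
    links, keeping only records that pass the inclusion criterion."""
    included = {p for p in seeds if include(p)}
    frontier = set(included)
    for _ in range(max_rounds):
        # Candidate records: everything cited by or citing the frontier.
        candidates = set()
        for paper in frontier:
            candidates.update(neighbours.get(paper, ()))
        # Screening step: drop already-seen and non-eligible records.
        new = {p for p in candidates - included if include(p)}
        if not new:
            break
        included |= new
        frontier = new
    return included

# Toy citation graph (paper -> linked papers); "x" fails screening.
graph = {
    "seed": ["a", "b"],
    "a": ["c", "x"],
    "b": ["c"],
    "c": [],
    "x": [],
}
corpus = snowball_sample(["seed"], graph, include=lambda p: p != "x")
```

Each round plays the role of one screening pass: candidates found through citation links are deduplicated against the growing corpus and filtered before the next expansion, so the process terminates once a round yields no new eligible records.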

Keywords


References
  • Akhtar, J. (2020). An interactive multi-agent reasoning model for sentiment analysis: A case for computational semiotics. Artificial Intelligence Review, 53, 3987–4004. https://link.springer.com/article/10.1007/s10462-019-09785-6
  • Ali, S., DiPaola, D., Lee, I., Sindato, V., Kim, G., Blumofe, R., & Breazeal, C. (2021). Children as creators, thinkers and citizens in an AI-driven future. Computers and Education: Artificial Intelligence, 2, 100040. https://doi.org/10.1016/j.caeai.2021.100040
  • Anantrasirichai, N., & Bull, D. (2022). Artificial intelligence in the creative industries: A review. Artificial Intelligence Review, 55(1), 589-656. https://link.springer.com/article/10.1007/s10462-021-10039-7
  • Ayobi, A., Stawarz, K., Katz, D., Marshall, P., Yamagata, T., Santos-Rodriguez, R., ... & O'Kane, A. A. (2021). Co-designing personal health? Multidisciplinary benefits and challenges in informing diabetes self-care technologies. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-26.
  • Babaeian, F., Safdari Ranjbar, M., & Hakim, A. (2023). Investigating the role of artificial intelligence in the public policy cycle; Metasynthesis approach. Journal of Improvement Management, 17(2), 115-150. https://doi.org/10.22034/jmi.2023.396945.2957 (In Persian)
  • Balcombe, L., & De Leo, D. (2022, February). Human-computer interaction in digital mental health. In Informatics (Vol. 9, No. 1, p. 14). MDPI. https://doi.org/10.3390/informatics9010014
  • Bawa, A., Khadpe, P., Joshi, P., Bali, K., & Choudhury, M. (2020). Do Multilingual Users Prefer Chat-bots that Code-mix? Let's Nudge and Find Out! Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), 1-23. http://dx.doi.org/10.1145/3392846
  • Belfield, H. (2020, February). Activism by the AI community: Analysing recent achievements and future prospects. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 15-21). http://dx.doi.org/10.1145/3375627.3375814
  • Bloomfield, B. P. (1988). Expert systems and human knowledge: A view from the sociology of science. AI & Society, 2, 17-29.
  • Bolboli Qadikolaei, S., & Parsania, H. (2023). A systematic review of the ethical implications of using artificial intelligence in digital technologies and its relationship with the ethics of flourishing. Socio-Cultural Strategy, 12(3), 771-798. https://doi.org/10.22034/scs.2022.160772 (In Persian)
  • Bourdieu, P. (1975). The specificity of the scientific field and the social conditions of the progress of reason. Social Science Information, 14(6), 19-47. http://ssi.sagepub.com
  • British Design Council. (n.d.). The Double Diamond. Design Council. https://www.designcouncil.org.uk/our-resources/the-double-diamond/
  • Chambers-Jones, C. (2021). AI, big data, quantum computing, and financial exclusion: Tempering enthusiasm and offering a human-centric approach to policy. FinTech, Artificial Intelligence and the Law, 193-210.
  • Wohlin, C. (2014, May). Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (pp. 1-10). https://doi.org/10.1145/2601248.2601268
  • Collins, R. (1994, June). Why the social sciences won't become high-consensus, rapid-discovery science. In Sociological Forum (Vol. 9, pp. 155-177). Kluwer Academic Publishers-Plenum Publishers. https://doi.org/10.1007/BF01476360
  • Demir, M., McNeese, N. J., & Cooke, N. J. (2019). The evolution of human-autonomy teams in remotely piloted aircraft systems operations. Frontiers in Communication, 4, 50. https://doi.org/10.3389/fcomm.2019.00050
  • Dignum, F., & Dignum, V. (2020). How to center AI on humans. In NeHuAI 2020: 1st International Workshop on New Foundations for Human-Centered AI, Santiago de Compostela, Spain, September 4, 2020 (pp. 59-62).
  • Dikmen, M., & Burns, C. (2022). The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending. International Journal of Human-Computer Studies, 162, 102792. http://dx.doi.org/10.1016/j.ijhcs.2022.102792
  • Esposito, E. (2017). Artificial communication? The production of contingency by algorithms. Zeitschrift für Soziologie, 46(4), 249-265. https://doi.org/10.1515/zfsoz-2017-1014
  • Eynon, R., & Young, E. (2021). Methodology, legend, and rhetoric: The constructions of AI by academia, industry, and policy groups for lifelong learning. Science, Technology, & Human Values, 46(1), 166-191. http://dx.doi.org/10.1177/0162243920906475
  • Floridi, L., Holweg, M., Taddeo, M., Amaya, J., Mökander, J., & Wen, Y. (2022). CapAI-A procedure for conducting conformity assessment of AI systems in line with the EU artificial intelligence act. Available at SSRN 4064091. https://doi.org/10.2139/ssrn.4064091
  • Ford, K. M., Hayes, P. J., Glymour, C., & Allen, J. (2015). Cognitive Orthoses: Toward Human-Centered AI. AI Magazine, 36(4), 5-8. https://doi.org/10.1609/aimag.v36i4.2629
  • Gratch, J., Mao, W., & Marsella, S. (2006). Modeling social emotions and social attributions (pp. 219-251). Cambridge: Cambridge University Press.
  • Gu, H., Huang, J., Hung, L., & Chen, X. A. (2021). Lessons learned from designing an AI-enabled diagnosis tool for pathologists. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-25. https://doi.org/10.1145/3449084
  • Gustafsson, D. (2019). Analyzing the double diamond design process through research and implementation. MA thesis, Aalto University.
  • Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70-86. https://doi.org/10.1177/1461444819858691
  • Hansen, S. S. (2022). Public AI imaginaries: How the debate on artificial intelligence was covered in Danish newspapers and magazines 1956–2021. Nordicom Review, 43(1), 56-78. https://doi.org/10.2478/nor-2022-0004
  • Hickman, E., & Petrin, M. (2021). Trustworthy AI and corporate governance: The EU's ethics guidelines for trustworthy artificial intelligence from a company law perspective. European Business Organization Law Review, 22, 593-625. https://doi.org/10.1007/s40804-021-00224-0
  • Miyashita, H. (2021). Human-centric data protection laws and policies: A lesson from Japan. Computer Law & Security Review, 40, 105487. http://dx.doi.org/10.1016/j.clsr.2020.105487
  • How, M. L., Chan, Y. J., & Cheah, S. M. (2020). Predictive insights for improving the resilience of global food security using artificial intelligence. Sustainability, 12(15), 6272. https://doi.org/10.3390/su12156272
  • Huang, Y., Fei, T., Kwan, M. P., Kang, Y., Li, J., Li, Y., ... & Bian, M. (2020). GIS-based emotional computing: A review of quantitative approaches to measure the emotion layer of human–environment relationships. ISPRS International Journal of Geo-Information, 9(9), 551. http://dx.doi.org/10.3390/ijgi9090551
  • Huhtamo, E. (2020). The self-driving car: A media machine for posthumans? Artnodes, 26, 1-14. http://dx.doi.org/10.7238/a.v0i26.3374
  • Jørgensen, R. F. (2023). Data and rights in the digital welfare state: The case of Denmark. Information, Communication & Society, 26(1), 123-138. http://dx.doi.org/10.1080/1369118X.2021.1934069
  • Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S. G., ... & Shestakofsky, B. (2021). Toward a sociology of artificial intelligence: A call for research on inequalities and structural change. Socius, 7, 2378023121999581. http://dx.doi.org/10.1177/2378023121999581
  • Kaasinen, E., Anttila, A. H., Heikkilä, P., Laarni, J., Koskinen, H., & Väätänen, A. (2022). Smooth and resilient human–machine teamwork as an Industry 5.0 design challenge. Sustainability, 14(5), 2773. https://doi.org/10.3390/su14052773
  • Kloos, C. D., Dimitriadis, Y., Hernández-Leo, D., Alario-Hoyos, C., Martínez-Monés, A., Santos, P., ... & Safont, L. V. (2022, March). H2O Learn - hybrid and human-oriented learning: Trustworthy and human-centered learning analytics (TaHCLA) for hybrid education. In 2022 IEEE Global Engineering Education Conference (EDUCON) (pp. 94-101). IEEE. http://dx.doi.org/10.1109/EDUCON52537.2022.9766770
  • Kolesnichenko, O., Mazelis, L., Sotnik, A., Yakovleva, D., Amelkin, S., Grigorevsky, I., & Kolesnichenko, Y. (2021). Sociological modeling of smart city with the implementation of UN sustainable development goals. Sustainability Science, 16(2), 581-599. https://doi.org/10.1007/s11625-020-00889-5
  • Lee, F., & Björklund Larsen, L. (2019). How should we theorize algorithms? Five ideal types in analyzing algorithmic normativities. Big Data & Society, 6(2), 2053951719867349. http://dx.doi.org/10.1177/2053951719867349
  • Lee, J. A., Hilty, R., & Liu, K. C. (Eds.). (2021). Artificial intelligence and intellectual property. Oxford University Press.
  • Leonard, P., & Tyers, R. (2023). Engineering the revolution? Imagining the role of new digital technologies in infrastructure work futures. New Technology, Work and Employment, 38(2), 291-310. http://dx.doi.org/10.1111/ntwe.12226
  • Loi, M., Ferrario, A., & Viganò, E. (2021). Transparency as design publicity: Explaining and justifying inscrutable algorithms. Ethics and Information Technology, 23(3), 253-263. https://doi.org/10.1007/s10676-020-09564-w
  • Marres, N. (2020). Co-existence or displacement: Do street trials of intelligent vehicles test society? The British Journal of Sociology, 71(3), 537-555. http://dx.doi.org/10.1111/1468-4446.12730
  • McGregor, S. (2022). AI incident database. https://incidentdatabase.ai
  • Mühlhoff, R. (2020). Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning. New Media & Society, 22(10), 1868-1884. http://dx.doi.org/10.1177/1461444819885334
  • Napoli, A. (2020). The human-centered AI and the EC policies: Risks & chances. In Technological and Digital Risk: Research Issues (pp. 67-92). Peter Lang.
  • Natale, S., & Ballatore, A. (2020). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence: The International Journal of Research into New Media Technologies, 26(1), 3–18. https://doi.org/10.1177/1354856517715164
  • Nielsen, J. (1993). Usability engineering. Imprint: Morgan Kaufmann. Elsevier Inc.
  • Olsson, T., & Väänänen, K. (2021). How does AI challenge design practice? Interactions, 28(4), 62-64. https://doi.org/10.1145/3467479
  • Oravec, J. A. (2022). The emergence of 'truth machines'? Artificial intelligence approaches to lie detection. Ethics and Information Technology, 24, 6. http://dx.doi.org/10.1007/s10676-022-09621-6
  • Perucica, N., & Andjelkovic, K. (2022). Is the future of AI sustainable? A case study of the European Union. Transforming Government: People, Process and Policy, 16(3), 347-358. https://doi.org/10.1108/TG-06-2021-0106
  • Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. https://doi.org/10.1186/s41039-017-0062-8
  • Rajabi, M., & Nasrollahi, M. (2023). The cultural impact of artificial intelligence development on social media in Iran. Journal of Iranian Cultural Research, 16(2), 95-125. https://doi.org/10.22035/jicr.2023.3178.3481 (In Persian)
  • Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. http://dx.doi.org/10.1002/hbe2.117
  • Russell, S., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Harlow: Pearson Education Limited.
  • Sabanovic, S. (2014). Inventing Japan's ‘robotics culture': The repeated assembly of science, technology, and culture in social robotics. Social Studies of Science, 44(3), 342–367. http://dx.doi.org/10.1177/0306312713509704
  • Schmid, S., Riebe, T., & Reuter, C. (2022). Dual-use and trustworthy? A mixed methods analysis of AI diffusion between civilian and defense R&D. Science and Engineering Ethics, 28(2), 12. http://dx.doi.org/10.1007/s11948-022-00364-7
  • Shen, Y. (2019). Create synergies and inspire collaborations around the development of intelligent infrastructure for human-centered communities. Journal of the Association for Information Science and Technology, 70(6), 596-606. https://doi.org/10.1002/asi.24150
  • Shneiderman, B. (2016). The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences, 113(48), 13538-13540. https://doi.org/10.1073/pnas.1618211113
  • Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504. https://doi.org/10.1080/10447318.2020.1741118
  • Shneiderman, B. (2021). Human-centered AI. Issues in Science and Technology, 56–61.
  • Sonetti, G., Naboni, E., & Brown, M. (2018). Exploring the potentials of ICT tools for human-centric regenerative design. Sustainability, 10(4), 1217. https://doi.org/10.3390/su10041217
  • Srnicek, N. C. (2016). Platform capitalism. Cambridge: Polity.
  • Steffen, D. (2021). Taking the next step towards convergence of design and HCI: Theories, principles, methods. In HCI International 2021 - Posters: 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I (pp. 67-74). Springer International Publishing.
  • Suchman, L. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press.
  • Suchman, L. A. (2007). Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
  • Tiersen, F., Batey, P., Harrison, M. J., Naar, L., Serban, A. I., Daniels, S. J., & Calvo, R. A. (2021). Smart home sensing and monitoring in households with dementia: User-centered design approach. JMIR Aging, 4(3), e27047. https://doi.org/10.2196/27047
  • van Berkel, N., Tag, B., Goncalves, J., & Hosio, S. (2022). Human-centred artificial intelligence: A contextual morality perspective. Behaviour & Information Technology, 41(3), 502-518. https://doi.org/10.1080/0144929X.2020.1818828
  • Wajcman, J. (2017). Automation: Is it really different this time? The British Journal of Sociology, 68(1), 119–127. http://dx.doi.org/10.1111/1468-4446.12239
  • Wirth, R., & Hipp, J. (2000, April). CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining (Vol. 1, pp. 29-39).
  • Wolfe, A. (1993). The Human Difference: Animals, Computers, and the Necessity of Social Science. Berkeley, CA: University of California Press.
  • Woolgar, S. (1985). Why not a sociology of machines? The case of sociology and artificial intelligence. Sociology, 19(4), 557–572.
  • Xu, W. (2019). Toward human-centered AI. Interactions, 26(4), 42–46. https://doi.org/10.1145/3328485
  • Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence2, 100008. https://doi.org/10.1016/j.caeai.2021.100008
  • Sah, Y. J. (2022). Anthropomorphism in human-centered AI: Determinants and consequences of applying human knowledge to AI agents. In Human-Centered Artificial Intelligence (pp. 103-116). Academic Press. http://dx.doi.org/10.1016/B978-0-323-85648-5.00013-X
  • Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177/0162243915608948
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.