A Remedy to the Unfair Use of AI in Educational Settings

Johan Lundin, Marie Utterberg Modén, Tiina Leino Lindell, Gerhard Fischer
pp. 62–78, DOI: https://doi.org/10.55612/s-5002-059-002

Submitted on 15 Sep 2023 - Accepted on 23 Jan 2024

Interaction Design and Architecture(s) IxD&A Journal
Issue N. 59, Winter 2023

Abstract

This paper addresses concerns related to the ethical implications of artificial intelligence (AI) and its impact on human values, with a particular focus on fair outcomes. Existing design frameworks and regulations for ensuring fairness in AI are too general to be practical. Instead, we advocate for understanding fairness as situated in practice, shaped by practitioners’ values, and for giving stakeholders control in the situations where fairness is at stake. To accomplish this, the paper conceptually explores a potential synergy between Cultural-Historical Activity Theory (CHAT) and Meta-Design. Combining the two makes it possible to transform human activities to deal with challenges, in this case those emerging from adaptive AI tools. While professional software developers are essential for making significant changes to a tool and providing solutions, users’ involvement is equally important: users are the domain experts when it comes to determining practical solutions and aligning structures with their work practices. CHAT contributes its emphasis on context, history, and mediation by tools, enabling a critical analysis of activity systems that reveals underlying contradictions and identifies areas where improvements or innovations are necessary. Meta-Design provides design concepts and perspectives that aim to empower participants, allowing them to actively shape tool design to fit their specific local needs and their evolving conceptions of fairness at use time. Together, these offer an approach to empowering people and promoting fairer AI design.

Keywords: Fairness, Artificial intelligence, Education, Teachers, Educational technology, Cultural-historical activity theory, Meta-design

CRediT author statement: Johan Lundin: Conceptualization, Writing - Original Draft, Writing - Review & Editing, Supervision, Project administration, Funding acquisition. Marie Utterberg Modén: Conceptualization, Writing - Original Draft, Writing - Review & Editing, Investigation. Tiina Leino Lindell: Conceptualization, Writing - Original Draft, Writing - Review & Editing, Investigation. Gerhard Fischer: Conceptualization, Writing - Original Draft, Writing - Review & Editing

Cite this article as:
Lundin J., Utterberg Modén M., Leino Lindell T., Fischer G.: A Remedy to the Unfair Use of AI in Educational Settings, Interaction Design & Architecture(s) – IxD&A Journal, N.59, 2023, pp. 62–78, DOI: https://doi.org/10.55612/s-5002-059-002

References:

1. T. Susnjak, “ChatGPT: The end of online exam integrity?,” 2022, doi: 10.48550/ARXIV.2212.09292.
2. I. Tuomi, The Impact of Artificial Intelligence on Learning, Teaching, and Education: Policies for the Future. Luxembourg: Publications Office of the European Union, 2018.
3. Y. Engeström, “Activity theory as a framework for analyzing and redesigning work,” Ergonomics, vol. 43, no. 7, pp. 960–974, Jul. 2000, doi: 10.1080/001401300409143.
4. G. Fischer and E. Giaccardi, “Meta-Design: A framework for the future of end user development,” in End User Development, H. Lieberman, F. Paterno, and V. Wulf, Eds. Dordrecht, The Netherlands: Kluwer Academic Publishers, 2006, pp. 427–457.
5. S. Sun, Y. Zhai, B. Shen, and Y. Chen, “Newspaper coverage of artificial intelligence: A perspective of emerging technologies,” Telematics and Informatics, vol. 53, p. 101433, Oct. 2020, doi: 10.1016/j.tele.2020.101433.
6. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in 2022 ACM Conference on Fairness, Accountability, and Transparency. Seoul, Republic of Korea: ACM, Jun. 2022, pp. 173–184, doi: 10.1145/3531146.3533083.
7. B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, and L. Floridi, “The ethics of algorithms: Mapping the debate,” Big Data & Society, vol. 3, no. 2, p. 205395171667967, Dec. 2016, doi: 10.1177/2053951716679679.
8. B. Shneiderman, Human-Centered AI. Oxford: Oxford University Press, 2022.
9. H. Felzmann, E. Fosch-Villaronga, C. Lutz, and A. Tamò-Larrieux, “Towards transparency by design for artificial intelligence,” Science and Engineering Ethics, vol. 26, no. 6, pp. 3333–3361, Dec. 2020, doi: 10.1007/s11948-020-00276-4.
10. A. Birhane, “Algorithmic injustice: A relational ethics approach,” Patterns, vol. 2, no. 2, p. 100205, Feb. 2021, doi: 10.1016/j.patter.2021.100205.
11. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and abstraction in sociotechnical systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency. Atlanta, GA, USA: ACM, Jan. 2019, pp. 59–68, doi: 10.1145/3287560.3287598.
12. G. I. Zekos, Political, Economic and Legal Effects of Artificial Intelligence: Governance, Digital Economy and Society, in Contributions to Political Science. Cham: Springer International Publishing, 2022, doi: 10.1007/978-3-030-94736-1.
13. M. Dolata, S. Feuerriegel, and G. Schwabe, “A sociotechnical view of algorithmic fairness,” Information Systems Journal, vol. 32, no. 4, pp. 754–818, Jul. 2022, doi: 10.1111/isj.12370.
14. C. Haas, “The price of fairness – A framework to explore trade-offs in algorithmic fairness,” in 40th International Conference on Information Systems, ICIS 2019. Munich, Germany: Association for Information Systems, 2019.
15. A. Aler Tubella, F. Barsotti, R. G. Koçer, and J. A. Mendez, “Ethical implications of fairness interventions: What might be hidden behind engineering choices?,” Ethics and Information Technology, vol. 24, no. 1, p. 12, Mar. 2022, doi: 10.1007/s10676-022-09636-z.
16. N. A. Smuha, “From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory competition for artificial intelligence,” Law, Innovation and Technology, vol. 13, no. 1, pp. 57–84, Jan. 2021, doi: 10.1080/17579961.2021.1898300.
17. European Commission (EC), “Proposal for a regulation of the European Parliament and of the council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts,” 2022. [Online]. Available: https://artificialintelligenceact.eu/the-act/
18. V. Charisi et al., “Artificial intelligence and the rights of the child: Towards an integrated agenda for research and policy,” Joint Research Centre, Seville, JRC Research Reports JRC127564, 2022. [Online]. Available: https://publications.jrc.ec.europa.eu/repository/handle/JRC127564
19. “AI and child rights policy,” UNICEF, New York, USA, Workshop report, Jun. 2019. [Online]. Available: https://www.unicef.org/globalinsight/media/661/file
20. A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics guidelines,” Nature Machine Intelligence, vol. 1, no. 9, pp. 389–399, Sep. 2019, doi: 10.1038/s42256-019-0088-2.
21. M. Utterberg Modén, “Teaching with digital mathematics textbooks – Activity theoretical studies of data-driven technology in classroom practices,” Doctoral Dissertation, University of Gothenburg, 2021. [Online]. Available: https://gupea.ub.gu.se/bitstream/handle/2077/69472/gupea_2077_69472_1.pdf?sequence=1&isAllowed=y
22. W. Holmes and I. Tuomi, “State of the art and practice in AI in education,” European Journal of Education, vol. 57, no. 4, pp. 542–570, Dec. 2022, doi: 10.1111/ejed.12533.
23. B. Berendt, A. Littlejohn, and M. Blakemore, “AI in education: Learner choice and fundamental rights,” Learning, Media and Technology, vol. 45, no. 3, pp. 312–324, Jul. 2020, doi: 10.1080/17439884.2020.1786399.
24. “The Education Act,” Swedish Government Offices, SFS 2010:800, 2010. [Online]. Available: https://www.riksdagen.se/sv/dokumentlagar/dokument/svensk-forfattningssamling/skollag-2010800_sfs-2010-800
25. M. Utterberg Modén, M. Tallvid, J. Lundin, and B. Lindström, “Intelligent tutoring systems: Why teachers abandoned a technology aimed at automating teaching processes,” in 54th Hawaii International Conference on System Sciences, 2021, pp. 1538–1547.
26. B. A. Nardi, “Studying context: A comparison of activity theory, situated action models, and distributed cognition,” in Context and Consciousness: Activity Theory and Human-Computer Interaction, B. A. Nardi, Ed. Cambridge, MA: MIT Press, 1996, pp. 69–102.
27. Y. Engeström, Learning, Working and Imagining: Twelve Studies in Activity Theory. Helsinki: Orienta-Konsultit Oy, 1990.
28. L. S. Vygotsky, Mind in Society: The Development of Higher Psychological Processes. Cambridge: Harvard University Press, 1980, doi: 10.2307/j.ctvjf9vz4.
29. Y. Engeström, Learning by Expanding: An Activity-Theoretical Approach to Developmental Research. Helsinki: Orienta-Konsultit Oy, 1987.
30. A. Sannino, Y. Engeström, and M. Lemos, “Formative interventions for expansive learning and transformative agency,” Journal of the Learning Sciences, vol. 25, no. 4, pp. 599–633, Oct. 2016, doi: 10.1080/10508406.2016.1204547.
31. Y. Engeström and A. Sannino, “Studies of expansive learning: Foundations, findings and future challenges,” Educational Research Review, vol. 5, no. 1, pp. 1–24, Jan. 2010, doi: 10.1016/j.edurev.2009.12.002.
32. Y. Engeström, “Expansive learning at work: Toward an activity theoretical reconceptualization,” Journal of Education and Work, vol. 14, no. 1, pp. 133–156, Feb. 2001, doi: 10.1080/13639080020028747.
33. G. Fischer, J. Lundin, and O. Lindberg, “Rethinking and reinventing learning, education, and collaboration in the digital age — from creating technologies to transforming cultures,” International Journal of Information and Learning Technology, vol. 37, no. 5, pp. 241–252, 2020, doi: 10.1108/IJILT-04-2020-0051.
34. J. V. Wertsch, Mind as Action. New York: Oxford University Press, 1998.
35. A. Sannino and Y. Engeström, “Co-generation of societally impactful knowledge in Change Laboratories,” Management Learning, vol. 48, no. 1, pp. 80–96, Feb. 2017, doi: 10.1177/1350507616671285.
36. Y. Engeström, “Expansive visibilization of work: An activity-theoretical perspective,” Computer Supported Cooperative Work (CSCW), vol. 8, no. 1–2, pp. 63–93, Mar. 1999, doi: 10.1023/A:1008648532192.
37. Y. Engeström, “From design experiments to formative interventions,” Theory & Psychology, vol. 21, no. 5, pp. 598–628, Oct. 2011, doi: 10.1177/0959354311419252.
38. Y. Engeström, “Activity theory and learning at work,” in Tätigkeit – Aneignung – Bildung, U. Deinet and C. Reutlinger, Eds. Wiesbaden: Springer Fachmedien Wiesbaden, 2014, pp. 67–96, doi: 10.1007/978-3-658-02120-7_3.
39. Y. Engeström and A. Sannino, “From mediated actions to heterogenous coalitions: Four generations of activity-theoretical studies of work and learning,” Mind, Culture, and Activity, vol. 28, no. 1, pp. 4–23, Jan. 2021, doi: 10.1080/10749039.2020.1806328.
40. T. Leino Lindell, “Teachers’ challenges and school digitalization: Exploring how teachers learn about technology integration to meet local teaching needs,” Doctoral Dissertation, KTH Royal Institute of Technology, Stockholm. [Online]. Available: https://www.diva-portal.org/smash/get/diva2:1690709/FULLTEXT01.pdf
41. D. Nussbaumer, “An overview of cultural historical activity theory (CHAT) use in classroom research 2000 to 2009,” Educational Review, vol. 64, no. 1, pp. 37–55, Feb. 2012, doi: 10.1080/00131911.2011.553947.
42. A. Sannino, “Teachers’ talk of experiencing: Conflict, resistance and agency,” Teaching and Teacher Education, vol. 26, no. 4, pp. 838–844, May 2010, doi: 10.1016/j.tate.2009.10.021.
43. R. A. Allen, G. R. T. White, C. E. Clement, P. Alexander, and A. Samuel, “Servants and masters: An activity theory investigation of human-AI roles in the performance of work,” Strategic Change, vol. 31, no. 6, pp. 581–590, Nov. 2022, doi: 10.1002/jsc.2530.
44. S. Karanasios, “Toward a unified view of technology and activity: The contribution of activity theory to information systems research,” Information Technology & People, vol. 31, no. 1, pp. 134–155, Feb. 2018, doi: 10.1108/ITP-04-2016-0074.
45. T. Tran, R. Valecha, and H. R. Rao, “Machine and human roles for mitigation of misinformation harms during crises: An activity theory conceptualization and validation,” International Journal of Information Management, vol. 70, p. 102627, Jun. 2023, doi: 10.1016/j.ijinfomgt.2023.102627.
46. G. Fischer, E. Giaccardi, Y. Ye, A. G. Sutcliffe, and N. Mehandjiev, “Meta-Design: A manifesto for end-user development,” Communications of the ACM, vol. 47, no. 9, pp. 33–37, Sep. 2004, doi: 10.1145/1015864.1015884.
47. G. Fischer and T. Herrmann, “Meta-Design: Transforming and enriching the design and use of socio-technical systems,” in Designing Socially Embedded Technologies in the Real-World, 1st ed., D. Randall, K. Schmidt, and V. Wulf, Eds., Computer Supported Cooperative Work. London: Springer, 2015, pp. 79–109, doi: 10.1007/978-1-4471-6720-4.
48. G. Fischer, “End-user development: Empowering stakeholders with artificial intelligence, meta-design, and cultures of participation,” in End-User Development, D. Fogli, D. Tetteroo, B. R. Barricelli, S. Borsci, P. Markopoulos, and G. A. Papadopoulos, Eds., Lecture Notes in Computer Science, vol. 12724. Cham: Springer International Publishing, 2021, pp. 3–16, doi: 10.1007/978-3-030-79840-6_1.
49. T. Anderson and J. Shattuck, “Design-based research: A decade of progress in education research?,” Educational Researcher, vol. 41, no. 1, pp. 16–25, Jan. 2012, doi: 10.3102/0013189X11428813.
50. S. Barab and K. Squire, “Design-based research: Putting a stake in the ground,” in Design-Based Research, S. A. Barab and K. Squire, Eds. Psychology Press, 2016, pp. 1–14, doi: 10.4324/9780203764565.
51. F. Wang and M. J. Hannafin, “Design-based research and technology-enhanced learning environments,” Educational Technology Research and Development, vol. 53, no. 4, pp. 5–23, Dec. 2005, doi: 10.1007/BF02504682.
52. G. Fischer, “Adaptive and adaptable systems: Differentiating and integrating AI and EUD,” in End-User Development, L. D. Spano, A. Schmidt, C. Santoro, and S. Stumpf, Eds., Lecture Notes in Computer Science, vol. 13917. Cham: Springer Nature Switzerland, 2023, pp. 3–18, doi: 10.1007/978-3-031-34433-6_1.
53. G. Fischer, “User modeling in human-computer interaction,” User Modeling and User-Adapted Interaction, vol. 11, no. 1/2, pp. 65–86, 2001, doi: 10.1023/A:1011145532042.
54. L. Munn, “The uselessness of AI ethics,” AI Ethics, vol. 3, no. 3, pp. 869–877, Aug. 2023, doi: 10.1007/s43681-022-00209-w.
55. R. S. Baker and A. Hawn, “Algorithmic bias in education,” International Journal of Artificial Intelligence in Education, vol. 32, no. 4, pp. 1052–1092, Dec. 2022, doi: 10.1007/s40593-021-00285-9.
56. N. Selwyn, Education and Technology: Key Issues and Debates, 3rd ed. London: Bloomsbury Academic, 2022.
57. I. Roll and R. Wylie, “Evolution and revolution in artificial intelligence in education,” International Journal of Artificial Intelligence in Education, vol. 26, no. 2, pp. 582–599, Jun. 2016, doi: 10.1007/s40593-016-0110-3.
58. D. Schiff, “Out of the laboratory and into the classroom: The future of artificial intelligence in education,” AI & Society, vol. 36, no. 1, pp. 331–348, Mar. 2021, doi: 10.1007/s00146-020-01033-8.
