Winter 2023-2024
Alessandro Pagano, Anders Mørch, Vita Santa Barletta, Renate Andersen
(https://doi.org/10.55612/s-5002-059-001psi)
In recent years, the field of artificial intelligence (AI) has garnered significant attention and renewed interest. Its emergence on a global scale has been described using different metaphors, ranging from the new crude oil waiting to be refined [1] to the key to the future of education. The United Nations Educational, Scientific and Cultural Organization (UNESCO) states that the connection between AI and education involves three main areas: 1) learning with AI, 2) learning about AI, and 3) preparing for AI [2], which is the order in which they appear to students in higher education today. The goal of human-centered AI (HCAI) aligns with the first two of these areas by putting human users at the center, emphasizing user experience design, measuring human performance, and empowering people [3]. Another essential feature of HCAI aligns with the third area by emphasizing how HCAI facilitates human self-efficacy, creativity, distributed responsibility, work performance, and social participation [3].
In this special issue, we contrast and explore different aspects of HCAI through the two themes that we called for: 1) AI for humans and humans for AI, and 2) the relationship between AI and intelligence augmentation (IA). By AI for humans, we mean tools that enhance human learning and experiences through adaptive and adaptable learning environments, decision support systems, personal assistants, smart technology integration, and automation of tedious work tasks. Humans for AI, on the other hand, implies that humans play a crucial role in adapting AI systems and training machine learning programs, ranging from prompting and integrating new training data to refining AI algorithms and, finally, empirically studying how AI systems perform in practice (in education, workplaces, and leisure activities). The human-centered approach is rooted in agency, inclusion, equity, responsible usage, and ethical considerations to avoid widening technological gaps.
For the second theme, we distinguish AI from IA. AI’s long-term aim is to replace human beings with intelligent systems, whereas IA focuses on empowering and augmenting human capabilities [4]. AI seems to have the power to replace humans by undertaking intelligent tasks that were once limited to humans, but another view of AI is that it should facilitate intelligence and carry out mundane and routine work tasks rather than replace humans [5]. This is the approach we take in this special issue, which we refer to as IA or assistive AI. Engelbart defined IA as a combination of human and automated capabilities that augment rather than replace the human intellect [6]. The concept of IA takes advantage of machine learning techniques to assist humans rather than rendering them obsolete [7]. For instance, with the introduction of ChatGPT (based on GPT-3.5), we have observed a rise in the number of technological tools aiming to integrate generative AI, such as Bard in Google Search and Copilot in Microsoft Office.
Central to IA systems and tools is a stronger focus on human–computer interaction (HCI) and, in particular, symbiotic human–computer interfaces that take advantage of research in both AI and HCI [8]. In the intersection of AI and HCI, the distribution of control is one of the key features to balance, for example, in what situations the computer should be in control and in what situations humans should be in control. In Table 1, we compare key features of AI and IA as they appear in some currently profiled AI-integrated (software and hardware) systems seen from the perspective of human interaction and use contexts [9]. These features relate to the articles in the special issue as well as some specific domains of comparison to show IA’s broader appeal.
The central goal of human empowerment with AI is to enhance the experience and quality of life for human beings in their daily lives, work, leisure, and learning by fostering creativity, meaningful work, and healthy environments for individuals while also strengthening group effectiveness and community building at higher levels of participation. In the domains of newspaper production and journalism, Opdahl and colleagues [10] highlighted the growing importance of quality journalism and the current challenges it faces. The authors explored the potential of AI and machine learning in these domains, with a particular focus on increasing trust at all stages of the news production cycle. The authors hope that high-quality journalism can be achieved to counterbalance the spread of disinformation caused by news produced by AI without human involvement. In general, a variety of issues and ethical problems need to be addressed for the scale-up of AI from individual use to group interaction to community involvement for it to be robust (e.g., privacy intrusion, knowledge and competence loss, lack of control, groupthink, filter bubbles, massive unemployment, spread of false information and infectious ideas, and autonomous weapons, among others). However, it is not a matter of choosing between “AI for Humans” or “Humans for AI”; instead, it is about finding a balance to ensure that AI systems are used responsibly, safely, and ethically. HCAI has implications for a wide range of application domains involving both novices and experts. Therefore, in this issue, we address AI in education and educational technology, AI and healthcare, cybersecurity, and AI in design tools.
Table 1: Comparative analysis of AI and IA applications in specific domains (AI requires computers in control, whereas IA is assistive and requires humans in control).
Digital tools | Artificial Intelligence (full automation) | Intelligence Augmentation (assistive tools based on AI and human collaboration)
Robots | Fully autonomous robots operate independently and execute tasks without human intervention, often in controlled environments (e.g., factories without human workers) | Collaborative robots work alongside humans, respond to human input, learn from human actions, and enhance human capabilities. They are flexible and programmable (e.g., they support end-user development)
Educational Technology | AI-powered educational tools support the production of content (text, images, videos, etc.), adapt content to individual needs, and provide real-time feedback (e.g., intelligent tutoring systems) | Critiquing systems enhance learning by offering personalized guidance in ill-structured domains, such as design, and by assessing student essays in specific domains with adaptive, content-specific feedback
Healthcare Decision Support | AI in healthcare supports diagnostics, treatment planning, and drug discovery by analyzing medical data and providing insights (e.g., creating synthetic medical images for training) | Clinical decision support systems assist healthcare professionals in making informed decisions by integrating patient data, medical knowledge, and best practices (e.g., IA-enhanced image analysis for abnormality detection)
Smart Home Systems | AI-driven smart home devices automate tasks based on preset user preferences, adapt to routines, and optimize energy consumption | EUD (End-User Development)-enabled smart home devices provide user-friendly interfaces, support adaptation through rule-based systems, and offer personalized suggestions for optimized home management
Cybersecurity | AI in cybersecurity detects and responds to threats autonomously, analyzes patterns to identify anomalies, and enhances the efficiency of threat detection and mitigation | IA-enhanced cybersecurity systems utilize human expertise in analyzing complex threats, provide decision support, and enhance incident response through collaboration with security professionals
AI Design Tools | AI design tools employ AI in the design process, automating tasks like prototyping, generating design suggestions, and enhancing the creative workflow | IA-enhanced design tools support designers by providing intelligent suggestions, allowing collaborative design of new artifacts, using EUD to modify design tools, and adapting generative feedback to design trends
Newspaper Production and Journalism | AI generates content autonomously, potentially leading to the creation of fake news. It uses natural language processing and generative AI | Collaborative journalism integrates AI with human authors to improve article quality through collaborative creation for more accurate and insightful reporting
AI in Education and Educational Technology
The role of AI in education and learning with AI-empowered educational technology has received increasing attention over the years, despite slow adoption in schools even while it was a hot topic in academic circles. Learning, innovation, and knowledge are often considered the foundations of post-industrial economies and knowledge-based societies, which are now fueled by AI. Although human learning and machine learning have many commonalities, they are also fundamentally different in their scientific foundations, i.e., natural sciences vs. design sciences [11,12]. Beyond the simple, though vague and controversial, idea of automating teacher tasks [13], it has also been suggested that AI’s transformational effect can augment human cognition in learning through human–computer symbiotic relations [8,14,15].
There is a lot of potential in applying AI in education: both teachers and students can benefit from using AI to enhance the learning process and generate ideas as part of their working tools [16]. Teachers can use generative AI (GAI) tools to develop teaching materials and lectures, and students can use them to research topic areas, work on text assignments, create code, and solve math problems. However, we should be aware of the downsides of implementing AI in educational institutions: AI can never replace a human being as a teacher, and GAI tools do not align with contemporary learning theories that promote student agency during knowledge construction [17].
AI in education has the potential to transform learning and teaching methods. An overview of the recent literature suggests that a fruitful avenue for exploring the future of AI in education lies in the synergy between human teachers and AI technologies; the majority argue that human teachers possess unique qualities that make them irreplaceable. Tahiru [18] discussed the challenges and benefits of AI in education, emphasizing the need for further research in this area. Joshi [19] examined perceptions of AI in education and found that teachers and students generally have a positive view of its use. Fischer [20] supports the perspective of “AI for humans,” focusing on human-centered design and IA. In general, the literature suggests that AI can assist and enhance teaching and learning, but human teachers remain essential to mediating the two activities by providing critical thinking, creativity, collaboration, and social-emotional competencies that AI cannot replicate [21].
AI-empowered educational technology has succeeded in building knowledge-based systems that can model specific application domains and guide or adapt to individual students’ preferences and learning paths. Today, combining knowledge-based and data-driven approaches brings together the best of two types of AI tools (top-down and bottom-up). Data-driven AI provides important basic information-processing functions, such as pattern recognition. For example, the EssayCritic system is trained for a specific application domain to provide automated feedback that stimulates learners to write better essays in this domain according to the criteria described in the school curriculum [22]. This aligns with educational goals that focus on the gradual development of domain knowledge and with instructional theories of scaffolding, such as the zone of proximal development [23,24]. Consequently, in the educational context, the development of AI can be seen as the joint development of human and artificial knowledge. This suggests that the future of educational technology with AI should be understood from the perspective of increasing the combined human and machine cognitive capabilities.
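To make the critiquing idea concrete, a deliberately simplified sketch is shown below; the themes, keywords, and feedback messages are invented for illustration and do not reflect EssayCritic’s actual domain model, which maps essay content to curriculum criteria in a much richer way.

```python
# Simplified illustration of a knowledge-based essay critic.
# Theme names and keywords are invented for this example.
CURRICULUM_THEMES = {
    "causes": {"because", "reason", "led to", "due to"},
    "evidence": {"study", "data", "research", "survey"},
    "counterargument": {"however", "critics", "on the other hand"},
}

def critique(essay: str) -> dict:
    """Return per-theme coverage plus suggestions for missing themes."""
    text = essay.lower()
    covered = {
        theme: any(kw in text for kw in keywords)
        for theme, keywords in CURRICULUM_THEMES.items()
    }
    suggestions = [
        f"Consider addressing the theme '{theme}'."
        for theme, ok in covered.items() if not ok
    ]
    return {"covered": covered, "suggestions": suggestions}

report = critique("The change happened because of new data from a survey.")
```

The critic is purely knowledge-based (top-down): all “intelligence” resides in the hand-crafted theme model, which is what makes the feedback domain-specific and curriculum-aligned.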
Healthcare Decision Support
GAI can be used in a broad range of activities in medicine and healthcare, from creating synthetic medical images for training to generating patient-specific treatment plans and recommendations and organizing administrative activities [25]. Autonomous AI systems in healthcare have made considerable progress in diagnostics, notably in imaging analysis for disease diagnosis using detailed body scans and specialized X-ray imaging. These tools support treatment planning using machine learning, especially in oncology, by analyzing patient data against the medical literature to formulate personalized treatment plans [26]. In drug discovery, AI accelerates the process by predicting potential drug compounds and streamlining patient recruitment for clinical trials [27]. Additionally, AI optimizes hospital operations by enhancing resource allocation and patient flow management and automating medical coding and billing, thus increasing financial efficiency [28].
On the other hand, assistive AI complements human expertise in healthcare, supporting professionals in decision-making and patient care. Clinical decision support systems [29,30] offer preliminary diagnostic suggestions that improve accuracy and efficiency in patient care. IA tools assist biomedical engineers in interpreting complex laboratory results and enhancing surgical precision in clinical work, especially in minimally invasive procedures. IA aids in surgical planning by enabling surgeons to visualize and plan complex surgeries effectively. AI transforms patient engagement and education by offering personalized advice and promoting better health outcomes. In radiology, assistive AI identifies anomalies in imaging scans for further analysis by radiologists and helps convert two-dimensional (2D) images into three-dimensional (3D) models, thereby improving diagnostic accuracy [31,32]. In mental health, AI aids therapists by providing cognitive behavioral therapy techniques and supports individuals through stress and mood monitoring apps, contributing to mental well-being [33,34].
Cybersecurity
As we venture into this new age of AI and collaborative intelligence, we acknowledge the myriad challenges and ethical dilemmas [35]. Issues such as privacy intrusions, potential unemployment, knowledge and competence loss, lack of control, and the ethical implications of autonomous weapons demand attention [36]. It is imperative that we navigate this landscape with a conscientious and ethical lens to ensure the responsible deployment of AI technologies.
AI plays a crucial role in enhancing cybersecurity capabilities. AI systems are becoming increasingly interconnected with cybersecurity due to advancements in hardware and software [37]. Cybersecurity involves implementing diverse measures, methods, and strategies to safeguard systems against threats and vulnerabilities, ensuring the efficient delivery of accurate services to users [38]. In recent years, there has been a surge in endeavors to create AI-powered solutions that cater to a diverse array of cybersecurity applications. This trend is driven, in part, by organizations’ increasing recognition of the pivotal role that AI plays in addressing and mitigating cyber threats [39]. By examining historical data and identifying patterns associated with malicious activities, AI algorithms can predict the nature and timing of future attacks. This proactive approach allows security teams to act before an attack occurs. The result is a more robust security infrastructure capable of preempting cyber threats.
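The pattern-based idea can be illustrated with a minimal sketch; the event counts and threshold below are invented, and production systems use far richer features and models than a single deviation score.

```python
import statistics

def find_anomalies(history, threshold=2.0):
    """Return indices of counts deviating from the mean by more than
    `threshold` (population) standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []
    return [
        i for i, count in enumerate(history)
        if abs(count - mean) / stdev > threshold
    ]

# Hypothetical daily failed-login counts; day 5 shows a burst that a
# baseline-driven detector would surface for analyst review.
failed_logins = [12, 9, 11, 10, 13, 240, 11, 12]
alerts = find_anomalies(failed_logins)
```

The same baseline-versus-deviation pattern underlies many signature-free detection techniques; what real systems add is multivariate features, temporal context, and learned rather than fixed thresholds.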
Integrating human input is decisive in unlocking the full potential of AI, which is pertinent during the training of machine learning programs. For instance, reinforcement learning from human feedback (RLHF) is a machine-learning technique that leverages human feedback to train and enhance the accuracy of an AI model [40]. Therefore, AI-based models can be trained to detect and respond to potential threats more effectively by using human feedback to learn from real-world examples through RLHF.
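The feedback loop can be illustrated with a drastically simplified sketch; real RLHF trains a reward model from human preferences and fine-tunes the base model with reinforcement learning, whereas here analyst verdicts merely nudge the keyword weights of a toy alert scorer.

```python
# Toy human-in-the-loop refinement, loosely inspired by RLHF: analyst
# verdicts on flagged alerts adjust a simple scorer's weights.
# Keywords, weights, and the update rule are invented for illustration.
weights = {"invoice": 0.4, "urgent": 0.4, "password": 0.6}

def score(alert_words):
    """Sum the weights of known suspicious keywords in an alert."""
    return sum(weights.get(w, 0.0) for w in alert_words)

def apply_feedback(alert_words, is_threat, lr=0.1):
    """Nudge weights up for confirmed threats, down for false positives."""
    delta = lr if is_threat else -lr
    for w in alert_words:
        if w in weights:
            weights[w] += delta

before = score(["urgent", "invoice"])
apply_feedback(["urgent", "invoice"], is_threat=False)  # analyst verdict
after = score(["urgent", "invoice"])
```

After the analyst marks the alert a false positive, the same alert scores lower, illustrating how repeated human feedback steers the model toward analyst judgment over time.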
Combining AI and human insights can provide several benefits in cybersecurity. First, it can lead to enhanced threat detection accuracy. Traditional cybersecurity solutions rely on predefined rules, but these rules can quickly become outdated, leading to a high rate of false positives and false negatives. By using RLHF, the model can learn from human feedback and continuously adapt to new threats as they emerge in the training data, resulting in improved threat detection accuracy. Second, by combining AI (rules) and RLHF, teams can better identify potential threats while also significantly reducing the organization’s risk posture. Third, AI and human insights can lead to improved security awareness: when users are trained and encouraged to report suspicious activity, they can provide valuable information on new and emerging threats that may not be detected by traditional security systems. Fourth, end users can help validate the cybersecurity strategy in real time by reporting suspicious activity. For example, a company’s security team can immediately review phishing emails reported by users, enabling them to learn and adapt to new threats more quickly. RLHF can dramatically reduce the time to detect and respond to threats when leveraged by a distributed team across different departments and time zones.
AI Design Tools
Designers use tools for creating visual artifacts (e.g., design diagrams, visual images, and architectural drawings) rather than writing texts, thus leveraging GAI tools other than those based on large language models [41]. Examples of such tools are computer-aided design (CAD) software and specialized tools such as shape grammars [42]. CAD software enables designers to create detailed 2D and 3D models of objects or spaces. These tools empower designers to explore and iterate on visual ideas by utilizing GAI techniques tailored to the specific demands of visual creation, pushing the boundaries of their creative process [43]. An example is Autodesk Dreamcatcher, which applies algorithms to automatically generate design options to solve design problems through goals and constraints. Designers define their design constraints, and the software generates a variety of optimized design alternatives. This helps designers quickly explore a wide range of possibilities.
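The goals-and-constraints workflow can be sketched with a minimal generative loop; the parameters, constraints, and objective below are invented for illustration and are not Dreamcatcher’s actual algorithm, which uses far more sophisticated generative and optimization techniques.

```python
import random

# Minimal generative-design sketch: sample table-top candidates, keep
# those satisfying the designer's constraints, and rank the survivors
# by an objective (here, minimal material use). All numbers invented.
random.seed(42)  # reproducible example

def generate_alternatives(n=200):
    candidates = [
        {"width": random.uniform(40, 200), "depth": random.uniform(40, 120)}
        for _ in range(n)
    ]
    # Constraints: usable area of at least 4800 cm^2, depth <= width.
    feasible = [
        c for c in candidates
        if c["width"] * c["depth"] >= 4800 and c["depth"] <= c["width"]
    ]
    # Objective: minimize material (surface area) among feasible designs.
    return sorted(feasible, key=lambda c: c["width"] * c["depth"])

designs = generate_alternatives()
```

The designer’s role in this workflow is to state goals and constraints; the system’s role is to explore the space of alternatives, which is why such tools support rapid exploration rather than replacing design judgment.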
AI design tools have transformed the design process by utilizing GAI and deep machine learning. They have been integrated into designers’ workbenches since the early days of AI through rule-based expert systems such as shape grammars [44]. The rules of a shape grammar generate designs, but the approach is perhaps best known not for creating new architectural designs but for appreciating existing ones, as in the Palladian grammar used to reconstruct a famous Venetian villa (Villa Malcontenta) through 69 rules of classical art applied across eight stages [45]. Today’s GAI systems can provide designers with a range of innovative features that enhance the overall creative workflow and automate parts of the design process [46]. These design tools can not only generate design recommendations but also automate tasks, such as prototyping, generating design suggestions, and supporting the exploration of design alternatives, using AI transformation algorithms [47].
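The rule-application idea behind shape grammars can be sketched symbolically; the rules and symbols below are invented, and real shape grammars, such as the Palladian grammar, rewrite geometric shapes rather than strings.

```python
# Symbolic sketch of a shape grammar: each rule rewrites one symbol in
# the current design description; repeated application derives a design.
# Rules and vocabulary are invented for this illustration.
RULES = [
    ("FACADE", "BAY BAY BAY"),  # subdivide the facade into three bays
    ("BAY", "window"),          # fill each bay with a window
]

def derive(design: str, rules, max_steps=20) -> str:
    """Apply rewrite rules until no rule matches (or the step limit)."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in design:
                design = design.replace(lhs, rhs, 1)  # rewrite one occurrence
                break
        else:
            return design  # no rule applied: derivation is complete
    return design

result = derive("FACADE", RULES)
```

Each intermediate string corresponds to a stage of the derivation, mirroring how the Palladian grammar develops a villa plan in stages by successive rule applications.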
On the other hand, IA-enhanced design tools take AI-driven design even further by integrating intelligent assistance at every step. These tools not only provide suggestions but also actively collaborate with designers in the creation process [48]. With IA-enhanced design tools, designers can engage in collaborative design and leverage the power of end-user development to allow designers to modify the design tools themselves, tailoring them to their specific needs and preferences [49]. Additionally, these tools incorporate generative feedback, utilizing their adaptive capabilities to provide insights and recommendations aligned with emerging design trends and designers’ preferences [50]. By combining AI intelligence and the interactivity of IA, IA-enhanced design tools empower designers with enhanced creativity, efficiency, and flexibility in their design endeavors.
Overview of articles
We summarize the five articles in the special issue on AI for humans and humans for AI below:
A Research Framework Focused on AI and Humans instead of AI versus Humans, by Gerhard Fischer
This is a stimulus paper for the special issue [51], as it proposes a research framework contrasting two perspectives on AI: “AI versus humans,” which emphasizes AI replacing human abilities, and “AI and humans,” which focuses on AI augmenting and empowering human capabilities. The article argues for the latter approach, advocating for AI as a tool for human enhancement rather than replacement. It discusses various AI concepts, including artificial general intelligence, AI for specific purposes, and HCAI. This research focuses on the interplay between AI and human intelligence, advocating for a collaborative approach to AI development and application. The work extends the framework we have presented here and emphasizes the importance of ethical considerations, the need for AI to complement human intelligence, and the potential pitfalls of over-reliance on AI. It also explores the role of AI in education, particularly in learning environments, and the impact of technologies such as ChatGPT. The paper concludes with reflections on the future implications of AI in enhancing quality of life and addressing societal challenges.
A Triple Challenge: Students’ Identification, Interpretation, and Use of Individualized Automated Feedback in Learning to Write English as a Foreign Language, by Ingeborg Krange, Meerita Segaran, Siv Gamlem, Synnøve Moltudal, and Irina Engeness
This article [52] reports an empirical study of educational technology that explores the effectiveness of an AI-based automated essay assessment tool in enhancing eighth-grade students’ writing skills in English as a foreign language. The study, conducted in a naturalistic school setting, involved 56 students who received automatic feedback on their essays, which were then discussed with their teachers and peers. The analysis focused on the improvements made to the essays based on feedback, the interaction between students and teachers, and the frequency of feedback utilization. The findings suggest that automated essay assessment can be beneficial for student learning when supplemented with teacher guidance; they echo and elaborate on the findings reported above, which posit that a fruitful avenue for the future of AI in education is to explore the synergy between human teachers and AI technologies. This research contributes to understanding the role of AI in educational settings, particularly in fostering students’ writing skills and assessment literacy.
A Remedy to the Unfair Use of AI in Educational Settings, by Johan Lundin, Marie Utterberg Modén, Tiina Leino Lindell, and Gerhard Fischer
The article [53] discusses the need for AI tools in education to be tailored to local contexts and fairness values rather than using a one-size-fits-all approach. It suggests combining activity theory and meta-design to allow users to participate in the design and transformation of AI tools. This paper uses education as an example to highlight the practical application of this framework. It identifies two main issues with using AI tools in education: a conflict between abstract and situated fairness and a conflict between self-adapting and human-mediated adaptable tools. This paper proposes that meta-design can address these issues by enabling users to modify AI tools to suit their specific needs and local contexts, thereby promoting fairness and agency. The paper concludes that merging activity theory developed for adult education with workplace learning and meta-design provides an interesting perspective for understanding the evolving human–AI relationship and a potential method for empowering people and promoting fairer and more contextually appropriate AI design in workplace and educational settings.
College Students-in-the-Loop for Their Mental Health: A Case of AI and Humans Working Together to Support Well-Being, by Vania Neris, Vanessa Alves, Franco Garcia, and Conrado Saud
The article [54] presents a system that uses AI and human-in-the-loop approaches to support the mental health of college students with depression. This system collects data from sensors, social networks, self-reports, and diaries and uses machine learning models to identify symptoms of depression and possible depressive profiles. Based on the detected symptoms, the system provides intervention content in the form of dialogs by a chatbot, which are evaluated by the students themselves. The paper describes the design and implementation of the system, as well as the evaluation and discussion of the results from a clinical study with 20 students who used the solution for 3 weeks. The article highlights the importance of involving users in the validation and feedback of the data and the dialogs, as well as the challenges and limitations of the project. The study demonstrates that user participation enhances data relevance for AI-based predictions and interventions, highlighting the importance of human-centered AI in mental health applications.
Integrating Artificial Intelligence into Interior Design Education: A Case Study on Creating Office Spaces for “Avrupa Yakası” TV Series Characters, by M. Uğur Kahraman, Yaren Şekerci and Müge Develiler
This article [55] presents a case study integrating AI tools into interior design education, focusing on a warm-up assignment in which students used AI design tools to create office spaces for characters from a popular Turkish TV series, “Avrupa Yakası.” The aim was to introduce students to AI applications and prepare them for an office design project. This paper reviews the literature on end user–centered design, office design, and AI in education and describes the material and method of the assignment. This article analyzes 12 office designs created by students for four characters from the TV series using prompts and AI-generated images. The paper discusses the findings in terms of the AI’s responses to the prompts, the system’s knowledge and limitations, and the students’ creativity and challenges in adapting the results to their projects. The article concludes that AI’s responses are influenced by various factors, such as design styles, thematic elements, and contextual cues present in the prompts. AI’s strengths are its ability to match certain keywords and concepts with corresponding design elements from its database, but it lacks sufficient detailed information and overlooks certain aspects of real-world functionality. The article highlights AI’s potential for assisting in design inspiration, emphasizes the need for continuous AI database development, and contributes to understanding AI’s role in providing intelligent design suggestions to students in design education and its evolving impact on creative processes.
References:
1. Palmer, M. (2006). Data is the new oil. ANA marketing maestros, 3.
2. United Nations Educational, Scientific and Cultural Organization (UNESCO) (2022). Artificial intelligence in education. https://en.unesco.org/artificial-intelligence/education
3. Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495-504. https://doi.org/10.1080/10447318.2020.1741118
4. Barricelli, B. B., Fischer, G., Fogli, D., Morch, A., Piccinno, A., & Valtolina, S. (2022). CoPDA 2022: Cultures of Participation in the Digital Age: AI for Humans or Humans for AI? In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022). Association for Computing Machinery, New York, NY, USA, Article 90, 1-3. https://doi.org/10.1145/3531073.3535262
5. Hassani, H., Silva E.S., Unger S., TajMazinani, M., Mac Feely, S. (2020). Artificial Intelligence (AI) or Intelligence Augmentation (IA): What Is the Future? AI, 1(2), 143-155. https://doi.org/10.3390/ai1020008
6. Engelbart, D. (1962). Augmenting Human Intellect: A Conceptual Framework; Summary Report, Contract AF 49-1024; Stanford Research Institute: Palo Alto, CA, USA. https://doi.org/10.21236/AD0289565
7. Andersen, R., Mørch, A. I., & Litherland, K. T. (2022). Collaborative learning with block-based programming: Investigating human-centered artificial intelligence in education. Behaviour & Information Technology, 41(9), 1830-1847. https://doi.org/10.1080/0144929X.2022.2083981
8. Lyytinen, K., Nickerson, J. V., & King, J. L. (2021). Metahuman systems = humans + machines that learn. Journal of Information Technology, 36(4), 427-445. https://doi.org/10.1177/0268396220915917
9. Wegner, P. (1997). Why interaction is more powerful than algorithms. Communications of the ACM, 40(5), 80-91. https://doi.org/10.1145/253769.253801
10. Opdahl, A. L., Tessem, B., Dang-Nguyen, D.-T., Motta, E., Setty, V., Throndsen, E., Tverberg, A., & Trattner, C. (2023). Trustworthy journalism through AI. Data & Knowledge Engineering. https://doi.org/10.1016/j.datak.2023.102182
11. Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. John Wiley & Sons.
12. Giovannella, C. (2023). “Learning by being”: Integrated thinking and competencies to mark the difference from AIs. Interaction Design & Architecture(s) – IxD&A Journal, 57, 8-26. https://doi.org/10.55612/s-5002-057-001
13. Wang, T., Lund, B.D., Marengo, A., Pagano, A., Mannuru, N.R., Teel, Z.A., & Pange, J. (2023). Exploring the Potential Impact of Artificial Intelligence (AI) on International Students in Higher Education: Generative AI, Chatbots, Analytics, and International Student Success. Applied Sciences, 13, 6716. https://doi.org/10.3390/app13116716
14. Molenaar, I. (2022). Towards hybrid human-AI learning technologies. European Journal of Education, 57(4), 632-645. https://doi.org/10.1111/ejed.12527
15. Tuomi, I. (2019). The Impact of Artificial Intelligence on Learning, Teaching, and Education: Policies for the Future. JRC Science for Policy Report. European Commission, ERIC.
16. Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability, 15(17), 12983. https://doi.org/10.3390/su151712983
17. Mørch, A. I., & Andersen, R. (2023). Human-Centered AI in Education in the Age of Generative AI Tools. CoPDA 2023, co-located with IS-EUD 2023, Cagliari, Italy, June 6-8. CEUR Workshop Proceedings, 3408, CEUR-WS.org. URL: https://ceur-ws.org/Vol-3408/short-s2-08.pdf
18. Tahiru, F. (2021). AI in Education. Journal of Cases on Information Technology, 23(1), 1-20. https://doi.org/10.4018/JCIT.2021010101
19. Joshi, S., Rambola, R. K., & Churi, P. (2021). Evaluating Artificial Intelligence in Education for the Next Generation. Journal of Physics: Conference Series, 1714(1), 012039. https://doi.org/10.1088/1742-6596/1714/1/012039
20. Fischer, G. (2022). A Research Framework Focused on AI and Humans instead of AI versus Humans. CoPDA 2022, co-located with AVI-2022, Frascati, Italy, June. CEUR Workshop Proceedings, 3136, CEUR-WS.org. URL: https://ceur-ws.org/Vol-3136/paper-1.pdf
21. Tuomi, I. (2022). Artificial intelligence, 21st-century competencies, and socio-emotional learning in education: More than high-risk? European Journal of Education, 57(4), 601-619. https://doi.org/10.1111/ejed.12531
22. Mørch, A. I., Engeness, I., Cheng, V. C., Cheung, W. K., & Wong, K. C. (2017). EssayCritic: Writing to learn with a knowledge-based design critiquing system. Educational Technology and Society, 20(2), 213-223.
23. Chaiklin, S. (2003). The Zone of Proximal Development in Vygotsky’s analysis of learning and instruction. In Kozulin, A., Gindis, B., Ageyev, V. & Miller, S. (Eds.) Vygotsky’s educational theory and practice in cultural context. 39-64. Cambridge: Cambridge University. https://doi.org/10.1017/CBO9780511840975.004
24. Vygotsky, L. S. (1978). Mind in society: Development of higher psychological processes. Cambridge, MA: Harvard University Press.
25. Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare, Future Healthcare Journal, 6, 94-98. https://doi.org/10.7861/futurehosp.6-2-94
26. Jiang, F., Jiang, Y., Zhi, H., et al. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4). https://doi.org/10.1136/svn-2017-000101
27. Woo, M. (2019). An AI boost for clinical trials. Nature, 573(7775), 100-102. https://doi.org/10.1038/d41586-019-02871-3
28. Dawoodbhoy, F. M., Delaney, J., Cecula, P., Yu, J., Peacock, I., Tan, J., & Cox, B. M. (2021). AI in patient flow: Applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. Heliyon, 7, e06993. https://doi.org/10.1016/j.heliyon.2021.e06993
29. van Baalen, S., Boon, M., & Verhoef, P. (2021). From clinical decision support to clinical reasoning support systems. Journal of Evaluation in Clinical Practice, 27(3), 520-528. https://doi.org/10.1111/jep.13541
30. Lee, M. H., Siewiorek, D. P., Smailagic, A., Bernardino, A., & Bermúdez i Badia, S. (2021). A human-AI collaborative approach for clinical decision making on rehabilitation assessment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Article 392). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445472
31. van Leeuwen, K. G., de Rooij, M., Schalekamp, S., et al. (2022). How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatric Radiology, 52, 2087-2093. https://doi.org/10.1007/s00247-021-05114-8
32. Waller, J., O’Connor, A., Rafaat, E., Amireh, A., Dempsey, J., Martin, C., & Umair, M. (2022). Applications and challenges of artificial intelligence in diagnostic and interventional radiology. Polish Journal of Radiology, 87, e113-e117. https://doi.org/10.5114/pjr.2022.113531
33. Torous, J., Larsen, M. E., Depp, C., et al. (2020). Smartphones, sensors, and machine learning to advance real-time prediction and interventions for suicide prevention: A review of current progress and next steps. Current Psychiatry Reports, 22(7), 33.
34. Barletta, V. S., Cassano, F., Pagano, A., & Piccinno, A. (2022). A collaborative AI dataset creation for speech therapies. In CEUR Workshop Proceedings (Vol. 3136, pp. 81-85). CEUR-WS.org. URL: https://ceur-ws.org/Vol-3136/paper-10.pdf
35. Barletta, V. S., Caivano, D., Gigante, D., & Ragone, A. (2023). A Rapid Review of Responsible AI Frameworks: How to Guide the Development of Ethical AI. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering (pp. 358-367). Association for Computing Machinery. https://doi.org/10.1145/3593434.3593478
36. Barletta, V.S., Cassano, F., Pagano, A., & Piccinno, A. (2022, November). New perspectives for cyber security in software development: when End-User Development meets Artificial Intelligence. In 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT) (pp. 531-534). IEEE. https://doi.org/10.1109/3ICT56508.2022.9990622
37. Mengidis, P., Karyda, M., & Tsoukalas, L. H. (2019). Artificial intelligence and cybersecurity: A comprehensive review. Artificial Intelligence Review, 52(3), 1685-1704.
38. Elever, K., & Kifayat, K. (2020). Cybersecurity and artificial intelligence: A threat or an opportunity. Journal of Cybersecurity, 6(1), tyaa001.
39. Zhang, Z., Ning, H., Shi, F., et al. (2022). Artificial intelligence in cyber security: Research advances, challenges, and opportunities. Artificial Intelligence Review, 55, 1029-1053. https://doi.org/10.1007/s10462-021-09976-0
40. Lin, J., Ma, Z., Gomez, R., Nakamura, K., He, B., & Li, G. (2020). A Review on Interactive Reinforcement Learning From Human Social Feedback. IEEE Access, 8, 120757-120765. https://doi.org/10.1109/ACCESS.2020.3006254
41. Ashraf, S. (2023). Proposing Digital Design Methodology for Furniture Products by Integrating Generative Design Approach to Conventional Process. Journal of Technology and Systems, 5(1), 1-21. https://doi.org/10.47941/jts.1368
42. Celani, M. G. C. (2002). Beyond analysis and representation in CAD: a new computational approach to design education (Doctoral dissertation, Massachusetts Institute of Technology, Department of Architecture).
43. Peters, C., Samuels, I., Sanders, P., Partanen, J., & Lefosse, D. (2021). Rethinking Computer-Aided Architectural Design (CAAD) From Generative Algorithms and Architectural Intelligence to Environmental Design and Ambient Intelligence. In Proc. CAAD Futures 2021, Los Angeles, CA, USA, Selected Papers (p. 62). Springer Nature.
44. Stiny, G. (1980). Introduction to shape and shape grammars. Environment and Planning B: Planning and Design, 7, 343-351. https://doi.org/10.1068/b070343
45. Stiny, G., & Mitchell, W. J. (1978). The Palladian grammar. Environment and Planning B: Planning and Design, 5, 5-18. https://doi.org/10.1068/b050005
46. Hughes, R.T., Zhu, L., & Bednarz, T. (2021). Generative Adversarial Networks-Enabled Human-Artificial Intelligence Collaborative Applications for Creative and Design Industries: A Systematic Review of Current Approaches and Trends. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.604234
47. Cui, J., & Tang, M. X. (2017). Towards generative systems for supporting product design. International Journal of Design Engineering, 7(1), 1-16. https://doi.org/10.1504/IJDE.2017.085639
48. De Peuter, S., Oulasvirta, A., & Kaski, S. (2023). Toward AI assistants that let designers design. AI Magazine, 44(1), 85-96. https://doi.org/10.1002/aaai.12077
49. Mørch, A. I., Caruso, V., & Hartley, M. D. (2017). End-User Development and Learning in Second Life: The Evolving Artifacts Framework with Application. In F. Paternò & V. Wulf (Eds.), New Perspectives in End-User Development (pp. 333-358). Springer. https://doi.org/10.1007/978-3-319-60291-2_13
50. McAuley, J. (2022). Personalized Machine Learning. Cambridge University Press. https://doi.org/10.1017/9781009003971
51. Fischer, G. (2024). A Research Framework Focused on AI and Humans instead of AI versus Humans. Interaction Design & Architecture(s) – IxD&A Journal, N.56, 2023, pp. …–…, DOI: https://doi.org/10.55612/s-5002-059-001sp
52. Krange, I., Segaran, M., Gamlem, S., Moltudal, S., & Engeness, I. (2024). A Triple Challenge: Students’ Identification, Interpretation, and Use of Individualized Automated Feedback in Learning to Write English as a Foreign Language. Interaction Design & Architecture(s) – IxD&A Journal, N.56, 2023, pp. …–…, DOI: https://doi.org/10.55612/s-5002-059-001
53. Lundin, J., Utterberg Modén, M., Leino Lindell, T., & Fischer, G. (2024). A Remedy to the Unfair Use of AI in Educational Settings. Interaction Design & Architecture(s) – IxD&A Journal, N.56, 2023, pp. …–…, DOI: https://doi.org/10.55612/s-5002-059-002
54. Neris, V., Alves, V., Garcia, F., & Saud, C. (2024). College Students-in-the-Loop for Their Mental Health: A Case of AI and Humans Working Together to Support Well-Being. Interaction Design & Architecture(s) – IxD&A Journal, N.56, 2023, pp. …–…, DOI: https://doi.org/10.55612/s-5002-059-003
55. Kahraman, M. U., Şekerci, Y., & Develiler, M. (2024). Integrating Artificial Intelligence into Interior Design Education: A Case Study on Creating Office Spaces for “Avrupa Yakası” TV Series Characters. Interaction Design & Architecture(s) – IxD&A Journal, N.56, 2023, pp. …–…, DOI: https://doi.org/10.55612/s-5002-059-004