İrem Kekilli, Muhammet Furkan Yiğitoğlu, Orkan Zeynel Güzelci
pp. 226–245 (https://doi.org/10.55612/s-5002-066-009)
Abstract
This study investigates the integration of machine learning and architectural lighting design by proposing a proof-of-concept adaptive lighting system driven by human actions and spatial position. A custom video dataset was created based on five actions (standing, sitting, walking, running, and dancing) and three positional categories within a defined space. Two machine learning approaches were evaluated for human action recognition: a skeleton-based model using MediaPipe pose extraction with an LSTM architecture, and a pixel-based approach combining feature extraction from raw video frames with an MLP classifier. The classified action and position data were mapped to predefined lighting schemes generated parametrically in Grasshopper, enabling context-aware lighting recommendations. The results show that while action classification accuracy is limited by the small size of the dataset, position recognition achieves high reliability. The study highlights the potential of action-oriented, human-centered lighting systems and outlines directions for future research involving larger datasets and user-centered evaluations.
Keywords: Machine learning, Human action estimation, Motion-based lighting interaction, Spatial lighting control, Adaptive lighting systems.
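To make the pipeline summarized above concrete, the following is a minimal Python sketch of the skeleton-based branch (MediaPipe pose extraction feeding a small LSTM classifier over the five actions) together with a plain lookup that stands in for the Grasshopper-generated lighting schemes. All sequence lengths, layer sizes, zone labels, scheme names, and helper functions here are illustrative assumptions, not values or code from the paper.

import cv2
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

ACTIONS = ["standing", "sitting", "walking", "running", "dancing"]
POSITIONS = ["zone_1", "zone_2", "zone_3"]  # three positional categories (labels assumed)
SEQ_LEN = 30                                # frames per clip (assumed)
N_FEATURES = 33 * 4                         # 33 MediaPipe landmarks x (x, y, z, visibility)

def extract_keypoint_sequence(video_path):
    """Run MediaPipe Pose on a clip; return a (SEQ_LEN, N_FEATURES) array."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while len(frames) < SEQ_LEN:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                kp = np.array([[lm.x, lm.y, lm.z, lm.visibility]
                               for lm in result.pose_landmarks.landmark]).flatten()
            else:
                kp = np.zeros(N_FEATURES)   # no person detected in this frame
            frames.append(kp)
    cap.release()
    while len(frames) < SEQ_LEN:            # zero-pad short clips
        frames.append(np.zeros(N_FEATURES))
    return np.stack(frames)

def build_action_model():
    """Small LSTM classifier over landmark sequences (sizes are assumptions)."""
    model = Sequential([
        LSTM(64, input_shape=(SEQ_LEN, N_FEATURES)),
        Dense(32, activation="relu"),
        Dense(len(ACTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Stand-in for the parametric Grasshopper schemes: a lookup from
# (action, position) to a scheme identifier (names are hypothetical).
LIGHTING_SCHEMES = {
    ("dancing", "zone_1"): "dynamic_color_wash",
    ("sitting", "zone_2"): "warm_task_light",
    ("walking", "zone_3"): "guiding_floor_strip",
}

def recommend_scheme(model, video_path, position):
    seq = extract_keypoint_sequence(video_path)[np.newaxis]  # add batch axis
    action = ACTIONS[int(np.argmax(model.predict(seq)))]
    return LIGHTING_SCHEMES.get((action, position), "neutral_ambient")

In the study itself the pixel-based branch (frame features with an MLP classifier) plays the same classification role, and the lighting schemes are generated parametrically in Grasshopper rather than hard-coded; the lookup above only mirrors the final mapping step.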
References
1. Li Q., Zhang Z., You Y., Mu Y., Feng C.: Data driven models for human motion prediction in human-robot collaboration. IEEE Access, 8, pp. 227690–227702 (2020). https://doi.org/10.1109/ACCESS.2020.3045994
2. Gopalakrishna A. K., Özçelebi T., Liotta A., Lukkien J. J.: Exploiting machine learning for intelligent room lighting applications. In: Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS 2012), pp. 406–411. IEEE (2012). https://doi.org/10.1109/IS.2012.6335169
3. Martinez J., Black M. J., Romero J.: On human motion prediction using recurrent neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2891–2900 (2017). https://doi.org/10.1109/CVPR.2017.497
4. Kanpak H. N., Arserim M. A.: Human posture prediction by deep learning. Dicle University Journal of Engineering, 12(5), pp. 775–782 (2021). https://doi.org/10.24012/dumf.1051429
5. Chen Y., Xue Y.: A deep learning approach to human activity recognition based on single accelerometer. In: 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1488–1493. IEEE (2015). https://doi.org/10.1109/SMC.2015.263
6. Song J., Zhu A., Tu Y., Huang H., Arif M. A., Shen Z., Zhang X., Cao G.: Effects of different feature parameters of sEMG on human motion pattern recognition using multilayer perceptrons and LSTM neural networks. Applied Sciences, 10(10), 3358 (2020). https://doi.org/10.3390/app10103358
7. Safibullayevna B. S., Khanatovna K. D., Karamatdinkizi J. M., Faxriddinovich S. F., Khairullayevna S. A.: Regression and machine learning methods for predicting human movements based on skeletal data. In: 2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST), pp. 1–7. IEEE (2024). https://doi.org/10.1109/SIST61555.2024.10629231
8. Chun S. Y., Lee C. S.: Applications of human motion tracking: Smart lighting control. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 387–392. IEEE (2013). https://doi.org/10.1109/CVPRW.2013.65
9. Chun S. Y., Lee C. S., Jang J. S.: Real-time smart lighting control using human motion tracking from depth camera. Journal of Real-Time Image Processing, 10(4), pp. 805–820 (2015). https://doi.org/10.1007/s11554-014-0414-1
10. Putrada A. G., Abdurohman M., Perdana D., Nuha H.: Machine learning methods in smart lighting toward achieving user comfort: A survey. IEEE Access, 10, pp. 45137–45176 (2022). https://doi.org/10.1109/ACCESS.2022.3169765
11. Bütepage J., Black M. J., Kragic D., Kjellström H.: Deep representation learning for human motion prediction and classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE (2017). https://doi.org/10.1109/CVPR.2017.173
12. UCF101 – Action Recognition Data Set. Center for Research in Computer Vision, University of Central Florida (n.d.). https://www.crcv.ucf.edu/data/UCF101.php
13. UTD Multimodal Human Action Dataset (UTD-MHAD). University of Texas at Dallas (2015). https://personal.utdallas.edu/~kehtar/UTD-MHAD.html
14. Pexels: Stock videos of human motion used for educational purposes [Video]. https://www.pexels.com/
15. Gillies M.: Understanding the role of interactive machine learning in movement interaction design. ACM Transactions on Computer-Human Interaction, 26(1), Article 5, 1–34 (2019). https://doi.org/10.1145/3287307
16. Roggio F., Trovato B., Sortino M., Musumeci G.: A comprehensive analysis of the machine learning pose estimation models used in human movement and posture analyses: A narrative review. Heliyon, 10(21), e39977 (2024). https://doi.org/10.1016/j.heliyon.2024.e39977
17. Luo Y., Ren J., Wang Z., Sun W., Pan J., Liu J., Pang J., Lin L.: LSTM pose machines. arXiv preprint arXiv:1712.06316 (2018). https://doi.org/10.48550/arXiv.1712.06316
18. Zhang Q., Yang L. T., Chen Z., Li P.: A survey on deep learning for big data. Information Fusion, 42, pp. 146–157 (2018). https://doi.org/10.1016/j.inffus.2017.10.006
19. LeCun Y., Bengio Y., Hinton G.: Deep learning. Nature, 521(7553), pp. 436–444 (2015). https://doi.org/10.1038/nature14539
20. Yu J., de Antonio A., Villalba-Mora E.: Deep learning (CNN, RNN) applications for smart homes: A systematic review. Computers, 11(2), 26 (2022). https://doi.org/10.3390/computers11020026
21. Talukdar J., Mehta B.: Human action recognition system using good features and multilayer perceptron network. In: Proceedings of the 2017 International Conference on Communication and Signal Processing (ICCSP), pp. 317–323. IEEE (2017). https://doi.org/10.48550/arXiv.1708.06794
22. Naseer A., Jalal A.: Multimodal objects categorization by fusing GMM and multi-layer perceptron. In: 2024 5th International Conference on Advancements in Computational Sciences (ICACS), pp. 97–103. IEEE (2024). https://doi.org/10.1109/ICACS60934.2024.10473242
23. Liu J., Lou J., Zheng Y., Zhou K.: Automatic indoor lighting generation driven by human activity learned from virtual experience. In: Proceedings of the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 276–285. IEEE (2024). https://doi.org/10.1109/VR58804.2024.00050