Fuzzy Logic-based Input Evaluation Method for Interactive 2D Animation Scene Design using Computer Vision

Xiaobo Shi

Abstract


Two-dimensional (2D) interactive animation lets users command and communicate with a design through touch and pointer input. Each user input delegates a specific task to the design rendered on the screen, so precise input detection is essential. This article introduces an Input Evaluation for Design-Specific Function (IE-DSF) method to improve the response precision of 2D models. User input from touch or pointing devices is evaluated for sensitivity and for the design region receiving the command. From the difference between input and response times and the variation across input regions, the uniformity of the design's response is computed. This computation is fuzzified to assess its consistency across different input sequences. Fuzzy logic-based validation then determines the minimum and maximum response-time variation over the sequence of 2D design interactions: the maximum variation is used to improve design sensitivity, and the minimum variation is used to increase the design functions available on the screen. The resulting recommendations are tied to frame-based 2D sequences through precise computer vision, so variation changes are reflected in individual frames without modifying the entire design. This improves the consistency and evaluability of diverse interactive designs. The proposed IE-DSF method achieves a 9.38% improvement in consistency, an 11.31% reduction in response time, and an 8.8% improvement in interaction response across varied inputs, while reducing design modifications by 11.1%, thereby optimizing 2D animation design interactions.
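To make the abstract's fuzzy evaluation step concrete, the following is a minimal illustrative sketch, not the paper's IE-DSF implementation: it fuzzifies the minimum and maximum input-to-response delays and the region variation of a hypothetical interaction sequence, using assumed triangular membership functions, thresholds, and rule weights.

```python
# Illustrative sketch only: a simplified fuzzy evaluation of input/response
# timing over a 2D interaction sequence. Membership shapes, millisecond
# thresholds, and rules are hypothetical assumptions, not IE-DSF parameters.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate_sequence(events):
    """events: list of (input_time_ms, response_time_ms, region_shift) tuples.

    Returns fuzzy degrees for 'raise sensitivity' (driven by the largest
    delay) and 'add on-screen functions' (driven by the smallest delay).
    """
    delays = [resp - inp for inp, resp, _ in events]
    d_min, d_max = min(delays), max(delays)

    # Hypothetical fuzzy sets over response delay in milliseconds.
    slow = tri(d_max, 80, 160, 240)   # large maximum delay -> "slow"
    fast = tri(d_min, 0, 40, 80)      # small minimum delay -> "fast"

    # Region variation averaged over the sequence (0 = stable, 1 = erratic).
    region_var = sum(shift for _, _, shift in events) / len(events)
    stable = tri(region_var, -0.5, 0.0, 0.5)

    # Two toy Mamdani-style rules (min as the AND operator).
    raise_sensitivity = min(slow, 1.0 - stable)  # slow AND unstable region
    add_functions = min(fast, stable)            # fast AND stable region
    return raise_sensitivity, add_functions

if __name__ == "__main__":
    # (input timestamp ms, response timestamp ms, normalised region shift)
    sequence = [(0, 55, 0.1), (100, 210, 0.4), (200, 260, 0.2)]
    print(evaluate_sequence(sequence))
```

In this toy rule base, a high "raise sensitivity" degree would argue for improving design sensitivity, while a high "add functions" degree would argue for adding on-screen functions, mirroring the roles the abstract assigns to the maximum and minimum variations.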







DOI: https://doi.org/10.31449/inf.v49i8.7023

This work is licensed under a Creative Commons Attribution 3.0 License.