Abstract


Cultural events in colleges serve as vital platforms for students to showcase their artistic and creative talents. However, traditional talent identification relies heavily on subjective judgment, which can introduce bias and inconsistency. This study proposes an automated talent identification framework based on machine learning and multimedia data analysis. Audio, video, and image features of student performances are processed with deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to classify and evaluate artistic ability in domains such as singing, dancing, and drama. The system integrates feature extraction, performance assessment, and ranking mechanisms to ensure objective and scalable talent identification. Experimental results demonstrate the potential of machine learning to improve accuracy, reduce human bias, and support fair recognition in student cultural events. This research contributes to the development of intelligent cultural event management systems that enhance inclusivity, transparency, and efficiency in identifying emerging student talent.
Keywords


Automated talent identification, cultural events, machine learning, multimedia data, deep learning, audio analysis, video recognition, performance evaluation, CNN, RNN