Journal of Intelligent Communication

Volume 4 Issue 1 (2025): In Progress

Review · Article ID: 959

The Role of Artificial Intelligence in Advanced Engineering: Current Trends and Future Prospects

Artificial Intelligence (AI) is increasingly transforming various engineering disciplines, playing a pivotal role in design, manufacturing, maintenance, and optimization. This paper provides a comprehensive analysis of AI applications in advanced engineering, examining key trends, challenges, and future directions. The study systematically categorizes AI methodologies across different fields, including mechanical, civil, electrical, aerospace, and environmental engineering, as well as emerging areas such as biomedical engineering and material science. Through an extensive literature review and case study analysis, this work highlights the impact of AI-driven optimization in mechanical engineering, predictive maintenance in industrial applications, automation in manufacturing, and AI-enhanced smart infrastructure development. Methodologically, this research synthesizes findings from major scientific databases, including IEEE Xplore, PubMed, Scopus, and Web of Science, ensuring a robust and interdisciplinary perspective. The analysis identifies critical challenges in AI adoption, such as data privacy, scalability, and system integration, and explores strategies to address them. Furthermore, this paper discusses the ethical and societal implications of AI in engineering, emphasizing the need for transparent, explainable, and unbiased AI models. The findings suggest that AI has significantly improved engineering efficiency and innovation but also underline the necessity for interdisciplinary collaboration and standardized frameworks to maximize AI’s transformative potential. The study concludes by outlining future prospects, including the integration of AI with the Internet of Things (IoT) and blockchain, the evolution of AI-driven materials discovery, and the role of AI in personalized medicine and next-generation engineering solutions. Addressing these challenges and leveraging AI’s capabilities will be instrumental in shaping the future of engineering.


Article · Article ID: 925

Unveiling the Power of Play: A DMAIC Analysis of AI’s Impact on User Engagement in Interactive Entertainment

The rapid advancement of artificial intelligence (AI) is revolutionizing sectors from healthcare to finance, with technologies such as machine learning and deep learning enabling breakthroughs in disease diagnosis, drug discovery, and personalized medicine. This paper explores the influence of AI features—such as personalized narratives, adaptive difficulty levels, and virtual companions—on user engagement within interactive and immersive entertainment experiences. Using the DMAIC (Define, Measure, Analyze, Improve, Control) framework, the study analyzes interaction data from 473 users, focusing on behavior patterns and sentiment toward these AI functionalities. Statistical analyses reveal that personalized narratives significantly enhance user sentiment, with positive sentiment rising from 45% to 60% after system improvements (t = 8.75, p = 0.0001). Adaptive difficulty levels contribute to sustained engagement, reflected in a notable growth in interaction frequency from 5.0 to 6.2 interactions per user (t = 4.23, p = 0.002). Virtual companions show mixed effectiveness, with their impact heavily influenced by implementation quality and user context. Correlation analysis highlights the importance of session length (r = +0.68, p < 0.001) and abandonment rates (r = -0.56, p < 0.001) as critical factors in shaping user sentiment. The paper includes visual representations of findings and provides actionable recommendations for developers and designers to optimize AI-driven interactive entertainment experiences.
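
As a hedged illustration of the statistics this abstract reports, the sketch below runs a paired t-test and a Pearson correlation on synthetic data. The user count matches the paper, but the values, scales, and induced association are invented stand-ins, not the study's dataset.

```python
# Hypothetical sketch: the paper's raw data is not public, so we fabricate
# synthetic values just to show the shape of the analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_users = 473

# Sentiment scores before/after the "Improve" phase (assumed 0..1 scale).
sentiment_before = rng.normal(0.45, 0.10, n_users)
sentiment_after = rng.normal(0.60, 0.10, n_users)

# Paired t-test: did the narrative personalization change sentiment?
t_stat, p_value = stats.ttest_rel(sentiment_after, sentiment_before)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")

# Correlation between session length and sentiment (both columns synthetic;
# the association is deliberately induced for demonstration).
session_minutes = rng.normal(30, 8, n_users)
sentiment = sentiment_after + 0.02 * (session_minutes - 30)
r, p_corr = stats.pearsonr(session_minutes, sentiment)
print(f"Pearson r = {r:+.2f}, p = {p_corr:.4g}")
```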


Article · Article ID: 351

Developing an Intelligent Recognition System for British Sign Language: A Step Towards Inclusive Communication

Effective communication is crucial for ensuring inclusivity, yet the hard-of-hearing community faces significant barriers due to a shortage of qualified interpreters. British Sign Language (BSL), officially recognised in the UK, relies on hand gestures, facial expressions, and body movements. However, limited interpreter availability necessitates technological solutions to bridge the communication gap between signers and non-signers. This study proposes a real-time, vision-based BSL recognition system using computer vision and deep learning to interpret fingerspelling and six commonly used BSL words. The system employs OpenCV for video capture, MediaPipe for hand feature extraction, and Long Short-Term Memory (LSTM) networks for sign classification. Trained on a dataset incorporating left- and right-handed signers, the system achieved a 94.23% accuracy rate for 26 fingerspelling gestures and 99.07% for six words. To enhance usability, a graphical user interface was developed, enabling seamless real-time interaction. These findings demonstrate the potential of AI-driven sign language recognition to improve accessibility and foster more inclusive communication for the hard-of-hearing community.
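
For a concrete picture of the pipeline the abstract names, here is a minimal sketch assuming MediaPipe hand landmarks fed to a two-layer LSTM. The sequence length, layer sizes, and 32-class output (26 letters plus 6 words) are assumptions, not the authors' reported configuration.

```python
# Schematic only: landmark sequences -> LSTM classifier.
import numpy as np
import tensorflow as tf

SEQ_LEN = 30          # frames per sign (assumed)
N_FEATURES = 21 * 3   # 21 MediaPipe hand landmarks, (x, y, z) each
N_CLASSES = 32        # 26 fingerspelling letters + 6 words

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# On the extraction side (also schematic), MediaPipe's
#   mp.solutions.hands.Hands().process(rgb_frame)
# yields results.multi_hand_landmarks; flattening the 21 (x, y, z) points
# per frame produces one N_FEATURES-long row of the SEQ_LEN input window.
X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")  # dummy batch
print(model.predict(X).shape)  # (8, N_CLASSES)
```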


Article · Article ID: 993

Strings—Sounds from Human Collective Intelligence

Strings is a musical installation that explores, through an evaluation of the cooperative functions of collective intelligence, human interaction and transforms it into a multisensory experience. Equipped with biometric sensors, Strings captures in real time the physiological variations within a group of people, translating psycho-emotional shifts into a distinctive sound and visual design. During the performance, the audience actively contributes to generating sounds. Some members of the audience are invited to wear specific biometric sensors. A performer guides them so they can actively take part in the experience. This interaction generates a data flow that translates participants’ emotional state changes into real‑time sound textures. A musician handles live electronics, processing and modulating the generated sounds. The performance is enriched by an improvising solo musician, such as a guitarist or saxophonist, interacting with the evolving sound design. This dialogue between the audience, live electronics, and soloists forms an ever‑evolving, self‑regenerating sound cycle that responds to the collective emotions. Emotional engagement is amplified by real‑time coloured projections that respond to psycho‑emotional variations, enhancing the collective sense of connection. This research contributes to collective intelligence by providing an unexplored framework for integrating biometric data into artistic expression. Our investigation aims to demonstrate biofeedback’s potential in fostering collaborative, emotion‑driven interaction, bridging psychology, music technology, and human‑computer interaction. By engaging both the scientific community and artists, this work opens new avenues for interdisciplinary research and application in interactive media, emotion‑aware technologies, and collective creativity.
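
Purely as an illustration of the sensor-to-sound data flow described above, the sketch below maps a simulated biometric stream to control messages for a sound engine over OSC. The /strings/... addresses, value ranges, and the python-osc transport are assumptions; the installation's actual protocol is not specified here.

```python
# Illustrative sketch: heart-rate-like samples stand in for the biometric
# sensors, and a sound engine (e.g., a SuperCollider or Max/MSP patch) is
# assumed to listen for OSC messages on localhost.
import time
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # assumed sound-engine port

def normalize(value, lo=50.0, hi=120.0):
    """Map a raw biometric reading (e.g., BPM) into the 0..1 control range."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

# Simulated stream of group heart rates.
for bpm in np.random.normal(75, 12, 20):
    arousal = normalize(bpm)
    # One control value shapes a sound-texture parameter; a second could
    # drive the coloured projections the same way.
    client.send_message("/strings/texture/density", arousal)
    client.send_message("/strings/visuals/hue", 1.0 - arousal)
    time.sleep(0.1)
```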


Article · Article ID: 961

A Deep Learning-Based Approach for Emotion Classification Using Stretchable Sensor Data

Facial expressions play a vital role in human communication, especially for individuals with motor impairments who rely on alternative interaction methods. This study presents a deep learning-based approach for real-time emotion classification using stretchable strain sensors integrated into a wearable system. The sensors, fabricated with conductive silver ink on a flexible Tegaderm substrate, detect subtle facial muscle movements. Positioned strategically on the forehead, upper lip, lower lip, and left cheek, these sensors effectively capture emotions such as happiness, neutrality, sadness, and disgust. A data pipeline incorporating Min-Max normalization and SMOTE balancing addresses noise and class imbalances, while dimensionality reduction techniques like PCA and t-SNE enhance data visualization. The system’s classification performance was evaluated using standard machine learning metrics, achieving an overall accuracy of 76.6%, with notable success in distinguishing disgust (86.0% accuracy) and neutrality (81.0% accuracy). This work offers a flexible, cost-effective, and biocompatible solution for emotion recognition, with potential applications in rehabilitation robotics, assistive technologies, and human-computer interaction.
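
To make the preprocessing steps concrete, this hedged sketch chains the techniques the abstract names (Min-Max normalization, SMOTE balancing, PCA) before a classifier. The synthetic sensor array, component count, and the random-forest model are illustrative assumptions, not the paper's exact setup.

```python
# Schematic reconstruction of the described pipeline on synthetic data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4 * 50))  # 4 strain channels x 50 samples (assumed)
y = rng.choice(4, size=400, p=[0.4, 0.3, 0.2, 0.1])  # imbalanced 4 emotions

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

scaler = MinMaxScaler().fit(X_train)            # Min-Max normalization
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# SMOTE is applied to the training split only, so synthetic samples
# never leak into evaluation.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train_s, y_train)

pca = PCA(n_components=20).fit(X_bal)           # dimensionality reduction
clf = RandomForestClassifier(random_state=0).fit(pca.transform(X_bal), y_bal)
acc = accuracy_score(y_test, clf.predict(pca.transform(X_test_s)))
print(f"accuracy: {acc:.3f}")
```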
