Digital Technologies Research and Applications

Volume 5 Issue 2: June 2026 (in progress)

Article Article ID: 2308

Algorithmic Bias in Automated Decision-Making: A Statistical Study with Legal and Regulatory Implications

Algorithmic decision systems are increasingly deployed in high-stakes areas such as credit, recruiting, and the allocation of government resources. Although these systems are often claimed to be objective and efficient, concerns persist that they may perpetuate structural inequalities. This paper examines the effect of a fairness-aware pre-processing technique called reweighing on the performance of a predictive system in a controlled simulation environment. Using a synthetically created credit approval dataset with structural disadvantage embedded, we compare the performance of a logistic regression classifier with and without reweighing. Fairness is measured using demographic parity disparity (DPD), disparate impact ratio (DIR), and equalized odds difference (EO), along with predictive accuracy. In a single test scenario (seed = 42), reweighing does not improve all fairness metrics uniformly. However, when analyzed for robustness across 50 independent random seeds, reweighing yields modest average reductions in demographic parity disparity and equalized odds difference, with little change in predictive accuracy. Threshold sensitivity analysis further shows that fairness metrics are sensitive to decision thresholds. These results indicate that fairness-aware pre-processing can lead to systematic improvements in expectation, although trade-offs among fairness metrics and performance remain context-dependent.
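As a minimal illustration of the fairness metrics named in this abstract (demographic parity disparity, disparate impact ratio, and equalized odds difference), the sketch below computes all three from binary predictions and a binary protected attribute. The function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Compute DPD, DIR, and EO for a binary classifier.

    group is a binary array (1 = privileged, 0 = unprivileged).
    Definitions follow the standard formulations of these metrics.
    """
    priv, unpriv = group == 1, group == 0
    rate_p = y_pred[priv].mean()    # selection rate, privileged group
    rate_u = y_pred[unpriv].mean()  # selection rate, unprivileged group

    dpd = abs(rate_p - rate_u)      # demographic parity disparity
    dir_ = rate_u / rate_p          # disparate impact ratio

    def tpr_fpr(mask):
        # true-positive and false-positive rates within a group
        pos, neg = mask & (y_true == 1), mask & (y_true == 0)
        return y_pred[pos].mean(), y_pred[neg].mean()

    tpr_p, fpr_p = tpr_fpr(priv)
    tpr_u, fpr_u = tpr_fpr(unpriv)
    eo = max(abs(tpr_p - tpr_u), abs(fpr_p - fpr_u))  # equalized odds difference
    return dpd, dir_, eo
```

A DIR below the conventional 0.8 threshold, or a large DPD/EO, would flag disparity between the groups.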

Article Article ID: 1780

Personality Traits and the Technology Acceptance of ChatGPT: Mediating Effects of Perceived Usefulness and Ease of Use

This study investigates how three personality traits from the Big Five framework—extraversion, neuroticism, and conscientiousness—influence individuals’ continuance intention to use ChatGPT, with a particular focus on the mediating roles of Perceived Ease of Use (PEOU) and Perceived Usefulness (PU). Drawing on the Technology Acceptance Model (TAM), this research aims to integrate personality-based differences into a well-established framework for understanding technology adoption and sustained usage. Data were collected through an online survey targeting active ChatGPT users and analyzed using structural equation modeling to examine both direct and indirect relationships among variables. The findings indicate that conscientiousness has a strong and positive impact on both PEOU and PU, and indirectly enhances continuance intention through these cognitive evaluations. Extraversion shows a limited but positive effect primarily through perceived ease of use, suggesting that socially oriented individuals may engage with the system when it is easy to navigate. In contrast, neuroticism does not demonstrate any statistically significant relationship with the key variables in the model. Consistent with TAM, PEOU significantly influences PU and continuance intention, with PU emerging as the most influential predictor of sustained usage. Overall, this study highlights the critical role of conscientiousness in fostering long-term engagement with generative AI systems and underscores the importance of cognitive perceptions in mediating personality effects. By integrating personality psychology with technology acceptance theory, the research provides theoretical and practical implications for designing personalized and adaptive AI interfaces.

Article Article ID: 2297

Mapping the Intersection of Artificial Intelligence and Sociolinguistics: A Bibliometric and Keyword-Based Content Analysis

This research investigates the dynamic relationship between Artificial Intelligence (AI) and Sociolinguistics through bibliometric mapping combined with keyword content analysis. Utilizing 69 extracted publications (2013–2024) after systematic deduplication, the study combines quantitative trend analysis with keyword-based thematic interpretation. From an initial collection of 98 records obtained from Scopus (n = 64) and Web of Science (n = 34), a subset of 48 publications was selected for further analysis based on conceptual relevance. Bibliometric analysis with the software ScientoPy and VOSviewer was employed to reveal publication trajectories, top contributors, influential journals, geographic patterns, and knowledge hotspots. This mapping was supplemented with a qualitative examination of the field using five major terms: Computational Sociolinguistics, Natural Language Processing (NLP), ChatGPT, language, and machine learning, enabling us to track the prevalent themes and concepts structuring the field. The results indicate that scholarly interest in the sociolinguistic aspects of AI-mediated communication has grown substantially, especially with respect to language ideology, identity construction, and algorithmic influence on discourse. Rather than portraying computational methods as passive, neutral tools, the findings suggest that technologies such as NLP and large language models both reproduce and destabilize linguistic hierarchies, raising critical questions about representation, diversity, and equity in digital space. By mapping the intersection of AI and Sociolinguistics through a combination of bibliometric mapping and keyword-based interpretation, this work gives an overview of how the field has evolved over time. The findings also imply that debates about ethical and culturally inclusive AI design are rising to prominence in the literature.

Article Article ID: 2254

The Pragmatic and Semiotic Role of Emojis in Contemporary Digital Communication

The paper is situated within the dynamic relationship between language and computing, with the central objective of advancing knowledge about how individuals interact through digitally mediated communication. Specifically, the paper explores the use of emojis as a tool of digitally mediated communication from a linguistic perspective, particularly within a pragmatic and sociolinguistic framework, among Jordanian university students. The main objectives cover four dimensions: determining how members of this community use emojis, explaining subjects' attitudes towards emojis in online discourse, understanding the meanings of a selected list of emojis from the subjects' point of view, and exploring possible gender differences within Jordanian culture. Methodologically, this work followed a triangulation approach using three research instruments: an online survey distributed across the sample population of 500 Jordanian university students of both genders, corpus analysis, and expert interviews. The sociolinguistic strand of the analysis, which focused on gender as a social variable, revealed some effects of gender on emoji preferences and interpretations among participants. Finally, the paper contextualizes its findings within a general theoretical framework of three language functions, namely the phatic, conative, and expressive functions, situated within a Jordanian cultural context. It concludes by foregrounding the need for interdisciplinary collaboration in exploring digitally mediated communication from linguistic vantage points, particularly in a changing landscape where language and technology intersect.

Review Article ID: 1745

A Comprehensive Review of Memory Architectures and Network Interfaces and Advancements in High-Bandwidth and Low-Latency Systems

This review surveys significant advances in the architecture of memory systems, whose enhancement has been among the central concerns in building systems with high-bandwidth and low-latency requirements. It analyzes several areas, including virtually pipelined systems, millimeter-wave interfaces, and enhanced memory controllers, that address the requirements of high-performance computing (HPC). Among these developments, die-stacked DRAM (Dynamic Random Access Memory) systems offer a substantial bandwidth boost, while scalable networks-on-chip address congestion and coherence issues in interconnecting cores. The literature survey also identifies major developments in message-passing mechanisms and memory management optimizations for distributed systems, which have proved critical to efficient data transfer and processing in massively parallel computers. Advanced multi-port memory controllers are likewise observed to improve bandwidth and resource utilization. The review, however, points out challenges that remain, including the limited scalability of centralized memory systems, the latency of high-radix interconnects, and the integration problems of heterogeneous computing systems. The evaluation highlights the need for new ways to overcome the limitations of current memory management techniques. Future research will center on using machine learning to anticipate workloads and on adaptive hybrid memory allocators that distribute allocations across memory types dynamically. The goal of these techniques is to improve performance, bandwidth, latency, and energy efficiency in high-performance computing systems.

Article Article ID: 2315

User Interface Design of the JOFF Evaluation Application as a Derivative of DIVAYANA Evaluation Model

The user interface design of an application is not limited to applications in the field of informatics; it also applies to the many fields that utilize information technology, including educational evaluation. A good evaluation application is the result of good user interface design. The aim of this study is to demonstrate the user interface design of the JOFF (Justification–Observation–Finalization–Functionalization) evaluation application, which is derived from the DIVAYANA (Description–Input–Verification–Action–Yack–Analysis–Nominate–Actualization) evaluation model. The development procedure used an adapted version of the Borg and Gall model, focusing on three stages: design development, preliminary trial, and revision based on the initial trial results. Initial testing of the user interface design of the JOFF evaluation application involved 44 respondents: four experts and 40 IT vocational school teachers from several IT vocational schools in Bali. The data collection instrument in this study was a questionnaire, used to obtain quantitative data from the subjects (respondents) who completed the initial trials. The quantitative data gathered from the preliminary trials were then analyzed using quantitative descriptive techniques. The results of the study confirmed that the user interface design of the JOFF evaluation application, a derivative of the DIVAYANA evaluation model, is classified as excellent, with a quality percentage of 87.81%. The contribution of this research to the field of educational evaluation is new knowledge about the importance of good user interface design.

Article Article ID: 2265

Designing and Governing Trustworthy AI Marketing Systems in Educational Technology: A Managerial and Implementation Framework

This study proposes a dual-dimensional framework for designing trustworthy AI marketing systems in the EdTech industry, using a conceptual framework methodology based on thematic content analysis and theory-driven, evidence-based validation against scholarly literature, industry reports, and policies from leading EdTech organizations worldwide. The framework identifies trustworthiness as the key construct integrating two interrelated dimensions. The governing dimension operates at the managerial level and includes governance principles based on an extended Unified Theory of Acceptance and Use of Technology (UTAUT) framework, together with personalization governance principles organized along three dimensions: intensity, tempo, and boundaries. The designing dimension operates at the implementation level and identifies the technical requirements of virtual sales personnel systems and AI promotion systems. Validation against known platform practices demonstrates a mixed pattern of alignment, with stronger support in regulatory-related areas and weaker support in governance-intensive domains where the framework extends current industry practice. The extension of UTAUT from personal acceptance to organizational governance represents a theoretical contribution, linking existing research on personal acceptance to this expanded applicability. The three-dimensional personalization governance model offers more detailed mechanisms than current one-dimensional approaches. For educational technology organizations, the framework provides systematic guidance for developing AI-based marketing systems that are trustworthy to users while remaining effective for organizational goals. Validation results identify tempo governance and recommendation explainability as areas the industry needs to develop for sustainable, effective user engagement.

Article Article ID: 2305

Digital Innovation and Supply Chain Performance: Investigating the Moderating Effect of Technology Usage on Risk Management-Resilience-Performance Linkages

This study aims to investigate the interrelations among digital innovation, risk management, resilience, and performance in supply chain contexts, in which technology utilization is conceptualized as a moderator. Based on the concepts of dynamic capabilities and the resource-based view, the proposed model suggests risk management and resilience as sequential mediators between digital innovation and performance. The model is empirically validated using Partial Least Squares Structural Equation Modeling (PLS-SEM) with a sample of 256 manufacturing and service organizations selected from the World Bank Enterprise Surveys dataset. Findings show that digital innovation has both direct (β = 0.218) and indirect effects on supply chain performance through risk management and resilience. Digital innovation exhibits the strongest association with supply chain risk management (β = 0.521), followed by risk management's effect on resilience (β = 0.489) and resilience's effect on performance (β = 0.372). Technology usage strengthens both the risk management–resilience path (β = 0.127) and the resilience–performance path (β = 0.143). Sequential mediation accounts for 53.3% of the total effect, as measured by the Variance Accounted For (VAF) index, confirming partial mediation. This study also contributes to the supply chain management literature by providing empirical specification for the integrated paths in which digital innovation drives performance through risk management and resilience capabilities, and technology application as a boundary condition in these paths. The results indicate that supply chain managers should integrate digital innovation investments with risk management and resilience capabilities for the best performance in supply chain management.

Article Article ID: 2321

Assessing the Adoption and Influence of Artificial Intelligence as a Learning Partner among English Learners in Hail

Generative artificial intelligence tools offer transformative potential for English language learners, yet the factors driving their adoption remain insufficiently understood, particularly in Arab higher education contexts. This study examined AI tool adoption among undergraduate English learners at the University of Hail, Saudi Arabia, using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) as the theoretical framework. A quantitative survey of 318 students tested nine hypothesized relationships between seven UTAUT2 constructs and two outcomes: behavioral intention and use behavior. Partial least squares structural equation modeling (PLS-SEM) revealed exceptional explanatory power, accounting for 80.1% of the variance in behavioral intention and 80.9% in use behavior. Seven of nine hypotheses were supported. Habit emerged as the dominant predictor, exerting the strongest total effect on use behavior (β = 0.518), operating through both conscious intention and automatic behavioral pathways. Price value, reconceptualized to capture time and effort costs rather than monetary ones, was the second-strongest predictor of intention, followed by hedonic motivation and performance expectancy. Contrary to expectations, effort expectancy and social influence were non-significant. These findings extend UTAUT2 to generative AI in language learning, challenge assumptions about universal predictor applicability, and offer practical guidance for educators and policymakers seeking to promote sustained, effective AI integration in English language education.

Article Article ID: 2322

From Traditional to Intelligent: How AI Transforms English Language Learning and Teaching

In recent years, artificial intelligence has developed rapidly, radically changing English language teaching and learning as the focus shifts from traditional modes to intelligent tools and software. In light of this, we adopted mixed research methods to investigate these changes. The research lasted 16 weeks and used a quasi-experimental design. Specifically, we explored the functions of AI in English learning and teaching. A total of 120 undergraduate students and 12 teachers participated in our research. Multiple data collection tools were used: a questionnaire survey, class observation, interviews, and analysis of learning behavior data. We found obvious improvements in several respects. For example, intelligent pronunciation correction systems improved scores by 29.4 points; AI writing assistant tools brought gains of 19.5 points; and conversational agents promoted oral-ability growth of 20.9 points, with d = 1.96. The adaptive learning platform achieved personalized recommendation accuracy of 87.3%, while differentiated instruction enabled low-proficiency learners to improve by 43.0%. Teacher roles shifted markedly from knowledge transmission toward facilitation, with direct instruction time declining from 62% to 28%. Despite these promising outcomes, challenges including algorithmic bias, technology overdependence, data privacy risks, and digital equity concerns warrant careful consideration. The study concludes that optimal English education requires the organic integration of technological efficiency and humanistic pedagogy.
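The oral-ability effect size reported above (d = 1.96) is a Cohen's d. A minimal sketch of the pooled-standard-deviation formulation, assuming that is the variant used; all numbers below are hypothetical:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical example: post-test mean 82 (sd 9) vs
# pre-test mean 64 (sd 9), n = 60 per group
print(round(cohens_d(82, 9, 60, 64, 9, 60), 2))  # prints 2.0
```

Values near 2 are very large by Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), which is why the reported gain is notable.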

Article Article ID: 2325

Digital Technology-Enabled Resource Decoupling and Economic Growth Quality in Ecological Agricultural Integration: Global Pathways

Using a triangulation method that combines evidence from academic papers, policies, patents, and other sources, this paper analyzes and compares 15 representative global cases in depth and reports for the first time that: (1) An ecological element contribution elasticity coefficient of 0.234 shows that natural capital has independent economic value; globally, the synergy level is 0.86, realizing a simultaneous output growth rate of 6% and a resource-efficiency improvement of 0.83. (2) Environment-friendly, economically feasible industrial prototypes were identified, namely dry fermentation (return on investment, ROI, 196‰), Dutch factory recycling (ROI 164‰), Israel's water-saving techniques (ROI 246‰), and others, with payback periods of 3–5 years and ROI of 18–25 per thousand. (3) Optimal threshold values are identified for technology development intensity, corporate social responsibility fulfillment, and the farmer diversification proportion, rather than assuming that more is always better. Development path guidance is provided for different countries and regions according to their actual situations. It is estimated that applying this framework could increase agricultural resource and energy efficiency by 15–25 percentage points and thus help achieve the UN Sustainable Development Goals (SDGs). Future research should also address the evaluation of system resilience under climate change scenarios, applications based on blockchain–AI technology integration, and how small-scale producers can participate in integration.

Review Article ID: 2136

The Impact of Artificial Intelligence Tools on Developing and Improving Academic Writing

This study explores how artificial intelligence (AI) tools influence the development and improvement of academic writing practices in higher education. To address this objective, a systematic literature review (SLR) was conducted to examine scholarly work related to the use of AI in academic writing. An initial search identified 199 documents, from which 32 studies were retained after applying predefined inclusion and exclusion criteria based on methodological rigor, relevance, and thematic alignment with the research focus. The review integrates systematic literature review procedures with bibliometric and narrative synthesis approaches to examine publication patterns, leading countries and institutions, dominant theoretical perspectives, and emerging research directions in this field. The findings indicate that AI tools increasingly support academic writing through functions such as language generation, editing assistance, idea organization, and improved research efficiency. At the same time, the literature highlights important ethical, methodological, and pedagogical challenges that accompany the growing adoption of AI technologies in academic environments. The analysis further identifies several research gaps, particularly in relation to academic integrity, institutional governance, and the development of AI literacy among students and researchers. In response, the study proposes a set of practical recommendations aimed at promoting responsible and effective use of AI in academic writing. These include the development of structured training programmes, clearer institutional guidelines on ethical AI use, and stronger collaboration among educational institutions to enhance responsible technological integration and support the advancement of scientific research.

Article Article ID: 2327

Machine Learning–Based Behavioral Analysis and Natural Language Mining for Computer Learning Development

Programming education continues to face significant challenges, with failure and dropout rates exceeding 30% in many introductory courses. Existing learning analytics approaches largely rely on static behavioral indicators derived from the Felder–Silverman Learning Style Model (FSLSM), which often fail to capture the temporal dynamics of learning and the syntactic complexity involved in programming activities. These limitations are particularly evident in detecting the Sequential/Global learning dimension and understanding how students interact with programming tasks over time. This study aims to address these limitations by proposing CAFNet (Crossmodal Attention Fusion Network), a multimodal learning analytics framework that integrates behavioral machine learning with natural language and code analysis. The proposed architecture combines Temporal Convolutional Networks to model behavioral indicators, CodeBERT for forum discourse representation, and Tree-Transformer models for Abstract Syntax Tree-based code analysis. A hierarchical cross-modal attention mechanism aligns these heterogeneous data sources, while Federated Supervised Contrastive Learning ensures privacy-preserving deployment across institutions under differential privacy constraints (ε = 0.5). The framework was evaluated using three heterogeneous datasets comprising 14,308 learners from programming education environments. Experimental results show that CAFNet achieved 91.7% classification accuracy with an AUC-ROC of 0.947, outperforming classical machine learning and deep learning baselines by 17.5%. The model achieved 94.1% accuracy for the Sequential/Global dimension, representing a major improvement over previous studies. Additionally, early at-risk prediction reached 88.9% accuracy at week four of the course. 
These findings demonstrate that integrating behavioral, linguistic, and programming data provides a scalable and privacy-compliant approach for intelligent educational systems supporting personalized learning and early academic intervention.
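The cross-modal attention mechanism described above aligns representations from one modality against another. A minimal single-head NumPy sketch, purely illustrative (CAFNet itself uses learned projections, multiple heads, and a hierarchical arrangement):

```python
import numpy as np

def cross_modal_attention(queries, keys, values):
    """Single-head scaled dot-product attention where queries come
    from one modality (e.g. behavioral features) and keys/values
    from another (e.g. code embeddings)."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)            # (n_q, n_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ values                             # (n_q, d_v)

# Illustrative shapes: 4 behavioral time steps attending over
# 6 code-token embeddings, all of dimension 8
rng = np.random.default_rng(0)
behav = rng.normal(size=(4, 8))
code = rng.normal(size=(6, 8))
fused = cross_modal_attention(behav, code, code)
print(fused.shape)  # prints (4, 8)
```

Each behavioral step thus receives a weighted summary of the code representation, which is the alignment step the fusion network builds on.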

Article Article ID: 2328

From Risk to Resilience: A Data-Driven Mediation Analysis of Supply Chain Risk Management and Firm Performance in Chinese Manufacturing

This paper investigates whether supply chain resilience mediates the relationship between supply chain risk management and firm performance in China's manufacturing industry, leveraging big data analytics and digital technologies to fill a theoretical gap concerning how risk management practices contribute to financial performance in the digital era. Drawing on resource-based theory and dynamic capabilities theory, the study constructs an integrated framework in which supply chain resilience serves as the key mediating variable translating diversification strategies into greater organizational effectiveness. Using panel data from 347 Chinese A-share listed manufacturing firms (2,418 firm-years, 2015–2023) drawn from the China Stock Market and Accounting Research (CSMAR) database, the study employs panel regression and bootstrap mediation analysis to test the proposed framework. Supply chain risk management is operationalized through supplier and customer diversification, while supply chain resilience is captured by financial slack, operational efficiency, supply chain redundancy, and relationship stability. The results confirm that supply chain risk management significantly enhances firm performance, with supply chain resilience serving as a significant partial mediator that substantially channels the effect. The findings are robust across alternative variable specifications, lagged models, and winsorized samples, and suggest that building supply chain resilience is the primary channel through which risk management investments translate into improved financial performance.
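Bootstrap mediation analysis of the kind described estimates a confidence interval for the indirect effect (the product of the a and b paths) by resampling. A minimal percentile-bootstrap sketch on synthetic data; variable names and coefficients are illustrative, not the paper's:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b, where
    a is the slope of m ~ x and b the partial slope of y ~ m + x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]         # slope of m on x
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]  # slope of y on m | x
        est[i] = a * b
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)

# Synthetic data with a true indirect effect of 0.6 * 0.5 = 0.3
rng = np.random.default_rng(1)
x = rng.normal(size=500)
m = 0.6 * x + rng.normal(scale=0.5, size=500)
y = 0.5 * m + 0.2 * x + rng.normal(scale=0.5, size=500)
effect, ci = bootstrap_indirect(x, m, y)
print(ci[0] > 0)  # prints True: CI excludes zero, so mediation is significant
```

If the direct path (here 0.2) also remains significant alongside a CI that excludes zero, the mediation is partial, which matches the pattern reported above.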

Article Article ID: 2081

From Self-Learners to System-Dependents: The Negative Effects of AI on EFL Autonomy in Jordan

Generative AI technologies such as ChatGPT, Grammarly, and DeepL have changed the way language learning occurs. Although such technologies can provide valuable linguistic scaffolding, overuse can undermine learner autonomy by diminishing cognitive activity and metacognitive monitoring. This study focuses on the impact of AI-mediated learning on the autonomy of Jordanian English as a Foreign Language (EFL) university students, filling a gap in the empirical research on cognitive offloading in non-Western learning institutions. A mixed-methods design was used, involving a quantitative survey (N = 376) to measure the frequency of AI use and the metacognitive strategies involved, and semi-structured interviews (n = 22) to understand students' perceptions. Demographic variables were controlled through hierarchical regression, and thematic codes from the qualitative data were subjected to inter-rater validation. The quantitative results show that intensive AI use is a significant predictor of reduced self-regulation, confidence in error correction, and vocabulary retention (b = −0.46, p < 0.001). Qualitative data reveal a paradox of dependence: AI provides immediate feedback while tending to discourage strategic interaction. Notably, students who combined reflective strategies with moderate AI use showed higher autonomy levels. Pedagogically, the findings suggest that digital literacy should be explicitly trained and that AI-based scaffolds should be designed so that they do not interfere with learner agency in EFL settings.
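Hierarchical regression of the kind described enters demographic controls in step one and the AI-use predictor in step two, reporting the incremental variance explained. A minimal NumPy sketch on synthetic data; all names and coefficients are illustrative, not taken from the study:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 376
age = rng.normal(size=n)            # step-1 demographic control
ai_use = rng.normal(size=n)         # step-2 predictor of interest
autonomy = -0.46 * ai_use + 0.1 * age + rng.normal(scale=0.8, size=n)

r2_step1 = r_squared(age.reshape(-1, 1), autonomy)
r2_step2 = r_squared(np.column_stack([age, ai_use]), autonomy)
delta_r2 = r2_step2 - r2_step1      # variance uniquely explained by AI use
print(delta_r2 > 0)  # prints True: step 2 adds explanatory power
```

A nonzero ΔR² after controls is what licenses the claim that AI use predicts autonomy over and above demographics.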