
controlled by the training parameters embedded in computer programs, meaning that errors can occur only in the pitch of the musical sound. This characteristic requires systematic auditory reflection while simultaneously involving the input of musical notation, which is linked to observation and its transfer into an interactive digital environment. The outcome in both models is musical activity: perception and performance. Cross-modal priming also ensures the cross-modality of musical activities. During the perception phase, auditory representations of the theme from the instrumental piece are formed. Auditory observation is supported by a visualization of the sound structure, which also enables solfège (performance). The result of notating a song is its performance, which combines perception and execution. This standardizes the process of developing the skills necessary for engaging in musical activities.
This structured approach integrates into an operational model that transforms into musical competence. Musical competence is equivalent to musical behavior, as all independently initiated forms of musical activity, resulting from the internalization of the educational event, stem from competencies acquired through cross-modal priming. A survey of attitudes toward applying specialized software in music education in kindergartens and grades 1–4 found that 84% of students indicated they would use digital resources to support educational activities: 76% for perception activities and 84% for song performance.
Artificial Intelligence (AI) emerges as a natural extension of the development of digital technologies. An increasing number of algorithms are embedded in technology to ease and optimize human activity. A new, rapidly growing industry is being built around various AI services offered worldwide for the
creation, processing, and analysis of music.
The future of cognitive studies
Virtual models of musical activity are supported by specialized educational software products and
AI. Musical communication is built on the interaction between sensory stimuli and the analysis of the sonorous structure in terms of pitch, meter-rhythm, timbre, and dynamic responsiveness, synchronized with
the semantic value of the musical piece for the recipient.
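Purely as an illustrative sketch (not part of the cited sources), these parameters of a musical event can be mirrored in a simple data structure; the field names and the example values below are assumptions chosen for the illustration, not a representation used by any of the tools discussed here.

from dataclasses import dataclass

# Illustrative only: a minimal container for the parameters named above
# (pitch, meter-rhythm, timbre, dynamics); all names are assumptions.
@dataclass
class MusicalEvent:
    pitch_hz: float         # pitch of the sound, in hertz
    duration_beats: float   # metric-rhythmic value, in beats
    timbre: str             # e.g. an instrument name
    dynamic: str            # e.g. "p", "mf", "f"

# Example: a quarter-note a1 at 441 Hz, played mezzo-forte on a piano
event = MusicalEvent(pitch_hz=441.0, duration_beats=1.0, timbre="piano", dynamic="mf")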
Platforms such as ChatGPT and ChatGPT Plus, developed by OpenAI, are based on GPT models, are available at chat.openai.com, and generate information in response to strategically formulated questions (Holster, 2024).
Applications have been developed that analyze musical structures by recognizing audio pieces (Shazam),
support sound environments to enhance intellectual activity (Brain FM), or analyze users’ aesthetic preferences (Spotify). A major focus of many projects is exploring AI’s ability to autonomously generate music or
collaborate with composers. An example is the European Research Council-funded project Flow Machines
by Sony CSL (Sony CSL, 1993–2025). Google Magenta (Magenta.js, 2023) is a research project launched
by Google that investigates the role of machine learning in creating synthetic music products. Algorithms
for generating songs have been developed. Aiva, created by AIVA Technologies, has an AI script capable
of composing “emotional” soundtracks for advertisements, video games, or films, as well as creating variations of existing songs. The first generator to create a musical structure based on emotional analysis is
Melodrive (Melodrive, 2025), while IBM Watson Beat is a project with the ability to harmonize melodies.
When analyzing the digital environment, it is important to consider that AI is rapidly gaining popularity. It learns communication models through text and integrates into specialized music software. The integrative potential of AI can be explored in the fields of text processing and sound generation using code in various programming languages. From the perspective of the syntax of a musical event, this shifts the focus to an alternative symbolic system, distinct from traditional musical notation. In Figure 2, an example code snippet is presented for generating a musical tone with a frequency of 441 Hz, corresponding to the note "a¹", with a duration equivalent to a quarter note, alongside a notation of the same event in the specialized scoring software MuseScore.
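The code in Figure 2 itself is not reproduced here; as a rough counterpart, the following minimal Python sketch synthesizes the same event, a 441 Hz sine tone lasting one quarter note. The tempo of 120 beats per minute, the sample rate of 44100 Hz, and the sounddevice library used for playback are assumptions made for the illustration, not details taken from the figure.

import numpy as np
import sounddevice as sd  # assumed playback library, not taken from Figure 2

SAMPLE_RATE = 44100                   # samples per second (assumption)
FREQUENCY = 441.0                     # pitch of the note "a1" described in the text, in Hz
TEMPO_BPM = 120                       # assumed tempo; a quarter note then lasts 0.5 s
QUARTER_NOTE_SEC = 60.0 / TEMPO_BPM   # duration of one quarter note, in seconds

def sine_tone(frequency_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Return a sine tone of the given frequency and duration as a NumPy array."""
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    return 0.5 * np.sin(2.0 * np.pi * frequency_hz * t)

tone = sine_tone(FREQUENCY, QUARTER_NOTE_SEC)
sd.play(tone, SAMPLE_RATE)            # play through the default audio output
sd.wait()                             # block until the quarter note has finished

At the assumed tempo the duration follows directly from 60 / TEMPO_BPM = 0.5 seconds; changing the tempo rescales the quarter note, while the frequency value alone fixes the pitch of the event.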