Amaliya, Halimatus Sa'diyah (2023) Pengarang Musik Otomatis Berbasis Frekuensi Nada dan Recurrent Neural Network (RNN) [Automatic Music Composer Based on Tone Frequency and Recurrent Neural Network (RNN)]. Undergraduate thesis, Institut Teknologi Telkom Purwokerto.
Files:
- Cover (1).pdf (1MB)
- Abstract.pdf (71kB)
- Abstrak.pdf (71kB)
- BAB I.pdf (89kB)
- BAB II.pdf (174kB)
- BAB III.pdf (196kB)
- BAB IV.pdf (120kB, restricted to registered users)
- BAB V.pdf (8kB)
- Daftar Pustaka.pdf (142kB)
- Lampiran.pdf (4kB, restricted to registered users)
Abstract
Music serves many purposes, which is why people are drawn to it; for music lovers, listening to music is natural, as music has become nourishment for the soul. Composing music, however, is different from listening to it: it requires knowledge of composition. Unfortunately, many people have little musical knowledge and find music difficult to learn, even though most possess a high level of creativity and therefore the potential to compose. Advances in technology, particularly Deep Learning, now make it possible to automate creative work such as music generation. This work uses the Recurrent Neural Network (RNN) algorithm to produce music. The dataset consists of 100 pop songs in which vocals are separated from background music. Each track is segmented in a windowing stage, features are extracted with the Discrete Cosine Transform (DCT), and the resulting frequencies are grouped by the K-Means algorithm into 200 categories to produce notes. Training is then carried out under two schemes: the first without Custom Training, and the second with Custom Training, with the RNN as the main model predicting the next output. Sparse Categorical Crossentropy is used as the loss function so that inputs are mapped to the correct target class. The smallest training loss obtained is 3.005 and the smallest validation loss is 3.3566. The overall model fits well, as it is a best-fitting model that exhibits neither underfitting nor overfitting.

Keywords: Music, Automation, Deep Learning, RNN, Tone Frequency
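The preprocessing pipeline the abstract describes (windowing the signal, DCT feature extraction, K-Means clustering into note categories, and a sparse categorical crossentropy loss) can be sketched in pure NumPy. This is an illustrative reconstruction, not the thesis code: the function names, the 256-sample window, 128-sample hop, and the 8-cluster toy setting are all assumptions (the thesis uses 200 clusters and trains an RNN on the resulting label sequences).

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Windowing stage: split a 1-D signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def dct2(frames):
    """DCT-II of each frame (feature extraction), orthonormal scaling."""
    N = frames.shape[1]
    n = np.arange(N)
    k = n[:, None]
    basis = np.cos(np.pi * (2 * n + 1) * k / (2 * N))  # basis[k, n]
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return frames @ basis.T * scale

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: cluster DCT features into k note categories."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def sparse_categorical_crossentropy(logits, targets):
    """Mean negative log-probability of the target class under softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

# Toy run: push a synthetic "audio" signal through the pipeline.
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000) + 0.1 * rng.standard_normal(8000)
frames = frame_signal(sig, 256, 128)
feats = dct2(frames)
labels = kmeans(feats, 8)  # 8 note categories here; the thesis uses 200
```

The resulting `labels` sequence is what an RNN would be trained on: given labels at steps 1..t, predict the label at step t+1, with sparse categorical crossentropy scoring the prediction against the true next category.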
| Item Type: | Thesis (Undergraduate Thesis) |
|---|---|
| Subjects: | T Technology > T Technology (General) |
| Divisions: | Faculty of Informatics > Informatics Engineering |
| Depositing User: | repository staff |
| Date Deposited: | 21 Jun 2024 03:30 |
| Last Modified: | 21 Jun 2024 03:30 |
| URI: | http://repository.ittelkom-pwt.ac.id/id/eprint/10515 |