The current narrative around AI and music is often binary: it’s either the end of human creativity or a magical "make song" button for amateurs. But for those of us who have spent years honing our craft, the reality is far more nuanced. AI isn't my replacement; it's my collaborator, specifically filling the gaps in my technical skillset to help me realize the vision I've had for years.
My journey into music didn't start with a ChatGPT prompt. It started with a pen and paper, long before "generative AI" was a buzzword.
The Foundation: 15 Years of Words and Beats
I began writing lyrics and poetry back in 2009, at the age of 19. For me, the emotion and narrative structure of a song have always been paramount. Two years later, in 2011, I started teaching myself the art of mixing.
While music has always remained a deeply ingrained hobby rather than a primary profession, those years were crucial. They taught me about song structure, rhythm, and the patience required to layer sounds.
However, I always faced a significant bottleneck: my voice. I can write the words and hear the melody in my head, but I simply lack the vocal ability to perform them. For years, my songs remained unfinished demos or instrumentals, waiting for a vocalist who never arrived.
That is, until now.
My Creative Workflow: Human-Led, AI-Augmented
I do not believe in typing "write a sad piano song" into an AI generator and calling it my own. My process is deeply personal and still relies heavily on traditional music theory and production techniques.
Here is exactly how I compose music, integrating AI as a strategic tool within a professional workflow.
1. The Composition Starts at the Piano
Before I touch a computer, I start with the notes. Having learned to read and play from notation over the years, I compose the core melody and rhythm first at the piano (or via a MIDI controller). This ensures that the song has a strong, emotional, human foundation. I determine the key, the chord progression, and the bridge before moving forward.
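For readers who like to see the idea in code: here is a minimal sketch of what this stage produces, a chord progression captured as a MIDI file that can be dragged straight into a DAW. It assumes the Python mido library, and the A minor progression and voicings are purely illustrative, not taken from an actual song.

```python
# A minimal sketch of capturing a composed chord progression as MIDI.
# Assumes the `mido` library (pip install mido); the progression and
# note values are illustrative placeholders.
from mido import Message, MidiFile, MidiTrack

# i-VI-III-VII in A minor: Am, F, C, G (root-position triads, MIDI note numbers)
PROGRESSION = [
    [57, 60, 64],  # A minor  (A3, C4, E4)
    [53, 57, 60],  # F major  (F3, A3, C4)
    [48, 52, 55],  # C major  (C3, E3, G3)
    [55, 59, 62],  # G major  (G3, B3, D4)
]

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

TICKS_PER_BAR = 4 * mid.ticks_per_beat  # one whole-note chord per bar of 4/4

for chord in PROGRESSION:
    # All note_on events share the same tick, so the chord sounds together.
    for note in chord:
        track.append(Message('note_on', note=note, velocity=80, time=0))
    # The first note_off carries the chord's duration; the rest follow at once.
    track.append(Message('note_off', note=chord[0], velocity=0, time=TICKS_PER_BAR))
    for note in chord[1:]:
        track.append(Message('note_off', note=note, velocity=0, time=0))

mid.save('progression.mid')  # drag this into the DAW as the song's skeleton
```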
2. Structuring in FL Studio
Once the skeleton of the song is composed, I move into my DAW (Digital Audio Workstation), FL Studio. This is where the heavy lifting happens, utilizing the mixing skills I've been developing since 2011. I build the arrangement, program the drums, layer the synthesizers or acoustic instruments, and create the overall sonic landscape.
At this stage, the track is 80% finished, but it’s still an instrumental.
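The drum programming in particular is plain step sequencing, the same grid-of-steps idea as FL Studio's channel rack. As a rough illustration (again with mido, and with a deliberately generic kick/snare/hat pattern), one bar of a 16-step pattern rendered to MIDI looks like this:

```python
# A rough stand-in for channel-rack drum programming: a 16-step grid
# rendered to a MIDI bar. Pattern is illustrative; note numbers follow
# the General MIDI percussion map on channel 10 (zero-indexed as 9).
from mido import Message, MidiFile, MidiTrack

PATTERN = {
    36: '1000100010001000',  # kick on every quarter note
    38: '0000100000001000',  # snare on beats 2 and 4
    42: '1010101010101010',  # closed hi-hat on every 8th
}

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)
step_ticks = mid.ticks_per_beat // 4  # one 16th note per step

events = []  # (absolute tick, note, event type)
for note, steps in PATTERN.items():
    for i, hit in enumerate(steps):
        if hit == '1':
            events.append((i * step_ticks, note, 'note_on'))
            events.append((i * step_ticks + step_ticks // 2, note, 'note_off'))

# MIDI tracks store delta times, so sort by absolute tick first.
events.sort(key=lambda e: e[0])
last_tick = 0
for tick, note, kind in events:
    track.append(Message(kind, channel=9, note=note,
                         velocity=90 if kind == 'note_on' else 0,
                         time=tick - last_tick))
    last_tick = tick

mid.save('drum_pattern.mid')
```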
3. Solving the Vocal Problem with Mureka and SUNO
This is where the paradigm shift occurs. Instead of hiring a session singer (which is often cost-prohibitive for a hobbyist), I leverage AI vocal tools.
Tools like Mureka and SUNO have become indispensable, but not for generating entire songs from scratch. I use them as "vocal synthesizers."
I feed my pre-written lyrics (refined from my years of writing since 2009) into the tool. I then instruct the AI on the specific melody I composed at the piano, the desired vocal timbre, and the emotional delivery.
It's about steering the technology with precision. I might generate dozens of takes for a single verse, listening for the right inflection, just as a producer would direct a human singer in a booth.
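Managing dozens of takes calls for a little housekeeping. As a hypothetical helper, assuming each take is exported as a WAV into one folder per song section, a short script using the pydub library can report each take's length and average level, so obvious misfires get discarded before the real critical listening begins:

```python
# A hypothetical triage helper for dozens of AI-generated vocal takes.
# Assumes one folder of WAV exports per song section; uses pydub
# (pip install pydub) to report duration and average level.
from pathlib import Path
from pydub import AudioSegment

TAKES_DIR = Path('takes/verse1')  # illustrative layout, not a fixed convention

report = []
for wav in sorted(TAKES_DIR.glob('*.wav')):
    take = AudioSegment.from_wav(str(wav))
    report.append((wav.name, len(take) / 1000, take.dBFS))  # len() is in ms

# Loudest first; unusually quiet or short takes usually failed outright.
for name, seconds, dbfs in sorted(report, key=lambda r: r[2], reverse=True):
    print(f'{name:30s} {seconds:6.1f}s {dbfs:6.1f} dBFS')
```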
4. Post-Production and Final Mix
The raw AI vocal stems are then brought back into FL Studio. They are EQ'd, compressed, and treated with reverb, then mixed into the track so they sit perfectly within the sonic space I created. This final stage requires a human ear and a solid understanding of mixing principles, skills that AI cannot yet fully replicate.
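I do this processing inside FL Studio, but as a simplified stand-in, the same first-pass cleanup (high-pass filtering plus gentle compression) can be sketched with pydub's built-in effects. The file names, cutoff, and compressor settings below are illustrative starting points, not my actual chain:

```python
# A simplified stand-in for the first-pass vocal cleanup described above:
# high-pass the AI vocal stem and tame its dynamics before the final mix.
# Settings and paths are illustrative placeholders.
from pydub import AudioSegment
from pydub.effects import compress_dynamic_range

vocal = AudioSegment.from_wav('takes/verse1/take_07.wav')  # hypothetical file

# Roll off rumble and low-frequency artifacts below the vocal range.
vocal = vocal.high_pass_filter(100)  # cutoff in Hz

# Gentle compression so the vocal sits consistently in the mix.
vocal = compress_dynamic_range(vocal, threshold=-18.0, ratio=3.0,
                               attack=5.0, release=60.0)

vocal.export('stems/verse1_vocal_processed.wav', format='wav')
```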
The Verdict: Augmentation, Not Replacement
Using AI in music production makes me a more capable producer. It allows me to bypass my physical limitations (my singing voice) and bring to life the poems I wrote when I was 19, backed by the production skills I've honed since 2011.
If you are a musician afraid of AI, my advice is to stop looking at it as a threat and start looking at it as the ultimate production tool. Don't just write prompts; write music, and use AI to help you finish it.

