In AI music generation, the key to distinguishing creation modes isn't the final style but the initial 'Source of Truth'. This threefold classification determines your prompt strategy and how roles are distributed.
Lyric-led
The logic flows from text structure to musical emotion. Start with the lyrics, lock the framework, then translate it into music.
- Flow: Theme → Lyric Gen → Structure Confirmation → Style Translation → Music Gen
- Apply: Strong narrative, conceptual, or poetic adaptation
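The lyric-led flow can be sketched as a simple pipeline where each stage's output becomes the next stage's prompt. This is a minimal illustration, not a real integration: every function name here is hypothetical, and the bodies are stand-ins for actual model calls.

```python
# Hypothetical sketch of the lyric-led flow:
# Theme -> Lyric Gen -> Structure Confirmation -> Style Translation -> Music Gen.
# No real model is called; the point is the hand-off order.

def generate_lyrics(theme: str) -> str:
    # Stage 1: theme -> draft lyrics (stand-in for an LLM call)
    return f"[Verse] A song about {theme}\n[Chorus] {theme}, again and again"

def confirm_structure(lyrics: str) -> list[str]:
    # Stage 2: lock the section layout before touching any music
    return [line.split("]")[0].strip("[") for line in lyrics.splitlines()]

def translate_style(sections: list[str], theme: str) -> str:
    # Stage 3: turn the confirmed structure into a music-generation prompt
    return f"{len(sections)}-section ballad, mood: {theme}"

def lyric_led(theme: str) -> str:
    lyrics = generate_lyrics(theme)
    sections = confirm_structure(lyrics)
    return translate_style(sections, theme)

print(lyric_led("leaving home"))  # -> "2-section ballad, mood: leaving home"
```

The key design point is that structure is frozen before style translation, so the music prompt inherits the lyric's skeleton rather than improvising its own.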
Music-led
Find the story inside the auditory vibe. Generate the mood or melody first, then let the lyrics 'grow' out of the rhythm.
- Flow: Vibe → Music Gen → Structure Analysis → Lyric Emotion Mapping → Lyric Gen
- Apply: Atmospheric-first, Beat making, or emotional release
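The music-led flow runs the same hand-off in reverse: the vibe produces music first, and lyric directives are derived from its analyzed structure. Again a hypothetical sketch; the function names and the fixed section layout are assumptions standing in for real generation and audio analysis.

```python
# Hypothetical sketch of the music-led flow:
# Vibe -> Music Gen -> Structure Analysis -> Lyric Emotion Mapping -> Lyric Gen.

def generate_music_prompt(vibe: str) -> str:
    # Stage 1: vibe -> a music-generation prompt (stand-in)
    return f"slow {vibe} groove, 80 bpm, minor key"

def analyze_structure(music_prompt: str) -> list[str]:
    # Stage 2: a real workflow would inspect the generated track;
    # here we assume a fixed intro/verse/chorus layout.
    return ["Intro", "Verse", "Chorus"]

def map_emotion(section: str, vibe: str) -> str:
    # Stage 3: one lyric-writing directive per section, tied to the vibe
    return f"{section}: write sparse lines that feel {vibe}"

def music_led(vibe: str) -> list[str]:
    prompt = generate_music_prompt(vibe)
    sections = analyze_structure(prompt)
    return [map_emotion(s, vibe) for s in sections]

for directive in music_led("rainy"):
    print(directive)
```

Here the lyric generator never sees the raw theme, only per-section emotional directives, which is what keeps the words subordinate to the groove.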
Co-evolution
Converge on blurry imagery from both directions at once. Let lyrics and music iterate together, capturing inspiration as they evolve.
- Flow: Blurry Imagery → Simultaneous Gen → Structural Convergence → Version Evolution
- Apply: Experimental creation, flash inspiration, or style mashups
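The co-evolution flow is less a pipeline than a loop: each round, the lyric draft is refined against the current music draft and vice versa. A minimal sketch, assuming stand-in refine functions and a fixed round count in place of a real convergence check.

```python
# Hypothetical sketch of the co-evolution flow:
# Blurry Imagery -> Simultaneous Gen -> Structural Convergence -> Version Evolution.

def refine_lyrics(lyrics: str, music: str, round_no: int) -> str:
    # Stand-in: the lyric draft is reshaped by the current music draft
    return f"{lyrics} | v{round_no} shaped by music"

def refine_music(music: str, lyrics: str, round_no: int) -> str:
    # Stand-in: the music draft is reshaped by the current lyric draft
    return f"{music} | v{round_no} shaped by lyrics"

def co_evolve(imagery: str, rounds: int = 3) -> tuple[str, str]:
    # Both drafts start from the same blurry imagery
    lyrics, music = f"draft: {imagery}", f"vibe: {imagery}"
    for n in range(1, rounds + 1):
        lyrics = refine_lyrics(lyrics, music, n)  # lyrics hear the music
        music = refine_music(music, lyrics, n)    # music reads the lyrics
    return lyrics, music

lyrics, music = co_evolve("neon rain")
print(lyrics)
```

Unlike the two one-directional modes, neither side is the source of truth here; each version pair is a checkpoint you can branch from when a mashup or flash of inspiration takes the piece elsewhere.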
Daiwanmaru's Insight
Your mode dictates your Agent's role:
Lyric-led requires a STRONG Music Translator.
Music-led requires a STRONG Lyric Mapper.