UX writing in a multi-modal world
UX writing is a conversation—one that’s getting louder and more layered. In a multi-modal world, your words don’t stay flat on a screen. They stretch across voice, gestures, visuals, and even the buzz of a watch. It’s a dance of adaptation, and as a UX writer, you’re the choreographer.
Start with the core. A button says “Save” on a mobile app: clean, direct. Now, a user barks “Save my work” at a smart speaker. Same intent, but a different rhythm. Later, they tap a wearable, and a quick “Saved” hums back. Multi-modal means your writing flexes for each mode without losing its spine.
Voice is the wild card. Users ramble—“Can you, uh, save this for me?”—so you write prompts that feel human, not robotic. “Sure, I’ll save it” beats “Command executed.” Screens demand less. “Next” works where “Please proceed” clogs the flow. Haptics? You’re silent, leaning on a vibration to say “Done.” Each mode is its own dialect.
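One way to picture “each mode is its own dialect” is a simple lookup: one intent, several renderings. This is a minimal sketch, not a real framework; the names `Modality`, `COPY`, and `message_for` are all hypothetical.

```python
# Sketch: one intent ("work saved"), different copy per modality.
# All names here are hypothetical, for illustration only.
from enum import Enum
from typing import Optional

class Modality(Enum):
    SCREEN = "screen"
    VOICE = "voice"
    HAPTIC = "haptic"

COPY = {
    Modality.SCREEN: "Saved",              # terse label for a toast or button state
    Modality.VOICE: "Sure, I'll save it",  # conversational, human-sounding reply
    Modality.HAPTIC: None,                 # no words: a short vibration says "Done"
}

def message_for(modality: Modality) -> Optional[str]:
    """Return the copy for a modality, or None when the mode is nonverbal."""
    return COPY[modality]

print(message_for(Modality.VOICE))  # prints "Sure, I'll save it"
```

The point of the table isn’t the code; it’s that the intent stays fixed while the surface wording (or silence) changes per channel.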
The trick is testing. Real feedback, not guesswork, sharpens the edges. Tools like prototypes or A/B splits help, but nothing beats hearing someone stumble mid-sentence.
This isn’t chaos—it’s opportunity. Multi-modal writing ties a product together, whether someone’s swiping, talking, or feeling it. You’re not locked to pixels anymore. You’re crafting a thread that weaves through senses, guiding users with words that fit the moment. That’s the craft now: fluid, alive, everywhere.