How I Made AI Almost Human (and Survived)
I was staring at a Midjourney still — waist-deep water, dusk light, hair clinging, shoulders tense — thinking:
“Sure, this looks nice… but it doesn’t breathe. It’s a painting. A very pretty, overcooked digital painting.”

Then inspiration hit from a serious reference: The Emotion Thesaurus. 📚
Not “a little book” — this is a substantial, dense guide to human micro-behaviors, subtle gestures, postures, and cues. I thought: if humans read emotional subtext from tiny gestures, maybe AI could animate it — if I translated those cues properly. Naturally, I had ChatGPT dump lots of behavioral micro-cues into a Midjourney prompt and pretended I was Dr. Frankenstein, but with better color grading.

The before prompt (pretty, but static):
Ophelia standing in dark water at dusk, long hair drifting, looking slightly off-camera, soft ripples, muted tones, cinematic, reflective mood

The after prompt (behavior over mood):
woman standing motionless in waist-deep dark water, shoulders slightly drawn inward, fingertips barely breaking the surface as they drift outward with the current, chin tilted as if listening, gaze unfixed just past the camera, a slow breath visible in the rise of her collarbone, long hair clinging in damp strands, soft ripples spreading unevenly around her, overcast dusk light, desaturated blue-gray tones, stillness carrying quiet tension
Step 1: The Still That Started It All
Ophelia — my cinematic alter ego — appeared poised and fragile. Every pixel carried subtle tension: water ripples, collarbone rising slightly with breath, fingers barely disturbing the surface.
Pro tip: If you squint just right, you can almost convince yourself she’s thinking something deep and existential.
Problem? Still images are terrible at conveying life. Like mannequins with perfect posture but zero personality. Enter: SeeDance.
Step 2: The Naive “Animate Everything” Mistake
First attempt? Disaster. Feed a still into SeeDance and it may freeze everything like a wax museum, or over-interpret and exaggerate, producing something like an epileptic fish trying to meditate. Not ideal.
I needed restraint with wiggle — micro-motion that signals life without turning the scene into a chaotic mess.
Step 3: Layered Micro-Motion — Environmental First
I started layering like a geeky scientist discovering rhythm for the first time:
- Environmental motion: gentle, uneven ripples from her fingertips.
- Physiological motion: slight finger drift, subtle hair movement, barely noticeable shoulder rise.
- Camera: slow push-in, almost imperceptible, no sudden motion.
Watching it… okay. Not alive, not dead, just existing. A digital ghost in water.
Step 4: The Blink Experiment (Sort Of…)
I introduced micro-events: optional blinks, subtle gaze shifts, tiny exhales — the signals humans unconsciously read.
The catch: I never got a fully realized blink. Only half-closures, faint downward-eye gestures, slight squints. Still, it was fascinating. Fingers shifted fractionally. Water ripples responded. Faint downward glances, tentative off-frame reaches, subtle darkening under her eyes — she wasn’t just moving. She was present.
Lesson learned: you don’t need a perfect blink to feel life. Half-measures, hints, micro-imperfections — your brain fills in the rest.
locked composition*, preserve original framing, begin in complete stillness, very subtle environmental movement emerges — gentle uneven ripples spreading outward from her fingertips, her fingers drift slightly with the current, a barely perceptible inhale lifts her shoulders, hair shifts lightly against her neck, camera performs an extremely slow push-in, almost imperceptible, no sudden motion, no change in pose, after a long still moment a single slow natural blink, subtle and unforced, maintain subdued continuous movement and quiet tension
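The “after a long still moment” and “subtle and unforced” phrasing is doing probabilistic work: micro-events should be rare and never land on a predictable beat. SeeDance takes text prompts, not code, but here is a hypothetical Python sketch of how I think about spacing optional micro-events (the function name, probabilities, and event labels are all my own inventions):

```python
import random

def schedule_micro_events(duration_s=5.0, step_s=0.5,
                          blink_prob=0.06, gaze_prob=0.10, seed=None):
    """Roll for optional micro-events at each time step.

    Low per-step probabilities keep events rare and staggered,
    so the result reads as natural rather than metronomic.
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < duration_s:
        if rng.random() < blink_prob:
            events.append((round(t, 1), "half-blink"))
        elif rng.random() < gaze_prob:  # elif: never stack two events on one beat
            events.append((round(t, 1), "gaze shift"))
        t += step_s
    return events

print(schedule_micro_events(seed=7))
```

The point of the sketch: some runs produce no blink at all, and that is fine. Optional means optional.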
Step 5: Eureka — Layered Motion Is Everything
Layered micro-motion = the secret sauce.
A hierarchy of movement combined with restraint and micro-imperfection creates a presence that feels “uncannily real.”
Layers as I see them:
- Environmental: water, leaves, dust. Continuous, predictable, low-energy.
- Physiological: breath, finger drift, hair. Subtle, partially synchronized.
- Autonomic / Micro-events: blinks, gaze shifts, exhale. Rare, probabilistic, natural.
- Camera: slow push-in, subtle lateral drift. Deliberate, controlled.
- Psychological nuance: tiny posture or expression shifts emerging after micro-events. Unforced, emergent.
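The hierarchy above maps naturally onto amplitude and tempo: each layer down is subtler, slower, and deliberately out of phase with the others. A toy Python model of the stacking (every number here is invented purely for illustration, not a SeeDance parameter):

```python
import math

# (name, amplitude, frequency_hz, phase_offset) — illustrative values only.
LAYERS = [
    ("environmental (ripples)", 1.00, 0.50, 0.00),
    ("physiological (breath)",  0.35, 0.25, 0.70),
    ("camera (push-in)",        0.10, 0.05, 1.30),
]

def motion_at(t):
    """Sum each layer's contribution at time t (seconds)."""
    return sum(a * math.sin(2 * math.pi * f * t + p) for _, a, f, p in LAYERS)

# Staggered phases mean the layers never peak together — the
# unsynchronized rhythm that reads as alive rather than mechanical.
samples = [round(motion_at(t * 0.5), 3) for t in range(8)]
print(samples)
```

If you zero out the phase offsets, everything peaks in lockstep and the motion immediately feels robotic. The offsets are the restraint.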
Not flashy. Not hyperrealistic. Just alive.
Step 6: The Macro Workflow (So You Don’t Waste Your Life)
For fellow AI filmmakers:
1️⃣ Start with a Midjourney still prompt — behavior over emotion. Don’t tell the AI “she’s sad.” Describe what sadness looks like: drooping shoulders, hesitant gaze, fingers fidgeting.
2️⃣ Identify layers of motion: environment → physiology → micro-events → camera → psychological nuance.
3️⃣ Translate layers into probabilistic instructions: optional micro-events, staggered timing, subtle randomness.
4️⃣ Maintain restraint: nothing sudden, nothing over-animated. Let layers interact in unsynchronized rhythm.
5️⃣ Iterate: tweak timing, offsets, micro-blinks, hair shifts. Watch obsessively.
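Step 3 of the workflow above (translating layers into probabilistic instructions) is really just string assembly with guardrails. A hypothetical sketch, assuming you keep one clause pool per layer — `build_prompt`, the pools, and the phrasing are mine, not any SeeDance API:

```python
import random

# One clause pool per layer, ordered environment → physiology → micro-events → camera.
LAYER_CLAUSES = {
    "environment": ["gentle uneven ripples spread from her fingertips"],
    "physiology":  ["a barely perceptible inhale lifts her shoulders",
                    "hair shifts lightly against her neck"],
    "micro-event": ["after a long still moment, a single slow half-blink",
                    "gaze drifts fractionally off-frame"],
    "camera":      ["extremely slow push-in, almost imperceptible"],
}

RESTRAINT = "no sudden motion, no change in pose, locked composition"

def build_prompt(event_prob=0.5, seed=None):
    """Assemble a layered motion prompt; micro-events are optional by design."""
    rng = random.Random(seed)
    parts = list(LAYER_CLAUSES["environment"]) + list(LAYER_CLAUSES["physiology"])
    if rng.random() < event_prob:  # the probabilistic part: sometimes, no event
        parts.append(rng.choice(LAYER_CLAUSES["micro-event"]))
    parts += LAYER_CLAUSES["camera"]
    parts.append(RESTRAINT)
    return ", ".join(parts)

print(build_prompt(seed=3))
```

The restraint clause goes in every prompt, unconditionally. Everything else is negotiable; stillness is not.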
💡 Side Tip: Micro-imperfection beats spectacle every time. One imperfect half-blink is worth a hundred particle effects.
Step 7: The Reward
Ophelia finally had a fragile human rhythm. Water ripples led. Fingers drifted. Hair swayed independently. Half-blinks emerged. Gaze softened. Camera pushed in gently.
Subtle, restrained, emergent, beautifully imperfect.
Watching it, I realized: the AI isn’t “creating life.” It’s pattern-matching. Layered micro-motion tricks your brain into perceiving life.
Yes, it’s repeatable. Interiors, landscapes, multi-character scenes — same pipeline works anywhere you want life to peek out of stillness.
Step 8: The Takeaway (Snarky Version)
Stop asking AI to “animate life.” Start giving it tiny, optional imperfections and watch your static images start breathing. Use The Emotion Thesaurus as your guide for human micro-cues. Layer motions, stagger them, respect restraint, add a blink (or half-blink), trust the AI to make tiny choices.
Ophelia in the water is proof. She hesitates. She breathes. She almost reaches. Quiet, restrained, alive.
That fleeting moment when your brain whispers: “She’s alive.”
And that, friends, is exactly why we do this. Not for spectacle, not for applause, but for subtle, fragile presence.
TL;DR for the Terminally Bored
- Still images = mannequins.
- Micro-motion = life.
- Layer it. Stagger it. Randomize it. Half-blinks count.
- Your brain will thank you.
🎬 Moral: Life lives in micro-fractions. The rest is just water and pixels.
Steve Teare
video alchemist
*NOTE:
“locked composition” is a filmmaking/visual design term. In the context of AI-generated images and videos, it basically means:
The framing, camera angle, and overall arrangement of elements in the scene are fixed — nothing moves or changes their relative positions.
Once “locked,” the AI or camera doesn’t shift perspective, crop, or alter the composition, so the subject, background, and key visual relationships stay exactly as intended.
For example: in this Ophelia water scene, a locked composition would mean:
She stays centered (or wherever you placed her).
The horizon, water line, and background lighting remain consistent.
The AI can add subtle motion (ripples, hair, micro-gestures) but cannot “reframe” the shot or move her around in the scene.
Why it matters for SeeDance/video:
Locked composition preserves the “still image feel” while layering motion.
It prevents the AI from introducing unwanted movement that could break continuity or disrupt the carefully crafted tension in your shot (like a random character walking into the scene).
You get micro-motions (breath, water ripples, hair, half-blinks) without losing the cinematic frame you designed in Midjourney.
In short: it’s about keeping your camera and framing choices fixed so the “scene architecture” remains intact while life is added in tiny layers.
SeeDance videos cost $0.28 per 5-second output.
