From Neck Movements to AI Voice Restoration
Speaking is not just about vibrating vocal cords. Every word we say leaves a hidden trail on the surface of the neck: tiny shifts in muscles and skin that form a kind of “movement map.” Researchers at POSTECH have turned this subtle, usually invisible map into the basis of a new silent speech wearable. Their system combines a multiaxial strain-mapping sensor with deep learning to perform AI voice restoration for people who can no longer speak. Instead of listening for sound, the device reads microscopic neck movements and feeds them into a model that reconstructs what the user intended to say. The result is synthetic speech technology that can generate an audible, synthesized voice in real time, even when no sound is produced at all. For people living with voice loss, this throat sensor device could offer a pathway back to fluent conversation.

How the Light-Based Throat Sensor Device Works
The heart of this silent speech wearable is a soft, choker-like band that sits comfortably around the throat. Inside it, a silicone layer is patterned with tiny black markers, illuminated by an LED and watched by a miniature camera and microscope lens. As you form words silently, your throat muscles expand, contract, and twist. Those movements stretch the skin by as little as 0.02 percent—far too small to see, but enough for the multiaxial strain sensor to track. Unlike older sensors that captured motion in only one direction, this system maps both the size and direction of strain, creating a rich picture of how the neck moves during speech. An AI pipeline, blending a convolutional neural network with a transformer model, decodes these patterns and reconstructs speech in your own synthesized voice, with latency low enough for natural, almost real-time conversation.
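
To make the strain-mapping step concrete, here is a minimal sketch of how a multiaxial strain tensor could be recovered from the tracked marker positions. The 3x3 marker grid, the least-squares fit, and all numbers below are illustrative assumptions; the team's actual image-processing pipeline may differ.

```python
import numpy as np

def strain_tensor(ref_pts, cur_pts):
    """Fit a 2-D small-strain tensor to tracked marker displacements.

    ref_pts, cur_pts: (N, 2) marker positions in the relaxed reference
    frame and the current frame. A least-squares fit of the displacement
    gradient captures both the magnitude and the direction of strain,
    i.e. the "multiaxial" picture described above.
    """
    X = ref_pts - ref_pts.mean(axis=0)   # centered reference coordinates
    u = cur_pts - ref_pts                # per-marker displacement
    u = u - u.mean(axis=0)               # drop rigid translation
    # Solve u ~ X @ G.T for the 2x2 displacement gradient G.
    GT, *_ = np.linalg.lstsq(X, u, rcond=None)
    G = GT.T
    return 0.5 * (G + G.T)               # symmetric small-strain tensor

def principal_strain(eps):
    """Principal strain magnitudes and the dominant strain direction."""
    vals, vecs = np.linalg.eigh(eps)
    i = int(np.argmax(np.abs(vals)))
    angle = np.degrees(np.arctan2(vecs[1, i], vecs[0, i]))
    return vals, angle

# Toy example: a 3x3 marker grid stretched by 0.02 percent along x
# (the scale quoted above) plus a faint shear.
xx, yy = np.meshgrid(np.arange(3.0), np.arange(3.0))
ref = np.stack([xx.ravel(), yy.ravel()], axis=1)
cur = ref.copy()
cur[:, 0] = cur[:, 0] * 1.0002 + 0.0001 * ref[:, 1]

eps = strain_tensor(ref, cur)
vals, angle = principal_strain(eps)
print(eps)
print(f"principal strains {vals}, dominant direction {angle:.1f} deg")
```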
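
The decoding stage is described only as a convolutional neural network combined with a transformer, so the sketch below fills in plausible details: a small PyTorch model whose layer sizes, 16x16 map resolution, 50 frames-per-second rate, and mel-spectrogram output are all assumptions, turning a sequence of strain maps into speech features that a separate vocoder would render as audio.

```python
import torch
import torch.nn as nn

class SilentSpeechDecoder(nn.Module):
    """Illustrative CNN + Transformer decoder for strain-map sequences.

    Input:  (batch, time, H, W) strain maps from the sensor.
    Output: (batch, time, n_mels) mel-spectrogram frames for a separate
    vocoder. All architecture choices here are assumptions.
    """
    def __init__(self, d_model=128, n_mels=80):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame spatial features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(64 * 16, d_model),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_mels)

    def forward(self, x):
        b, t, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        feats = self.temporal(feats)              # model temporal context
        return self.head(feats)

# One second of strain maps at an assumed 50 frames/s on a 16x16 grid.
model = SilentSpeechDecoder()
maps = torch.randn(1, 50, 16, 16)
mels = model(maps)
print(mels.shape)  # torch.Size([1, 50, 80])
```
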
What It Feels Like to Use: Comfort, Latency, and Naturalness
Although still a research prototype, the design focuses on everyday comfort and reliability. The throat sensor device is worn like a flexible neck brace, with position and tightness adjusted to each person. When you put it on, the system first records a baseline “stress map” of how the band sits on your skin, so the AI can automatically correct for small shifts the next time you wear it. You can then mouth words silently—no whisper, no vibration of the vocal cords—and hear them spoken back as synthetic speech. Because the system reads motion instead of sound waves, it remains accurate even in loud environments where microphones fail. Early tests show high stability across thousands of movement cycles, suggesting it can handle real-world use. The goal is for the synthetic voice to sound like you, restoring not just speech but a sense of identity and emotional nuance.
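
As a rough illustration of that donning-time calibration, the sketch below records a resting baseline and normalizes later frames against it. The mean-and-scale scheme, the frame sizes, and the class name are assumptions; the system's actual correction method is not described in detail.

```python
import numpy as np

class WearSessionCalibrator:
    """Toy baseline correction for a re-worn sensor band.

    On each wear, a few seconds of relaxed-throat frames are recorded.
    Later frames are normalized against that baseline so the decoder sees
    comparable inputs across sessions. This mean/scale scheme is an
    assumption, not the published method.
    """
    def __init__(self):
        self.mu = None
        self.sigma = None

    def fit(self, baseline_frames):
        """baseline_frames: (T, H, W) strain maps with the throat at rest."""
        self.mu = baseline_frames.mean(axis=0)
        self.sigma = baseline_frames.std(axis=0) + 1e-6

    def transform(self, frame):
        """Normalize a live (H, W) frame against this session's baseline."""
        return (frame - self.mu) / self.sigma

# Usage: calibrate at donning, then stream normalized frames to the decoder.
rng = np.random.default_rng(0)
cal = WearSessionCalibrator()
cal.fit(rng.normal(0.1, 0.02, size=(100, 16, 16)))   # resting baseline
live = cal.transform(rng.normal(0.1, 0.02, size=(16, 16)))
```
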
Who Could Benefit: Beyond Traditional Voice Prosthetics
This new form of voice loss assistive tech is especially promising for people who have lost their voice due to surgery, illness, or neurological conditions. For patients who have undergone laryngectomy (removal of the voice box), traditional options like electrolarynx devices often produce a metallic, robotic voice that can be fatiguing and stigmatizing. By contrast, the POSTECH system can be trained on recordings of a user’s pre-surgery voice, then use synthetic speech technology to recreate that familiar sound. Because it does not rely on airflow or vocal cord vibration, it could also support users with severe respiratory or neuromuscular challenges. Beyond medical needs, the silent speech wearable could enable private communication in libraries, theaters, or high-noise industrial settings, and even secure, soundless exchanges in sensitive operations, all without the bulky electrodes or lab-bound setups seen in many EMG- or EEG-based systems.
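
On the personalization point, text-to-speech systems commonly condition a decoder on a speaker embedding extracted from reference recordings; the toy sketch below shows that pattern. Whether the POSTECH system works this way is not stated, and every name and dimension here is an assumption drawn from the broader TTS literature.

```python
import torch
import torch.nn as nn

class SpeakerConditionedHead(nn.Module):
    """Toy illustration of personalizing the synthetic voice.

    A fixed speaker embedding, e.g. averaged from pre-surgery recordings
    by a pretrained speaker encoder (not shown), is concatenated onto each
    decoded frame so a vocoder renders that speaker's timbre. This
    conditioning pattern is an assumption, not the published design.
    """
    def __init__(self, d_model=128, d_spk=64, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(d_model + d_spk, n_mels)

    def forward(self, frames, spk_emb):
        # frames: (batch, time, d_model); spk_emb: (batch, d_spk)
        spk = spk_emb.unsqueeze(1).expand(-1, frames.size(1), -1)
        return self.proj(torch.cat([frames, spk], dim=-1))

head = SpeakerConditionedHead()
frames = torch.randn(1, 50, 128)   # decoder features for 1 s of speech
spk_emb = torch.randn(1, 64)       # embedding of the user's own voice
mels = head(frames, spk_emb)       # (1, 50, 80) mel frames in that voice
```
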
The Future: Personalized Voices and Seamless, Silent Communication
Looking ahead, throat-mounted silent speech systems could merge with several fast-moving technologies. Personalized voice cloning could make the synthetic output almost indistinguishable from a user’s natural voice, even if only a small archive of recordings exists. As processors become smaller and more efficient, more of the AI voice restoration pipeline could run directly on the wearable or a paired device, improving privacy by keeping raw sensor data off the cloud. In the longer term, these throat sensor devices might complement brain-computer interfaces, giving users multiple pathways—from brain signals to neck movements—to express speech without sound. Together, these advances point toward a future where losing one’s physical voice does not mean losing the ability to speak with nuance, identity, and spontaneity, whether in a noisy factory, a quiet library, or at home with family and friends.
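
As one hedged illustration of the on-device idea, a trained PyTorch model can be exported to TorchScript and executed locally, so raw strain frames never have to leave the wearable. The stand-in model, file name, and runtime below are placeholders, not the project's actual deployment stack.

```python
import torch
import torch.nn as nn

# Stand-in for the decoder sketched earlier; any nn.Module works the same way.
model = nn.Sequential(nn.Flatten(2), nn.Linear(16 * 16, 80)).eval()
example = torch.randn(1, 50, 16, 16)

# torch.jit.trace produces a self-contained TorchScript artifact that a
# mobile/embedded PyTorch runtime can execute, so raw strain maps stay on
# the device; only the synthesized output would ever be shared.
scripted = torch.jit.trace(model, example)
scripted.save("silent_speech_decoder.pt")

restored = torch.jit.load("silent_speech_decoder.pt")
with torch.no_grad():
    out = restored(example)   # runs fully on-device
print(out.shape)              # torch.Size([1, 50, 80])
```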
