Votes:
0
Product:
Character Creator 4
Version:
4.33
Status:
Active
Issue 10947
In the Expression Editor: Include a test vowel sound for each Viseme and activate its morph for ideal calibration
Please consider making a change here.
I find the mouth and lip combinations are off in several places. I can offer one correction so far: in the word "Works", for example, I mostly corrected the W.

As I've explained in the past on the feedback tracker, you seem to have an extra viseme sound added that makes the female characters double up the movement. I think it is meant to be one particular vowel, but visually it looks like a glitch.

Being able to hear the vowel for a given slider, immediately after making an adjustment, is extremely important. Having to abruptly exit the Expression Editor and search for the vowel sound in the Talk_F animation makes calibration really difficult.

Even a complete example word written out next to each slider would be better than what exists now.
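
To make the request concrete, here is a minimal sketch of what a per-slider test sound could look like, in Python. The viseme slider names, example words, and clip paths below are hypothetical placeholders, not actual Character Creator identifiers; the point is only "play the matching sound for the slider you just touched":

```python
# Minimal sketch, assuming a hypothetical viseme set (loosely based on
# common viseme naming; the real CC4 slider names may differ) and
# pre-recorded test clips stored on disk.
import winsound  # standard library on Windows (the poster's OS)

# Hypothetical mapping: viseme slider -> (example word, test audio clip).
VISEME_TEST_SOUNDS = {
    "AA":      ("father",      "clips/aa.wav"),
    "EE":      ("see",         "clips/ee.wav"),
    "IH":      ("sit",         "clips/ih.wav"),
    "OH":      ("go",          "clips/oh.wav"),
    "W_OO":    ("works",       "clips/w_oo.wav"),
    "S_Z":     ("expressions", "clips/s_z.wav"),
    "T_L_D_N": ("digital",     "clips/t.wav"),
}

def preview_viseme(name: str) -> None:
    """Play the test sound for a viseme slider so the user can calibrate
    the morph by ear without leaving the Expression Editor."""
    word, clip = VISEME_TEST_SOUNDS[name]
    print(f'{name}: say "{word}"')
    # SND_FILENAME plays a .wav file; SND_ASYNC returns immediately so
    # the slider stays responsive while the clip plays.
    winsound.PlaySound(clip, winsound.SND_FILENAME | winsound.SND_ASYNC)

# e.g. called whenever the user releases a viseme slider:
preview_viseme("W_OO")
```

Showing the example word next to the slider and playing the clip on release would cover both requests at once.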

Some default human morph sliders that still need to be corrected:
"T" - the Talk default shape is not accurate.
The word "Digital" - as in "Digital Character" in the Talk_F animation example.
The word "Expressions" is not shown accurately - the lips close on the S sound.
"Character" is slightly off.
"lively" is also off.
"foreword" is off - I have also fixed this one in a different file.

I understand that we would normally need to make slight adjustments to the existing morphs for each character, but these morphs are so far off that it's like watching an old dubbed kung fu movie.

I think another way you could refine the accuracy is to record your own voice speaking these words slowly, then create lip-sync animation data from the recording to test and refine how each morph should look with its related viseme.

Lastly, maybe in the future you could get the AI to match the shape of a person's facial expression with AccuFace motion capture: "capture" expressions onto a neutral head so that facial expressions and visemes can be created INSTANTLY without sculpting, save the result to a calibration tool in the AccuFace camera window, and, if the capture is decent, let users apply that captured morph to the facial expression file.

The tracking dots in the video window seem to recognize depth, because if a person sits back, the dots become smaller! Everyone would be able to quickly create a matching expression morph for any character, perhaps better than Headshot 2, without digital scans to capture facial dimensions. You could create a high-resolution grid of facial tracking points that is matched and attached to a head, then reuse your cloth-tracking technology (the one that conforms cloth such as a beard to morphs on the face) in reverse: treat the face area of the head as the cloth, make it conform to the captured AccuFace depth information, and bake the resulting shape into the head.
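
The shrinking-dot observation is consistent with simple pinhole projection: the apparent spacing between two tracked points scales inversely with distance from the camera. A hedged sketch of that relationship, assuming a known real-world spacing between two facial landmarks and a known focal length in pixels (both values here are made-up calibration numbers, not AccuFace API data):

```python
# Depth-from-scale under a pinhole camera model: the observed pixel
# spacing between two tracking dots shrinks as 1/Z, so
#     Z = f_px * real_spacing / pixel_spacing
# F_PX and REAL_SPACING_MM are assumed calibration values.

F_PX = 900.0            # focal length in pixels (hypothetical)
REAL_SPACING_MM = 60.0  # true distance between two landmarks (assumed)

def estimate_depth_mm(pixel_spacing: float) -> float:
    """Estimate camera-to-face distance from how far apart two
    tracking dots appear on screen."""
    return F_PX * REAL_SPACING_MM / pixel_spacing

# As the user leans back, the same landmarks appear closer together
# on screen, so the estimated depth grows:
print(estimate_depth_mm(90.0))  # 600.0 mm (sitting close)
print(estimate_depth_mm(60.0))  # 900.0 mm (leaning back)
```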
OS: Windows 11
Submitted by Ascensi
COMMENTS (0)