My animatronic werewolf head.

David Boccabella Brisbane, Australia Moderator

Hi Folks.

Been working on this for a while but here is a picture of the tongue in situ. It can extend from the muzzle, curl up, down, left, right, and then retract back into the muzzle.
Controlled using 3 servos.

It runs in and out of the muzzle on a slide mechanism.

Made out of sections of acrylic with wires to control things.
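For anyone curious about the control side, the mixing from a tongue "pose" to the three servo positions is roughly like this. This is a simplified Python sketch; the function name, the 0..1 / -1..1 scaling, and the angle ranges are just illustrative, not the actual firmware values.

```python
def tongue_servo_angles(extend, curl_lr, curl_ud):
    """Map a tongue pose to three hobby-servo angles in degrees.

    extend:  0.0 (retracted into the muzzle) .. 1.0 (fully extended)
    curl_lr: -1.0 (full left)  .. 1.0 (full right)
    curl_ud: -1.0 (full down)  .. 1.0 (full up)
    """
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    slide = clamp(extend, 0.0, 1.0) * 180.0           # slide-mechanism servo
    lr = 90.0 + clamp(curl_lr, -1.0, 1.0) * 90.0      # left/right curl-wire servo
    ud = 90.0 + clamp(curl_ud, -1.0, 1.0) * 90.0      # up/down curl-wire servo
    return slide, lr, ud
```

One servo drives the slide, and the other two tension the curl wires; centred (90 degree) positions leave the tongue straight.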
I'll put up some more pictures in time.


Comments

  • Nice. What are you using as a buck?
  • David Boccabella Brisbane, Australia Moderator

    err. sorry.. Buck??


  • The plastic underskull.

  • David Boccabella Brisbane, Australia Moderator

    Hi there Chris.


    I did the traditional WED clay sculpt and mould system. The underskull is made of fibreglass.
    This is my first attempt at a sculpt and mould, so it's still a big learning process for me.

    I have my own gallery, but recently I upgraded and that threw all of my carefully organised photos into disorder. But you're welcome to look at the various pictures.

    http://www.marcwolf.org/gallery/?aid=11

    It's the "Sculpting the Head" album, but the others will interest you too :)

    Dave

  • Couple of things. The vision system... AWESOME. Love the use of sonar to change the displays. As for the controllers, you could use linear servos and wires, feed it through an Arduino, and have pads that attach to your face. Something like the Facial Waldo (http://www.youtube.com/watch?v=bFW2azvVEdI)
  • David Boccabella Brisbane, Australia Moderator

    Hi Chris

    Actually I am well on the way, and several versions on :). I just haven't updated the website lately, and my camera ALWAYS has flat batteries when I need it.

    OK - I am working with ideas along these lines for controlling the suit. http://zachraddingdesigns.com/tongue-twister/wordpress/

    But I am using a little magnet temporarily glued to the end of my tongue and hall-effect switches. The prototype works very well, so I can quickly send codes to the controllers.
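The decoding side of that can be very simple. Here is a toy Python sketch of how a short sequence of tongue touches on the hall-effect switches could become a single command number; the switch count and the base-N encoding are my own illustrative choices, not the actual protocol.

```python
def decode_touches(touches, num_switches=4):
    """Turn a sequence of hall-effect switch indices into one command code.

    Each touch is the index of the switch the tongue magnet triggered.
    The sequence is read as a base-`num_switches` number, so every
    distinct touch sequence maps to a distinct code.
    """
    code = 0
    for t in touches:
        if not 0 <= t < num_switches:
            raise ValueError(f"no such switch: {t}")
        code = code * num_switches + t
    return code
```

So with four switches, touching switch 2, then 0, then 1 gives a unique code the controller can look up.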

    So the character can lip sync, I will be using a Raspberry Pi with its camera and SimpleCV. The idea is that the camera is focused on my lips and, with coloured dots, it can map the lip movements and translate them to micro servos on the character's lips.
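Once the coloured dots are tracked, the mapping to a jaw servo is just geometry. A minimal Python sketch of the idea, assuming one dot on each lip and a quick calibration for the closed and fully open gaps (all names and angle values here are hypothetical):

```python
import math

def lip_openness(top_dot, bottom_dot, rest_gap_px, max_gap_px):
    """Estimate mouth openness 0..1 from two tracked lip dots.

    top_dot / bottom_dot are (x, y) pixel centres of the coloured dots;
    rest_gap_px and max_gap_px come from a quick calibration pass
    (mouth closed, mouth wide open).
    """
    gap = math.dist(top_dot, bottom_dot)
    span = max_gap_px - rest_gap_px
    return max(0.0, min(1.0, (gap - rest_gap_px) / span))

def jaw_servo_angle(openness, closed_deg=20.0, open_deg=70.0):
    """Linear map from openness to the jaw micro-servo angle in degrees."""
    return closed_deg + openness * (open_deg - closed_deg)
```

The same pattern extends to lip-corner dots for shapes beyond simple open/close.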

    With the vision system, additional experimentation showed that I did not need the sonar/servo system, as my own vision adjusts for that. The main thing is to keep it stereoscopic.

    The controller system works by having a library of different subsystem movements like ears, muzzle, and eyes. I can then preprogram and synchronise these together into 'emotes', to which I assign a number. Then it is easy to have a command, count, and activate system with 3 of the mouth buttons. The additional buttons can be used for fast access like "Snarl" or "Play and be Cute".
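To show the shape of that command/count/activate idea, here is a toy Python sketch. The emote table entries and button numbering are invented for illustration; the real library is larger and drives actual subsystems.

```python
# Hypothetical emote table: each emote is a list of (subsystem, position)
# steps the controller plays back in sync.
EMOTES = {
    1: [("ears", "back"), ("muzzle", "wrinkle"), ("eyes", "narrow")],  # Snarl
    2: [("ears", "up"), ("muzzle", "relax"), ("eyes", "wide")],        # Cute
}

def run_buttons(presses):
    """Command / count / activate protocol on three mouth buttons.

    Button 0 arms command mode, each press of button 1 counts up the
    emote number, and button 2 fires the currently counted emote.
    """
    count, armed, fired = 0, False, []
    for b in presses:
        if b == 0:
            armed, count = True, 0
        elif b == 1 and armed:
            count += 1
        elif b == 2 and armed:
            fired.append(count)
            armed = False
    return [EMOTES.get(n, []) for n in fired]
```

So pressing 0, 1, 1, 2 counts to emote number 2 and plays it, and stray presses outside command mode do nothing.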

    Thanks for your interest, and feel free to ask me anything re the construction etc. All of my ideas are public domain, so if someone can do better then I'll learn from them :)
    Dave

  • Have you seen Andrew Ng's Artificial Intelligence course on Coursera?

    So I had this thought: you could always program the servos, task-specific, to 'react' to different movements in a completely natural way.

    The machine could 'learn' how to express different emotions, and you could train it simply by telling it yes or no. Best of all, the puppet gets better with every performance.
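A toy version of that yes/no training loop might look like the Python sketch below. Everything here is hypothetical: the class name, the scoring scheme, and the small exploration rate are just one way such a learner could be wired up.

```python
import random
from collections import defaultdict

class YesNoTrainer:
    """Toy yes/no trainer: the puppet tries an emote for a stimulus,
    the operator answers yes or no, and good pairings win out."""

    def __init__(self, emotes, explore=0.1, seed=0):
        self.emotes = list(emotes)
        self.scores = defaultdict(float)   # (stimulus, emote) -> score
        self.explore = explore             # chance of trying something new
        self.rng = random.Random(seed)

    def suggest(self, stimulus):
        if self.rng.random() < self.explore:
            return self.rng.choice(self.emotes)  # occasionally explore
        return max(self.emotes, key=lambda e: self.scores[(stimulus, e)])

    def feedback(self, stimulus, emote, yes):
        self.scores[(stimulus, emote)] += 1.0 if yes else -1.0
```

After a few performances' worth of yes/no answers, the highest-scoring emote for each situation becomes the default.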

    What are your thoughts on this kind of autonomous control system?
  • 1st time onto the forums since joining, and smiling wide at these ideas!