Summary of the paper

Title Annotation of Human Gesture using 3D Skeleton Controls
Authors Quan Nguyen and Michael Kipp
Abstract The manual transcription of human gesture behavior from video for linguistic analysis is a work-intensive process that results in a rather coarse description of the original motion. We present a novel approach for transcribing gestural movements: by overlaying an articulated 3D skeleton onto the video frame(s), the human coder can replicate the original motions on a pose-by-pose basis by manipulating the skeleton. Our tool is integrated into the ANVIL tool so that both symbolic interval data and 3D pose data can be entered in a single tool. Our method allows a relatively quick annotation of human poses, which has been validated in a user study. The resulting data are precise enough to create animations that match the original speaker's motion, which can be validated with a real-time viewer. The tool can be applied to a variety of research topics in the areas of conversational analysis, gesture studies and intelligent virtual agents.
Language English
Topics Corpus (creation, annotation, etc.), Discourse annotation, representation and processing, Tools, systems, applications
Full paper Annotation of Human Gesture using 3D Skeleton Controls
Bibtex @InProceedings{NGUYEN10.952,
  author = {Quan Nguyen and Michael Kipp},
  title = {Annotation of Human Gesture using 3D Skeleton Controls},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year = {2010},
  month = {may},
  date = {19-21},
  address = {Valletta, Malta},
  editor = {Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, Daniel Tapias},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {2-9517408-6-7},
  language = {english}
 }
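To make the kind of data the abstract describes more concrete, the following is a minimal, hypothetical sketch of how pose-by-pose skeleton annotations might be stored alongside symbolic interval annotations. The class names, joint names and fields are illustrative assumptions, not part of ANVIL's actual data model or API.

# Hypothetical sketch: pose-by-pose skeleton annotation combined with
# a symbolic interval annotation, loosely mirroring the kind of output
# described in the paper. All names here are illustrative only.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class JointRotation:
    """Euler rotation (degrees) of one skeleton joint, e.g. 'r_shoulder'."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0


@dataclass
class SkeletonPose:
    """One key pose, aligned with a single video frame."""
    video_time: float                      # seconds into the source video
    joints: Dict[str, JointRotation] = field(default_factory=dict)


@dataclass
class GestureAnnotation:
    """A symbolic interval annotation plus the 3D key poses the coder
    replicated for that interval."""
    start: float
    end: float
    label: str                             # e.g. a gesture phase or lexeme
    poses: List[SkeletonPose] = field(default_factory=list)


if __name__ == "__main__":
    stroke = GestureAnnotation(start=12.4, end=13.1, label="stroke")
    stroke.poses.append(
        SkeletonPose(
            video_time=12.6,
            joints={"r_shoulder": JointRotation(x=35.0, z=-10.0),
                    "r_elbow": JointRotation(x=80.0)},
        )
    )
    # Key poses like these could later be interpolated to animate a virtual
    # character and compared against the original video for validation.
    print(f"{stroke.label}: {len(stroke.poses)} key pose(s) "
          f"between {stroke.start}s and {stroke.end}s")

In such a representation, the symbolic interval (start, end, label) carries the usual coding-scheme information, while the attached key poses hold the replicated joint rotations that can drive a real-time 3D viewer or an animated agent.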