Marco Gillies is a senior lecturer at Goldsmiths, University of London. He has conducted research in applied and interactive machine learning in the fields of Virtual Reality, Computer Animation and Intelligent Virtual Agents. He has organised several research workshops, including two workshops at the UK-based Artificial Intelligence and Simulation of Behaviour (AISB) conference, and was director of the BT AHRC Research Network: Digital Reconstruction in Archaeology and Contemporary Performance. He has served on the organising committees of Intelligent Virtual Agents (IVA) and New Interfaces for Musical Expression (NIME) and on the programme committees of many conferences.
Rebecca Fiebrink is a lecturer at Goldsmiths, University of London. Her research focuses on using machine learning as a tool for designing interactive systems, especially systems for creative expression and embodied interaction. She is the author of the Wekinator software for interactive machine learning. She was the General Co-Chair of the 2014 conference on New Interfaces for Musical Expression.
Atau Tanaka is Professor of Media Computing at Goldsmiths, University of London; he was formerly a professor at Newcastle University and a researcher at Sony Computer Science Laboratory (CSL) Paris. He creates musical instruments that use sensing technology to capture the movements and gestures of musicians and produce computer-generated sound. He has worked at IRCAM, has been an artistic ambassador for Apple Computer, and was Artistic Co-Director of STEIM in Amsterdam. He is a member of the Embodied Audio Visual Interaction (EAVI) research unit at Goldsmiths.
Baptiste Caramiaux is a Marie Skłodowska-Curie Fellow at McGill University (Canada) and IRCAM (France). His research focuses on understanding and modelling the cognitive processes of motor learning in musical performance, and on the design of expressive motion-based interactive systems using machine learning. He has conducted academic research at Goldsmiths, University of London, and was responsible for machine learning and interaction design at the London-based music tech startup Mogees Ltd.
Jérémie Garcia is a postdoctoral researcher at Goldsmiths, University of London. His research focuses on user-centred methods to observe, design and evaluate new interactive systems that support the most creative aspects of music composition, such as free expression and the interactive exploration and refinement of musical ideas.
Saleema Amershi is a researcher in the Machine Teaching group at Microsoft Research (Machine Teaching is machine learning with a focus on the human user or “teacher”). Her research lies at the intersection of human-computer interaction and machine learning. In particular, her work involves designing and developing tools to support both end-user and practitioner interaction with machine learning systems. Amershi received her Ph.D. in computer science from the University of Washington’s Computer Science & Engineering Department in 2012.
Bongshin Lee is a Senior Researcher at Microsoft Research. Her research interests include Information Visualization, Visual Analytics, Human-Computer Interaction, and User Interfaces & Interaction Techniques. Her work focuses on the design, development, and evaluation of interactive technologies that help people create visualizations, interact with their data, and visually share data-driven stories, leveraging Natural User Interfaces (NUIs) including pen and touch. She received her Master of Science and Ph.D. in Computer Science from the University of Maryland, College Park, in 2002 and 2006, respectively.
Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris. His research concerns the modelling of movement-sound interactions, and the design and development of gesture-based interactive systems. He holds a master’s degree in physics and a Ph.D. in Biomedical Optics from EPFL in Lausanne. From 1999 to 2003 he was a researcher at the University of California, Irvine. In 2003 he joined IRCAM as a researcher on gesture analysis for music and the performing arts.
Nicolas d’Alessandro is a postdoctoral researcher at UMONS and head of performative media at the Numediart Institute for Creative Technologies. He holds a PhD in Applied Sciences from the UMONS Faculty of Engineering, on the gesturally controlled synthesis of expressive speech and singing. He is a co-founder of Hovertone, a startup for creative experience design.
Joëlle Tilmanne is a postdoctoral researcher at UMONS and head of the motion capture and analysis research group at the Numediart Institute. She holds a PhD in Applied Sciences from the UMONS Faculty of Engineering, in the field of motion capture data analysis and Hidden Markov Model based motion synthesis. She is a co-founder of Hovertone, a startup for creative experience design.
Alexis Heloir leads the Sign Language Synthesis and Interaction junior research group in Saarbrücken and is an assistant professor at the University of Valenciennes, France. His research interests are the interactive control and animation of three-dimensional assets and the automated generation of intelligible Sign Language utterances using avatars. He was previously a postdoctoral researcher at the German Research Center for Artificial Intelligence (DFKI).
Fabrizio Nunnari is a postdoctoral researcher at the German Research Center for Artificial Intelligence (DFKI). He works in the field of digital character animation for the production of Sign Language animation. He also researches the use of Natural User Interfaces for animation authoring.
Wendy Mackay is a Research Director at Inria, France, where she heads the ExSitu research lab in Human-Computer Interaction. She has served as Vice President for Research at the University of Paris-Sud and as a visiting professor at Stanford University and Aarhus University. Wendy is a member of the ACM CHI Academy, is a past chair of ACM/SIGCHI, chaired CHI’13 and recently received the ACM/SIGCHI Lifetime Achievement Service Award.
Todd Kulesza recently completed a Ph.D. in computer science at Oregon State University, working under the guidance of Margaret Burnett. His research interests are in human interactions with intelligent systems, with a focus on enabling end users to personalize such systems efficiently and effectively. He was co-chair of the 2013 IUI workshop on interactive machine learning.