Aleix M. Martinez
The Ohio State University
205 Dreese Laboratory
Department of Electrical and Computer Engineering
The Ohio State University
2015 Neil Avenue
Columbus, OH 43210, USA
E-mail: aleix@ece.osu.edu
Phone: (614) 688-8225 or (614) 292-2571

New: 2020 CVPR Best Paper Award Nominee. Read our paper: PDF
Short Bio
Aleix M. Martinez is a Professor in the Department of Electrical and Computer Engineering at The Ohio State University (OSU), where he is the founder and director of the Computational Biology and Cognitive Science Lab. He is also affiliated with the Department of Biomedical Engineering and with the Center for Cognitive Science, where he is a member of the executive committee. Prior to joining OSU, he was affiliated with the Electrical and Computer Engineering Department at Purdue University and with the Sony Computer Science Lab. He has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Affective Computing, Computer Vision and Image Understanding, and Image and Vision Computing. He has been an area chair for many top conferences and was Program Chair for CVPR 2014 in his hometown, Columbus, OH. He is also a member of the Cognition and Perception study section at NIH and has served as a reviewer for numerous NSF, NIH, and other national and international funding agencies. Dr. Martinez is the recipient of numerous awards, including best paper awards at ECCV and CVPR, the Lumley Research Award, and a Google Faculty Research Award. Dr. Martinez's research has been covered by numerous national media outlets, including CNN, The Huffington Post, Time Magazine, CBS News and NPR, as well as international outlets, including The Guardian, Spiegel, El Pais and Le Monde. A selection of recent stories is available here.
Tutorials & Videos
2018 CCN Workshop: Multidisciplinary Approaches to Understanding Face Perception
Facial Color Transmits Emotion, PNAS 2018
Compound Facial Expressions of Emotion, PNAS 2014
A Neural Basis of Facial Action Recognition, JNeuro 2016
EmotioNet: A million images in the wild
The not face (Scientific American)

Video Lectures: Deciphering the Face, CVPR 2011. If you want to cite the work in this presentation, read and cite the paper "A Model of the Perception of Facial Expressions of Emotion by Humans: Research overview and perspectives" listed below.
Publications
2020
GANimation: One-Shot Anatomically Consistent Facial Animation
What does it mean to learn in deep networks? And, how does one detect adversarial attacks?
Context may reveal how you feel
Cross-Cultural and Cultural Specific Production and Perception of Facial Expressions of Emotion in the Wild
Discriminant Functional Learning of Color Features for the Recognition of Facial Action Units and their Intensities
Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements

2018
Facial Color Is an Efficient Mechanism to Visually Transmit Emotion
Learning Facial Action Units from Web Images with Scalable Weakly Supervised Clustering

2017
A simple, fast and highly-accurate algorithm to recover 3d shape from 2d landmarks on a single image
Recognition of Action Units in the Wild With Deep Nets and a New Global-Local Loss
Visual perception of facial expressions of emotion
Computational models of face perception

2016
A Neural Basis of Facial Action Recognition in Humans
EmotioNet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild
Fast and precise face alignment and 3D shape reconstruction from a single 2D image
The Not Face: A grammaticalization of facial expressions of emotion
Labeled Graph Kernel for Behavior Analysis

2015
Compound Facial Expressions of Emotion: From basic research to clinical applications
Multiple Ordinal Regression by Maximizing the Sum of Margins
Precise Fiducial Detection
Face Recognition, Overview

2014
Compound Facial Expressions of Emotion
Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language
Multiobjective Optimization for Model Selection in Kernel Methods in Regression
Minimizing Nearest Neighbor Classification Error for Nonparametric Dimensionality Reduction

2013
Salient and Non-salient Fiducial Detection Using a Probabilistic Graphical Model
Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

2012
A Model of the Perception of Facial Expressions of Emotion by Humans: Research overview and perspectives
Learning Deformable Shape Manifolds
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Automatic Selection of Eye Tracking Variables in Visual Categorization for Adults and Infants

2011
The Resolution of Facial Expressions of Emotion
Computing Smooth Time-Trajectories for Camera and Deformable Shape in Structure from Motion with Occlusion
Kernel Optimization in Discriminant Analysis
Kernel Non-Rigid Structure from Motion
Deciphering the Face
Non-Rigid Structure from Motion with Complementary Rank-3 Spaces

2010
Features versus Context: An approach for precise and detailed detection and delineation of faces and facial features
A Computational Shape-based Model of Anger and Sadness Justifies a Configural Representation of Faces
Rigid Structure from Motion from a Blind Source Separation Perspective
Bayes Optimal Kernel Discriminant Analysis

2009
Rotation Invariant Kernels and Their Application to Shape Analysis
Emotion Perception in Emotionless Face Images Suggests a Norm-based Representation
Low-Rank Matrix Fitting Based on Subspace Perturbation Analysis with Applications to Structure from Motion
Modelling and Recognition of the Linguistic Components in American Sign Language
Active Appearance Models with Rotation Invariant Kernels
Support Vector Machines in Face Recognition with Occlusions

2008
Can low level image differences account for the ability of human observers to discriminate facial identity?
Who Is LB1? Discriminant Analysis for the Classification of Specimens
Using the information embedded in the testing sample to break the limits caused by the small sample size in microarray-based classification
Bayes Optimality in Linear Discriminant Analysis
Pruning Noisy Bases in Discriminant Analysis
Face Recognition with Occlusions in the Training and Testing Sets
Precise Detailed Detection of Faces and Facial Features

2007
Spherical-Homoscedastic Distributions: The equivalency of spherical and Normal distributions in classification
Spherical-homoscedastic Shapes
Sparse Kernels for Bayes Optimal Discriminant Analysis
Recovering the Linguistic Components of the Manual Signs in American Sign Language

2006
Subclass Discriminant Analysis
A Weighted Probabilistic Approach to Face Recognition from Multiple Images and Video Sequences
Selecting Principal Components in a Two-Stage LDA Algorithm
Three-dimensional Reconstruction of Shape and Motion for the Analysis of American Sign Language
A Blind Source Separation Approach to Structure from Motion
Perturbation Estimation of the Subspaces for Structure from Motion with Noisy and Missing Data

2005
Where Are Linear Feature Extraction Methods Applicable?
Robust Motion Estimation under Varying Illumination
Evaluation of the Modeling of Local Areas and Errors of Localization in FRGC'05

2004
On Combining Graph-Partitioning with Non-Parametric Clustering for Image Segmentation
A Local Approach for Robust Optical Flow Estimation under Varying Illumination
Recognition of Expression Variant Faces Using Weighted Subspaces
Optimal Subclass Discovery for Discriminant Analysis
From Static to Video: Face Recognition Using a Probabilistic Approach

2003
Matching Expression Variant Faces
Template-based Recognition of Static Sitting Postures
Recognizing Expression Variant Faces from a Single Sample Image per Class

2002
Recognizing Imprecisely Localized, Partially Occluded and Expression Variant Faces from a Single Sample per Class
Purdue ASL Database for the Recognition of American Sign Language
Physical Correlates of Prosodic Structure in American Sign Language

2001
Clustering in Image Space for Place Recognition and Visual Annotations for Human-robot Interaction
PCA versus LDA

2000 and prior
Learning Mixture Models Using a Genetic Version of the EM Algorithm
A New Approach to Object-Related Image Retrieval
Recognition of Partially Occluded and/or Imprecisely Localized Faces Using a Probabilistic Approach
Semantic Access of Frontal Face Images: The expression-invariant problem
Face Image Retrieval Using HMMs
Semantic Access to a Database of Images: An approach to object-related image retrieval

Book Chapters
Face Recognition, Overview
Face Recognition, Component-based
A Biologically Inspired Model for the Simultaneous Recognition of Identity and Expression
Subset Modeling of Face Localization Error, Occlusion, and Expression
Physical Correlates of Prosodic Structure in American Sign Language