Equine Pain Behavior Classification via Self-Supervised Disentangled Pose Representation
Authors: M. Rashid, S. Broomé, K. Ask, Elin Hernlund, P. Andersen, H. Kjellström, Yong Jae Lee
Journal: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Summary
Identifying pain in horses remains challenging because animals often mask clinical signs, and detailed behavioral annotation is labour-intensive and costly. Researchers at KTH Royal Institute of Technology and collaborators developed a computer vision system that classifies pain from multi-view surveillance footage of unobserved horses using only weak (temporally sparse) video-level pain labels, employing self-supervised learning to isolate postural features from appearance and background variables.

The model achieved 60% accuracy, exceeding expert human performance, by first training a generative model to disentangle horse pose from appearance, then applying a novel multiple-instance learning approach to leverage the limited pain labels effectively. Crucially, the system's learned pose features proved viewpoint-invariant and independent of coat colour or background, and qualitative analysis confirmed that the algorithm's pain classifications aligned with established veterinary pain assessment scales.

For practitioners, this work demonstrates a practical pathway toward scalable, objective pain detection tools that could support earlier intervention in lameness, orthopaedic conditions, and acute pain states, without requiring horses to be specifically observed or extensively labeled during data collection.
Practical Takeaways
- Automated video-based pain detection systems could help identify hidden pain in unobserved horses, particularly valuable for detecting subtle orthopaedic pain when horses are alone
- This technology could reduce reliance on manual behavioral assessment and time-intensive video annotation, making pain monitoring more scalable for stud farms and larger operations
- The system's alignment with existing equine pain scales suggests potential integration into routine welfare monitoring protocols, though field validation in working environments is needed
Key Findings
- Machine learning model achieved 60% accuracy in equine pain classification from video, exceeding human expert performance
- Self-supervised disentangled pose representation successfully separated horse body language from appearance and background in multi-view surveillance video
- Model-identified pain symptoms showed correspondence with established equine pain scales used in veterinary practice
- System successfully used temporally sparse video-level pain labels rather than requiring detailed frame-by-frame behavior annotation
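The weak-label setup in the last finding is the classic multiple-instance learning pattern: only the whole video carries a pain label, while most individual frames are uninformative. A minimal sketch of one common MIL pooling scheme (top-k mean over per-frame scores); the function name, scores, and pooling choice here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def video_pain_score(frame_scores, top_k=3):
    """Aggregate per-frame pain scores into a single video-level score
    by averaging the top-k highest-scoring frames. This lets a few
    painful moments drive the video label even when most frames are
    unremarkable (top-k mean is one common MIL pooling choice)."""
    top = np.sort(np.asarray(frame_scores, dtype=float))[-top_k:]
    return float(top.mean())

# A video is labeled "pain" if any short segment looks painful,
# even though most frames may show a resting horse (scores are made up).
calm_video = [0.10, 0.20, 0.15, 0.10, 0.20, 0.10]
pain_video = [0.10, 0.20, 0.90, 0.85, 0.80, 0.10]

print(video_pain_score(calm_video))  # stays low: no frame stands out
print(video_pain_score(pain_video))  # high: a few painful frames dominate
```

With mean pooling over all frames instead, the sparse painful frames would be diluted, which is why max- or top-k-style pooling is typical when the informative evidence is temporally sparse.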