- Computer vision tools to detect the precursors of tail biting in pigs: first steps towards a methodological framework
  Publication. Paula, Ana Margarida Gomes Farinha; Oliveira, Jorge; Kasper, Claudia; Nasser, Hassan
  Abstract: Tail biting is one of the most significant problems affecting production in terms of animal welfare, productivity, and health. In response, this project presents the application of existing computer vision and machine learning tools to detect the precursors of tail biting in pigs under field conditions. Computer vision systems can facilitate phenotyping to select more stress-resistant pigs and help detect problems at an early stage, both at the individual and group levels. Behavioural changes, such as activity in the pen, frequency of contact between pigs in a group, or handling of objects, can be indicative of the development of tail biting, among other problems. The proposed work curated an animal identification dataset that will support the creation of a reference database for detecting these changes. The data included approximately 1,800 h of video recordings of the daily lives of two groups of 12 fattening pigs, each weighing 100-140 kg, at Agroscope's experimental station in Switzerland. A bibliographical survey of existing computer vision systems for pigs allowed for the selection of the annotation and training tools and the development of an ethogram that was best suited to the project. From the initial set of images, we obtained a first data subset of 2,500 images automatically selected using Lightly for annotation. The annotation tool chosen was CVAT (Computer Vision Annotation Tool), which enabled 280 images to be annotated with bounding boxes and 520 with semi-automatic segmentation using the Segment Anything Model (SAM) for the classes of objects present in each image (ID of each pig, head, tail), location, and additional attributes, such as poses (lying down, standing, or sitting/kneeling). The 280 images annotated with bounding boxes were used to train YOLOv8, achieving an accuracy of 0.93 for the heads and 0.84 for the tails of the pigs. Although we did not achieve the desired accuracy, the results were quite satisfactory given the number of frames available for training. Although we could not train the object detection model with the images annotated with the Segment Anything Model within the allocated project time, we expect such an effort to yield better results, as the annotation is more precise and provides a greater amount of ground truth, and hence better accuracy.
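  As a minimal sketch of the detector-training step described in the abstract, the snippet below fine-tunes a YOLOv8 model with the ultralytics package on a bounding-box dataset exported from CVAT. The config file name pig_parts.yaml, its paths, the class list, and the training hyperparameters are illustrative assumptions; the abstract does not specify the actual settings used in the project.

  ```python
  # Sketch: fine-tune YOLOv8 on a CVAT bounding-box export (hypothetical paths/classes).
  from ultralytics import YOLO

  # Hypothetical dataset config in YOLO format, e.g. pig_parts.yaml:
  #   train: images/train      # the 280 annotated frames
  #   val: images/val
  #   names:
  #     0: pig
  #     1: head
  #     2: tail

  model = YOLO("yolov8n.pt")       # start from a pretrained YOLOv8 nano checkpoint
  model.train(
      data="pig_parts.yaml",       # hypothetical config describing the annotated frames
      epochs=100,
      imgsz=640,
  )
  metrics = model.val()            # evaluate on the validation split
  print(metrics.box.maps)          # per-class mAP values, e.g. for head and tail
  ```

  In practice the validation metrics (e.g. per-class mAP) would be compared across classes, which is how class-level figures such as those reported for heads and tails above are typically obtained.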