{"id":6199,"date":"2026-01-29T09:30:03","date_gmt":"2026-01-29T07:30:03","guid":{"rendered":"https:\/\/neuro-x.epfl.ch\/en\/news\/ai-enables-a-whos-who-of-brown-bears-in-alaska\/"},"modified":"2026-01-29T09:30:03","modified_gmt":"2026-01-29T07:30:03","slug":"ai-enables-a-whos-who-of-brown-bears-in-alaska","status":"publish","type":"news","link":"https:\/\/neuro-x.epfl.ch\/en\/news\/ai-enables-a-whos-who-of-brown-bears-in-alaska\/","title":{"rendered":"AI enables a Who's Who of brown bears in Alaska"},"content":{"rendered":"<p>Being able to distinguish individual animals \u2013 including their unique history, movement patterns and habits \u2013 can help scientists better understand how their species function, and therefore better manage habitats and study population dynamics. Today, most computer vision systems for tracking animals are effective on species with distinctive patterns and markings, such as zebras, leopards and giraffes. The task is much more complicated for unmarked species, where individual differences are harder to spot. Distinguishing a particular brown bear from its peers in a non-invasive way requires an incredible eye for detail and years of observing the same bears. What\u2019s more, these bears emerge from hibernation in the spring with shaggy fur, having lost quite a bit of weight; over the summer they substantially increase their body weight feasting on salmon while fully shedding their winter coat \u2013 changes dramatic enough to throw off experts and AI algorithms alike. 
A team of scientists from EPFL and Alaska Pacific University has developed an AI program that can recognize individual brown bears over time in photos, despite changes in the bears\u2019 appearance and the difficulties of image capture for these elusive and far-ranging animals.<\/p>\n<blockquote class=\"blockquote\">\n<p>Our biological intuition was that head features combined with pose would be more reliable than body shape alone, which changes dramatically with weight gain. The data proved us right \u2013 PoseSwin significantly outperformed models that used body images or ignored pose information.<\/p>\n<footer class=\"blockquote-footer\">Alexander Mathis<\/footer>\n<\/blockquote>\n<p><strong>Machine learning based on head and posture<\/strong><\/p>\n<p>The McNeil River State Game Sanctuary in Alaska is home to the world\u2019s largest seasonal population of brown bears. Every summer, nearly 150 of these animals move undisturbed across 500 km\u00b2 of pristine land. They gather on high-protein sedge meadows and at broad, low-gradient waterfalls to catch salmon, providing an opportunity for the few humans allowed in the sanctuary to observe them. \u201cThose visitors are strictly supervised; this is bear territory!\u201d smiles Alexander Mathis, a professor at EPFL\u2019s Brain Mind Institute and Neuro-X Institute. This remote area is also home to Beth Rosenberg, a researcher at the Fisheries, Aquatic Science, and Technology Laboratory at Alaska Pacific University, for four months of the year. 
She has built up an extraordinary database of brown-bear images: between 2017 and 2022, she took over 72,000 photos of 109 different brown bears under all sorts of conditions \u2013 in the rain, at varying times of day, and with bears captured in a wide range of behaviors, postures and angles \u2013 in order to fully depict the bears in their natural habitat.<\/p>\n<p>To develop their AI program, called PoseSwin, the scientists drew on their biological expertise to focus on features of the bears\u2019 heads that change surprisingly little over time: the shape of the muzzle (which carries minimal fatty tissue), the angle of the brow bone, and the placement of the ears. Crucially, they incorporated pose information \u2013 analyzing photos of bears from various angles, including frontal, profile and tilted views. \u201cThis pose-aware approach enabled us to use as many pictures as possible, even those that do not show the bear\u2019s face perfectly,\u201d says Mathis. \u201cOur biological intuition was that head features combined with pose would be more reliable than body shape alone, which changes dramatically with weight gain. The data proved us right \u2013 PoseSwin significantly outperformed models that used body images or ignored pose information.\u201d<\/p>\n<p><strong>Capturing a bear\u2019s true identity<\/strong><\/p>\n<p>The architecture behind the scientists\u2019 program is based on transformers \u2013 the same fundamental technology that powers large language models like ChatGPT \u2013 but adapted specifically for image analysis. \u201cWe used a technique called metric learning to train a transformer to understand the relationships between different parts of the images,\u201d says Mathis. That means the algorithm learned not only to recognize individual bears based on the characteristics mentioned earlier, but also to compare two images of bears. The team exposed the algorithm to groups of three photos: two of the same bear taken at different times and one of a different bear. 
The algorithm projected the images onto a multidimensional mathematical space, placing the photos of the same bear near each other and pushing those of different bears farther away. \u201cIt is a real game of attraction and repulsion, a digital tug-of-war where images shuffle around until they form coherent groups,\u201d says Mathis. \u201cEach bear ended up being represented as a unique constellation of points, which suggests the AI program was able to capture something fundamental \u2013 not just a bear\u2019s appearance but something closer to its identity.\u201d PoseSwin can also flag bears that it has never seen before, a major advantage for studies in unenclosed areas where new individuals can appear regularly.<\/p>\n<p>The next step was to apply the program in a new environment. For that, the scientists turned to citizen science: they collected photos taken by visitors to Katmai National Park and Preserve, located just over 60 km from McNeil River, and fed them into the PoseSwin algorithm. The program reliably recognized several of the bears, revealing where the animals move seasonally in search of food. \u201cThis is a concrete example of the PoseSwin model\u2019s potential,\u201d says Beth Rosenberg. \u201cThe technology could eventually be used to analyze the thousands of pictures that visitors take every year and help build a map of how brown bears use this expansive area. This helps us understand what they need, how their population dynamics work, and many other important ecological questions.\u201d<\/p>\n<p><strong>\u201cA bear is a complicated version of a mouse\u201d<\/strong><\/p>\n<p>Thanks to photos of the bears and some virtual measurements of their morphology, scientists are now able to track Sloth, Rocky, That Bear and around 100 of their peers without interfering with them physically. 
\u201cThe better we can distinguish individual bears, the better we can understand them and their behaviors at the species level,\u201d says Rosenberg. \u201cBears are at the top of the food chain and ensure the proper functioning of their ecosystem. They are critical to maintaining healthy systems.\u201d<\/p>\n<p>PoseSwin will make this kind of field work easier, both for the scientists involved in the study and for researchers working in other contexts. It also achieved excellent accuracy on benchmark datasets of macaques, suggesting broad applicability beyond bears. \u201cBears are perhaps the hardest species to recognize individually,\u201d says Mathis. \u201cWe focused on them first with the idea that our program could be adapted to other species, from mice to chimps, which seem to exhibit much less visual variation.\u201d The team has provided open-source access to their algorithm and the data used to develop it so that other researchers can use and adapt it as needed.<\/p>\n<p>The scientists plan to continue developing PoseSwin for Alaskan brown bears. Because the program is scalable, they can already add data collected in other seasons and from other locations. 
Their goal is to automate much of the system so that it can help monitor wild animal populations over the long term.<\/p>\n","protected":false},"featured_media":6200,"template":"","project":[],"faculty":[],"public":[],"themes":[],"news-category":[23],"class_list":["post-6199","news","type-news","status-publish","has-post-thumbnail","hentry","news-category-research"],"_links":{"self":[{"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/news\/6199","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/news"}],"about":[{"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/types\/news"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/media\/6200"}],"wp:attachment":[{"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/media?parent=6199"}],"wp:term":[{"taxonomy":"project","embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/project?post=6199"},{"taxonomy":"faculty","embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/faculty?post=6199"},{"taxonomy":"public","embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/public?post=6199"},{"taxonomy":"themes","embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/themes?post=6199"},{"taxonomy":"news-category","embeddable":true,"href":"https:\/\/neuro-x.epfl.ch\/en\/wp-json\/wp\/v2\/news-category?post=6199"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}