Browsing by Issue Date, starting with "2024-05-14"
- Enhancing Interpretability of Neural Networks in Food Recommendation Systems
  Publication. Rebelo, João Edgar Lucas; Cunha, Carlos Augusto da Silva; Duarte, Rui Pedro Monteiro Amaro
  ABSTRACT: Over the years, the risk of developing diseases related to poor nutrition has been increasing. Many of these diseases are caused by obesity. Obesity is a silent disease related to excess weight that, due to its rapid growth, has become a public health problem. Worldwide, obesity has nearly tripled since 1975. Obesity can lead to health problems such as type 2 diabetes, cardiovascular disease, and even cancer. The main factors that lead to obesity are a sedentary lifestyle and a poor diet. Although obesity has no cure, it can be prevented or treated through a healthier lifestyle and diet. Amid so much information about diets and healthier recipes, it can be difficult to find a diet that meets each person's needs. Recommendation systems can filter, from a large dataset, the information that best suits each user's profile. Due to the constant increase in available information and computational power, recommendation systems have evolved from traditional approaches to deep-learning ones. Recommendation systems are a hot topic in deep learning. Research on food recommendation systems has seen little development compared to recommendation systems in other areas, such as leisure and entertainment. A powerful tool for food recommendation systems is neural networks. Neural networks play an important role in our society because of their capacity to learn from complex, high-dimensional data. One downside of neural networks is the difficulty, if not impossibility, of understanding how their predictions are made. The behind-the-scenes workings often remain opaque, leading neural networks to be characterized as "black boxes". With this research, we aim to contribute to understanding how neural networks operate under the hood and to make them more transparent and therefore more trustworthy. With this goal in mind, we propose the use of a secondary model to predict the errors of a primary neural network. By analyzing the error predictions of the secondary model, we aim to gain insights into the primary network's decision-making process. With this approach, we hope not only to help understand the functioning of neural networks but also to suggest ways of improving their performance. Improving our understanding of neural networks can make them simpler and more accessible. With the work developed in this research, we aim to stride towards making neural networks more transparent and explainable, thereby enhancing trust in these powerful models.
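
The error-prediction idea described in the abstract can be illustrated with a minimal sketch. The Python example below is an assumption-laden illustration, not the thesis implementation: it uses scikit-learn models and synthetic tabular data in place of a real food-recommendation dataset, trains a neural network as the primary ("black box") model, fits an interpretable secondary model on the primary model's held-out errors, and inspects the secondary model's feature importances to see where the primary model tends to fail.

```python
# Minimal sketch of the secondary error-prediction model (illustrative only).
# Assumptions not taken from the thesis: a tabular regression task, synthetic
# data standing in for user/recipe features, and scikit-learn model choices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                     # hypothetical feature matrix
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)

X_train, X_held, y_train, y_held = train_test_split(X, y, random_state=0)

# Primary model: a neural network trained on the recommendation/rating task.
primary = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
primary.fit(X_train, y_train)

# Secondary model: trained to predict the primary model's errors on held-out data.
errors = np.abs(y_held - primary.predict(X_held))
error_model = DecisionTreeRegressor(max_depth=3, random_state=0)
error_model.fit(X_held, errors)

# Inspecting the interpretable error model hints at which features are
# associated with large primary-model errors.
for i, importance in enumerate(error_model.feature_importances_):
    print(f"feature_{i}: {importance:.3f}")
```

Because the secondary model is a shallow decision tree, its splits and feature importances are directly readable, which is one plausible way to turn error predictions into insight about where the opaque primary network struggles.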