PhD thesis offer, IBISC SIAM team: « Remote Beehive Health Analysis using Embedded System and Relevant Audio Features »


Title of the project : « Remote Beehive Health Analysis using Embedded System and Relevant Audio Features »

Beginning : Oct. 2021 – Ending : Sep. 2024

Thesis director(s) : Hichem MAAREF (PR IUT Evry, IBISC SIAM team, 40%)

Co-advisor : Dominique Fourer (MCF Univ. Evry, IBISC SIAM team, 60%)

Team / Laboratory : SIAM – IBISC – Evry University / Paris-Saclay

Industrial partner : Starling Partners https://starling.partners/

School : Collège doctoral 580 – STIC

Contact : dominiqueDOTfourerATuniv-evryDOTfr

Keywords : Audio analysis, smart beehive monitoring, deep learning, precision beekeeping, feature selection, embedded system.

Abstract: Bees are important pollinating insects that contribute to preserving natural ecosystems. However, they are also sensitive to various external factors such as weather, diseases, predators, and pollution, which can have severe impacts on their health. This explains the recent AI-based research aiming to develop smart beehive monitoring systems that assist beekeepers [17]. The audio analysis approach to precision beekeeping has gained interest because an audio signal captured with a simple microphone can convey accurate information about the health state of a beehive (e.g. the number of bees, stress factors, the absence of the queen, etc.). Estimating relevant information from audio signals requires robust acoustic features and adequate preprocessing (e.g. signal separation and denoising), which can lead to promising results when combined with a deep learning approach. Moreover, the use of an embedded system introduces constraints on the computational cost and the amount of transmitted data, both of which should be kept as low as possible.
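As an illustration of the kind of acoustic features mentioned above, the minimal sketch below extracts a few standard descriptors (MFCCs, spectral centroid, RMS energy) from a recording. The librosa toolbox and the file name `hive_recording.wav` are assumptions used for illustration only; they are not part of the project specification.

```python
# Minimal sketch: extracting common acoustic features from a (hypothetical)
# beehive recording, as a starting point for health-state analysis.
# Assumes the librosa package is installed.
import numpy as np
import librosa

# Hypothetical field recording; any mono WAV file would do.
y, sr = librosa.load("hive_recording.wav", sr=22050, mono=True)

# Mel-frequency cepstral coefficients: compact spectral-envelope descriptors
# widely used in bioacoustic classification.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

# Simple spectral descriptors that summarize the buzzing spectrum over time.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
rms = librosa.feature.rms(y=y)

# Frame-level features stacked into one matrix (n_features x n_frames),
# ready to feed a classifier or a feature-selection stage.
features = np.vstack([mfcc, centroid, rms])
print(features.shape)
```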

The goal of this PhD thesis is to design a complete deep-learning-based method to collect data and efficiently predict the state of a beehive using an embedded measurement system in a real-world field recording scenario.

Scientific problem

Information retrieval from an audio signal is an interdisciplinary problem that has been intensively investigated for decades, especially for music by the MIR community [5]. Addressing it efficiently first requires computing signal representations that separate the relevant components from the irrelevant part, often associated with undesired noise. Audio recordings are multicomponent non-stationary signals: they can contain harmonic components modeled by a set of parameterized sinusoids, transients that can be modeled by impulse components, and a stochastic part related to possibly colored noise. Hence, high-level analysis of the audio content requires computing signal features and parameters such as the predominant fundamental frequencies (F0), which can be related to physics-based harmonic models and used to estimate the characteristics of the source that produced the analyzed signal.

On the other hand, audio signal analysis may require efficient preprocessing, such as source separation and denoising, to reduce irrelevant noise and to adapt the frequency bandwidth and sampling rate of the captured data. This step is easier when the properties of the signal of interest are known, such as its spectral envelope or the spectral distribution of its components.

Finally, this thesis aims at developing a complete solution for precision beekeeping that operates on a smart embedded measurement system able to predict the health state of the monitored beehive. The implementation step will also require dealing with hardware limitations: the computational complexity and the dimension of the transmitted data should both be optimized.
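To make the harmonic/transient/noise decomposition and F0 estimation concrete, here is a minimal sketch based on generic, off-the-shelf tools (librosa's harmonic-percussive separation and pYIN pitch tracker). These are only stand-ins: the thesis is expected to rely on more advanced representations such as the synchrosqueezed transforms of [6, 7, 8], and the frequency band used below is an illustrative assumption.

```python
# Illustrative decomposition of a recording into harmonic and percussive
# (transient) parts, plus a rough predominant-F0 track, using generic
# librosa tools only as a baseline.
import librosa

y, sr = librosa.load("hive_recording.wav", sr=22050, mono=True)  # hypothetical file

# Median-filtering-based harmonic/percussive source separation:
# most of the residual noise and transients end up in the "percussive" part.
y_harmonic, y_percussive = librosa.effects.hpss(y)

# Probabilistic YIN estimate of the predominant fundamental frequency,
# restricted to a band where bee wing-beat harmonics could be expected
# (the 50-1000 Hz range used here is only an illustrative assumption).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y_harmonic, fmin=50.0, fmax=1000.0, sr=sr
)
print(f0[voiced_flag][:10])  # first voiced F0 estimates, in Hz
```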

Goals

The objectives of this thesis can be summarized as follows.

  • Identification of the most efficient and robust audio features for supervised and unsupervised audio classification scenarios (a toy feature-ranking sketch is given after this list).
  • Development and comparative assessment of new deep learning methods for identifying a beehive health state from recorded audio signals.
  • Optimal pre-processing and denoising to enhance the audio signal of interest (e.g. audio segmentation and event classification).
  • Design of a complete solution based on an embedded system to capture signals and predict the state of a beehive.
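As a toy illustration of the first objective, the sketch below ranks a set of (randomly generated) audio features by their mutual information with beehive-state labels, using scikit-learn. The data and the binary label are fabricated placeholders; the actual feature-selection methodology is part of the thesis work.

```python
# Toy sketch of the feature-selection idea: ranking audio features by their
# mutual information with beehive-state labels. Data here is random and
# purely illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22))       # 200 frames x 22 audio features (fake)
y = rng.integers(0, 2, size=200)     # binary state, e.g. queen present / absent

# Higher score = feature carries more information about the label.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]
print("most informative feature indices:", ranking[:5])
```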

Proposed methodology

This thesis focuses on information retrieval from acoustic signals, with an application to beehive health prediction, a topic that has recently gained interest [14, 3, 19, 1, 4, 9, 15, 16, 18]. The PhD candidate will first investigate the state-of-the-art methods and attempt to contribute to the Detection and Classification of Acoustic Scenes and Events (DCASE) Challenge [12]. To this end, an experimental protocol will be developed to comparatively assess the newly proposed methods against baselines, on existing datasets and on new datasets built from the data provided by the company Starling Partners through their project “Des Abeilles & Nous”. The methods to be developed will pay particular attention to computing efficient data representations and to selecting the most relevant audio features, so that supervised, semi-supervised, and unsupervised machine learning frameworks can be used. This work could benefit from recent advances in the field of time-frequency analysis based on synchrosqueezed transforms [8, 7, 6], which open new perspectives for real-world applications.
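The following sketch hints at how a sharpened time-frequency representation could be compared with a plain spectrogram before feature extraction. It uses librosa's reassigned spectrogram as a readily available stand-in for the synchrosqueezed transforms of [6, 7, 8]; the file name and analysis parameters are illustrative assumptions.

```python
# Sketch: comparing a plain STFT magnitude spectrogram with a reassigned
# (sharpened) time-frequency representation. Reassignment is used here only
# as a stand-in for synchrosqueezed transforms.
import numpy as np
import librosa

y, sr = librosa.load("hive_recording.wav", sr=22050, mono=True)  # hypothetical file

n_fft, hop = 2048, 512
S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))

# Reassigned spectrogram: each energy value is relocated to its estimated
# instantaneous frequency and group delay, sharpening harmonic ridges.
freqs, times, mags = librosa.reassigned_spectrogram(
    y, sr=sr, n_fft=n_fft, hop_length=hop
)

# Either representation can then be converted to log-scale features
# (e.g. log-mel patches) before being fed to the classifiers discussed below.
log_S = librosa.amplitude_to_db(S, ref=np.max)
print(log_S.shape, mags.shape)
```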

This research work will focus on deep neural networks [10], which currently provide the best state-of-the-art results when combined with a suitable signal representation [14]. Our work will attempt to propose new neural architectures, with an effort to optimally reduce the dimension (from an information-theoretic point of view) of the required input features while reaching the best prediction accuracy. Moreover, we also expect to adopt new strategies to deal with the lack of training data, such as deep transfer learning [20], adversarial training [13], and promising data augmentation techniques such as curriculum learning [2]. An effort is also expected in analyzing and understanding the meaning of the learned features and representations, in order to detect non-labeled beehive states. Hence, the final step of this thesis will be to design a measurement system based on a low-cost embedded platform, such as an Arduino Uno or a Raspberry Pi, allowing the real-time monitoring of one or several connected beehives. To this end, additional sensors (e.g. temperature, hygrometry, or atmospheric pressure) could also be investigated to improve the robustness of the future smart beehive system by merging all the available information in a multimodal approach [11].
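As a rough indication of the kind of compact model that could fit on a low-cost embedded platform, here is a minimal PyTorch sketch of a small CNN operating on log-mel spectrogram patches. The architecture, patch size, and the three-class setting are illustrative assumptions, not the architecture to be developed in the thesis.

```python
# Minimal sketch of a compact CNN classifier operating on log-mel spectrogram
# patches. The class set ("queen present" / "queen absent" / "external noise")
# and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BeehiveCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.AdaptiveAvgPool2d(1),         # global pooling keeps the model tiny
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames) log-mel patches
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = BeehiveCNN()
dummy_patch = torch.randn(8, 1, 64, 64)      # fake batch of log-mel patches
logits = model(dummy_patch)
print(logits.shape)                          # -> torch.Size([8, 3])
print(sum(p.numel() for p in model.parameters()), "parameters")
```

Global average pooling keeps the parameter count in the thousands, which is compatible with inference on a Raspberry Pi-class device; the actual trade-off between model size and prediction accuracy is one of the questions the thesis will address.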

Related works

  • Master's thesis of Agnieszka Orlowska (supervised by D. Fourer), Feb. 2021 – Sept. 2021.
  • Master's internship of Leila Khellouf (supervised by D. Fourer), “Audio Signal Processing for Remote Beehive Health Analysis”, IBISC Lab, Univ. Evry, Apr. 2020 – Sept. 2020.

Required profile

  • Good knowledge of machine learning and signal processing
  • Mathematical understanding of the formal background
  • Excellent programming skills (Python, Matlab, C++)
  • Strong motivation, high productivity, and methodical work
  • An interest in AI and new technologies

References

[1] Prakhar Amlathe. Standard Machine Learning Techniques in Audio Beehive Monitoring: Classification of Audio Samples with Logistic Regression, K-Nearest Neighbor, Random Forest and Support Vector Machine. PhD thesis, Utah State University, 2018.

[2] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.

[3] Stefania Cecchi, Alessandro Terenzi, Simone Orcioni, and Francesco Piazza. Analysis of the sound emitted by honey bees in a beehive. In Audio Engineering Society Convention 147, Oct 2019.

[4] Tymoteusz Cejrowski, Julian Szymański, Higinio Mora, and David Gil. Detection of the Bee Queen Presence Using Sound Analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 10752 LNAI, pages 297–306, 2018.

[5] J Stephen Downie. Music information retrieval. Annual Review of Information Science and Technology, 37(1):295–340, 2003.

[6] D. Fourer, F. Auger, and P. Flandrin. Recursive versions of the Levenberg-Marquardt reassigned spectrogram and of the synchrosqueezed STFT. In Proc. IEEE ICASSP, pages 4880–4884, Shanghai, China, March 2016.

[7] D. Fourer, F. Auger, and G. Peeters. Local AM/FM parameters estimation: application to sinusoidal modeling and blind audio source separation. IEEE Signal Processing Letters, 25:1600–1604, October 2018.

[8] Dominique Fourer and François Auger. Second-order time-reassigned synchrosqueezing transform: Application to Draupner wave analysis. In Proc. EUSIPCO, A Coruña, Spain, September 2019.

[9] Vladimir Kulyukin, Sarbajit Mukherjee, and Prakhar Amlathe. Toward Audio Beehive Monitoring: Deep Learning vs. Standard Machine Learning in Classifying Beehive Audio Samples. Applied Sciences, 8(9):1573, September 2018.

[10] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

[11] MV Lima, JPAF De Queiroz, LAF Pascoal, EP Saraiva, KO Soares, and A Evangelista-Rodrigues. Smartphone-based sound level meter application for monitoring thermal comfort of honeybees Apis mellifera L. Biological Rhythm Research, pages 1–14, 2019.

[12] Annamaria Mesaros, Toni Heittola, Emmanouil Benetos, Peter Foster, Mathieu Lagrange, Tuomas Virtanen, and Mark D Plumbley. Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(2):379–393, 2017.

[13] Takeru Miyato, Shin-ichi Maeda, Shin Ishii, and Masanori Koyama. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

[14] Ines Nolasco, Alessandro Terenzi, Stefania Cecchi, Simone Orcioni, Helen L Bear, and Emmanouil Benetos. Audio-based identification of beehive states. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8256–8260. IEEE, 2019.

[15] Ines Nolasco and Emmanouil Benetos. To bee or not to bee: Investigating machine learning approaches for beehive sound recognition. arXiv:1811.06016 [cs, eess], November 2018. arXiv: 1811.06016.

[16] Ines Nolasco, Alessandro Terenzi, Stefania Cecchi, Simone Orcioni, Helen L. Bear, and Emmanouil Benetos. Audio-based identification of beehive states. arXiv:1811.06330 [cs, eess], November 2018. arXiv: 1811.06330.

[17] Nicolás Pérez, Florencia Jesús, Cecilia Pérez, Silvina Niell, Alejandro Draper, Nicolás Obrusnik, Pablo Zinemanas, Yamandú Mendoza Spina, Leonidas Carrasco Letelier, and Pablo Monzón. Continuous monitoring of beehives’ sound for environmental pollution control. Ecological Engineering, 90, 2016.

[18] Antonio Robles-Guerrero, Tonatiuh Saucedo-Anaya, Efrén González-Ramírez, and Carlos E Galvan-Tejada. Frequency Analysis of Honey Bee Buzz for Automatic Recognition of Health Status: A Preliminary Study. Research in Computing Science, 142:89–98, 2017.

[19] Antonio Robles-Guerrero, Tonatiuh Saucedo-Anaya, Efrén González-Ramírez, and José Ismael De la Rosa-Vargas. Analysis of a multiclass classification problem by lasso logistic regression and singular value decomposition to identify sound patterns in queenless bee colonies. Computers and Electronics in Agriculture, 159:69–74, 2019.

[20] Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer learning. CoRR, abs/1808.01974, 2018.
