Application Demos

SPG Applications

Indoor Acoustic Environment Monitoring System for a Robotic Platform [S. Grama, L. Grama, C. Rusu, "Using of a Robotic Platform to Detect Acoustic Events for Indoor Environments," in Lecture Notes in Computer Science - Computer Aided Systems Theory - EUROCAST 2024, vol. 15173, pp. 27-39, Springer, Cham, DOI: 10.1007/978-3-031-82957-4_4]
 
  1. Intuitive GUI to facilitate use of the application by an elderly person:
    • Select any available test signal or use one recorded in real-time by OMNI-Z,
    • Extract MFCC-34 and MFCC-64 features,
    • Identify the acoustic event through the joint use of six classification models based on kNN, SVM, and LDA (a Python sketch of this feature-extraction and classification step follows the list)
  2. The robotic platform selects the appropriate task based on the identified event:
    • If the event corresponds to a normal action (moving a chair, opening or closing a door, using the washing machine or the microwave, etc.), OMNI-Z sends a notification email to the person responsible for the permanent remote monitoring of the home,
    • If the event is potentially alarming (a strong cough that may indicate choking, a request for specific types of medication, etc.), OMNI-Z sends an alert email both to the person who must intervene in such cases and to the person who permanently monitors the home remotely (the notification routing is sketched below)
(Python implementation)
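
A minimal sketch of the feature-extraction and joint-classification step, assuming librosa for the MFCCs and scikit-learn for the classifiers; the three model types shown, their hyper-parameters, the MFCC settings, and a simple majority vote are illustrative stand-ins, not the exact six-model OMNI-Z configuration:

  # Minimal sketch (not the OMNI-Z code): librosa computes the MFCCs and
  # scikit-learn provides the classifiers; a plain majority vote stands in
  # for the joint decision over the six models described in the paper.
  import numpy as np
  import librosa
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.svm import SVC
  from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

  def mfcc_features(path, n_mfcc=34):
      """Mean MFCC vector of a '.wav' file (one row per signal)."""
      y, sr = librosa.load(path, sr=None)
      return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

  def train_models(train_paths, labels, n_mfcc=34):
      """Fit kNN, SVM and LDA models on the labelled training signals."""
      X = np.vstack([mfcc_features(p, n_mfcc) for p in train_paths])
      models = [KNeighborsClassifier(n_neighbors=3),
                SVC(kernel="rbf"),
                LinearDiscriminantAnalysis()]
      for m in models:
          m.fit(X, labels)
      return models

  def identify_event(path, models, n_mfcc=34):
      """Majority vote over the trained classifiers."""
      x = mfcc_features(path, n_mfcc).reshape(1, -1)
      votes = [m.predict(x)[0] for m in models]
      return max(set(votes), key=votes.count)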
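
The task-selection step can be pictured with Python's standard smtplib/email modules; the event labels, addresses, and SMTP host below are hypothetical placeholders, not the ones used by the system:

  # Illustrative notification routing (hypothetical labels and addresses).
  import smtplib
  from email.message import EmailMessage

  ALARM_EVENTS = {"strong_cough", "medication_request"}  # hypothetical labels
  MONITOR = "monitor@example.com"       # permanent remote monitoring
  CAREGIVER = "caregiver@example.com"   # intervenes in alarming cases

  def notify(event, smtp_host="localhost"):
      """Send a notification for normal events, an alert for alarming ones."""
      alarming = event in ALARM_EVENTS
      msg = EmailMessage()
      msg["Subject"] = ("ALERT: " if alarming else "Notification: ") + event
      msg["From"] = "omni-z@example.com"
      msg["To"] = ", ".join([CAREGIVER, MONITOR] if alarming else [MONITOR])
      msg.set_content(f"Detected acoustic event: {event}")
      with smtplib.SMTP(smtp_host) as server:
          server.send_message(msg)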
 
 
Audio Data Stream Recording Module [L. Grama, C. Rusu, "Extending Assisted Audio Capabilities of TIAGo Service Robot," in Proceedings of the Conference on Speech Technology and Human Computer Dialogue, Oct. 10-12, 2019, Bucharest, Romania, pp. 1-8, DOI: 10.1109/SPED.2019.8906635]
 
  1. Record a signal through the chosen audio input device,
  2. Remove silence,
  3. Segment the data stream into isolated audio events,
  4. Save the resulting events (a Python sketch of these steps follows)
(Matlab implementation)
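
The module itself is implemented in Matlab; the sketch below only illustrates the record / remove-silence / segment / save pipeline in Python, assuming the sounddevice and soundfile packages and a plain frame-energy threshold (the paper's exact silence-removal rule may differ):

  # Illustrative Python counterpart of the Matlab module (frame length and
  # energy threshold are assumptions, not the published settings).
  import numpy as np
  import sounddevice as sd
  import soundfile as sf

  def record(seconds, fs=16000, device=None):
      """Record from the chosen input device and return a mono signal."""
      x = sd.rec(int(seconds * fs), samplerate=fs, channels=1, device=device)
      sd.wait()
      return x[:, 0], fs

  def segment_events(x, fs, frame_len=0.03, thresh=1e-3):
      """Drop low-energy (silent) frames and group the rest into events."""
      n = int(frame_len * fs)
      active = [float(np.mean(x[i:i + n] ** 2)) > thresh
                for i in range(0, len(x) - n + 1, n)]
      events, start = [], None
      for k, a in enumerate(active):
          if a and start is None:
              start = k
          elif not a and start is not None:
              events.append(x[start * n:k * n])
              start = None
      if start is not None:
          events.append(x[start * n:])
      return events

  def save_events(events, fs, prefix="event"):
      """Write each isolated audio event to its own '.wav' file."""
      for i, e in enumerate(events):
          sf.write(f"{prefix}_{i:03d}.wav", e, fs)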
 
 
Isolated Audio Event Identification Module [L. Grama, C. Rusu, "Extending Assisted Audio Capabilities of TIAGo Service Robot," in Proceedings of the Conference on Speech Technology and Human Computer Dialogue, Oct. 10-12, 2019, Bucharest, Romania, pp. 1-8, DOI: 10.1109/SPED.2019.8906635]
 
  1. Choose a '.wav' signal (of the same class as those present in the database)
    • from the isolated sound events recorded by 'Audio Data Stream Recording Module'
    • or any other sound
  2. Extract MFCC-34 features,
  3. Detect the audio event (using a kNN classifier; see the sketch below)
(Matlab implementation)

(Python implementation)
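
A self-contained sketch of the identification step, again assuming librosa for the MFCC-34 features and scikit-learn's kNN classifier; the training file names and class labels are hypothetical placeholders for the event database used in the paper:

  # Hypothetical training data; replace with the actual event database.
  import numpy as np
  import librosa
  from sklearn.neighbors import KNeighborsClassifier

  def mfcc34(path):
      """Mean MFCC-34 vector of a '.wav' file."""
      y, sr = librosa.load(path, sr=None)
      return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=34).mean(axis=1)

  train = [("door_01.wav", "door"), ("chair_01.wav", "chair")]  # placeholders
  X = np.vstack([mfcc34(p) for p, _ in train])
  y = [label for _, label in train]

  knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
  print(knn.predict(mfcc34("event_000.wav").reshape(1, -1))[0])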