
Backend & Django Integration: Project Documentation
1. Overview
This document describes the backend and Django structure of the Voice Mood Detection
web application. The system uses a trained LSTM model to predict emotional tone from a
user's recorded voice; the prediction logic lives in a standalone, Flask-like Python module,
and Django handles the web interaction.

2. Django Project Structure
The Django project is structured as follows:

voice_mood_web/
├── manage.py # Django's command-line utility
├── voice_mood_web/ # Main project config directory
│ ├── settings.py # Project settings (DEBUG, static files, apps)
│ ├── urls.py # Main URL router, includes app-level URLs
│ └── wsgi.py # WSGI entry point for deployment
├── mood_django/ # Custom Django app for voice processing
│ ├── views.py # Main logic for handling POSTed audio and returning results
│ ├── urls.py # Maps root URL to index view
│ ├── templates/ # Contains index.html (user interface)
│ └── static/ # Holds CSS and JavaScript assets

3. manage.py
This file is the command-line entry point for managing the Django project (runserver, makemigrations, etc.).
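
For reference, the standard `manage.py` that Django generates looks roughly like this (the settings module name `voice_mood_web.settings` is inferred from the project layout above):

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys

def main():
    # Point Django at the project settings before dispatching the command
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'voice_mood_web.settings')
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()

Typical commands are `python manage.py runserver` for local development and `python manage.py makemigrations` / `migrate` for schema changes.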

4. settings.py
Defines configuration such as static file handling, installed apps (including 'mood_django'),
and allowed hosts.
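
An illustrative excerpt; the concrete values below are assumptions, not the project's actual configuration:

# voice_mood_web/settings.py (excerpt)
DEBUG = True                      # disable in production
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

INSTALLED_APPS = [
    # ... Django's default apps ...
    'django.contrib.staticfiles',
    'mood_django',                # the voice-processing app
]

STATIC_URL = '/static/'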

5. voice_mood_web/urls.py
Includes the app's routing using `include('mood_django.urls')`. It connects the base URL ('/')
to the voice detection app.
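
A minimal sketch of that router:

# voice_mood_web/urls.py
from django.urls import include, path

urlpatterns = [
    path('', include('mood_django.urls')),  # hand the base URL over to the app
]
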
6. mood_django/urls.py
Routes requests to the index view that handles both GET and POST operations.
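
A minimal sketch, assuming the view function is named `index` as described in section 7:

# mood_django/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),  # index serves GET and handles POSTed audio
]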

7. mood_django/views.py
Contains the core view function (a simplified sketch follows this list). On POST it:
- Saves uploaded audio to `user_recordings`
- Converts it to mono and resamples using pydub
- Prepares it for prediction by loading the trained LSTM model from `models/`
- Returns predicted emotion and confidence as JSON
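
A simplified sketch of such a view; the upload field name `audio_data` comes from section 8, while the helper `predict_emotion()` and the exact paths are illustrative assumptions rather than the actual code:

# mood_django/views.py - simplified sketch, not the actual implementation
import os
from django.http import JsonResponse
from django.shortcuts import render
from pydub import AudioSegment
from tensorflow.keras.models import load_model

def index(request):
    if request.method == 'POST':
        # Save the uploaded blob to user_recordings/
        os.makedirs('user_recordings', exist_ok=True)
        upload = request.FILES['audio_data']
        path = os.path.join('user_recordings', upload.name)
        with open(path, 'wb') as out:
            for chunk in upload.chunks():
                out.write(chunk)

        # Convert to mono and resample with pydub
        sound = AudioSegment.from_file(path).set_channels(1).set_frame_rate(44100)
        sound.export(path, format='wav')

        # Load the trained LSTM model and predict
        model = load_model(os.path.join('models', 'model.keras'))
        emotion, confidence = predict_emotion(model, path)  # hypothetical helper

        return JsonResponse({'emotion': emotion, 'confidence': confidence})

    return render(request, 'index.html')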

8. Audio Processing Flow

1. The HTML page records audio using MediaRecorder and sends a `.wav` blob via POST.
2. Django saves this to disk using `request.FILES['audio_data']`.
3. The file is converted with pydub to 44.1 kHz, mono, and normalized (see the sketch after this list).
4. This waveform is then transformed into features using the same pipeline as training.
5. The model predicts the emotion and returns it with a confidence score.
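
Step 3 can be expressed with pydub roughly as follows; this is a sketch that assumes the file is re-exported in place:

# Sketch of the pydub conversion step
from pydub import AudioSegment, effects

def prepare_recording(path):
    sound = AudioSegment.from_file(path)
    sound = sound.set_channels(1)        # mono
    sound = sound.set_frame_rate(44100)  # 44.1 kHz
    sound = effects.normalize(sound)     # level normalization
    sound.export(path, format='wav')
    return path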

9. mood_detector.py
A standalone Python module that contains the class `VoiceMoodDetector` (a skeleton sketch follows this list). It handles:
- Safe model loading
- Audio feature extraction (MFCCs, spectral centroid, ZCR)
- Model training and prediction
- Utilities like plotting and dataset loading
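
A skeleton sketch of the class; the feature choices mirror the list above, but the method names and bodies are illustrative assumptions rather than the actual code:

# mood_detector.py - skeleton sketch
import librosa
import numpy as np

class VoiceMoodDetector:
    def __init__(self, model=None):
        self.model = model  # set by safe model loading or by training

    def extract_features(self, wav_path, sr=44100):
        """MFCCs, spectral centroid and zero-crossing rate for one file."""
        y, sr = librosa.load(wav_path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        zcr = librosa.feature.zero_crossing_rate(y)
        return np.concatenate([mfcc.mean(axis=1),
                               centroid.mean(axis=1),
                               zcr.mean(axis=1)])

    def predict(self, wav_path):
        # Shape the feature vector as (batch, timesteps, features) for the LSTM
        features = self.extract_features(wav_path)[np.newaxis, np.newaxis, :]
        probs = self.model.predict(features)[0]
        return int(np.argmax(probs)), float(np.max(probs))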

10. models Directory
Contains the trained components (a loading sketch follows this list):
- model.keras: Trained LSTM model.
- label_encoder.pkl: Encodes emotion labels.
- scaler.pkl: Normalizes input features as in training.
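
Loading these three artifacts is typically a few lines; a sketch assuming standard Keras and pickle formats:

# Sketch: loading the trained components from models/
import pickle
from tensorflow.keras.models import load_model

model = load_model('models/model.keras')

with open('models/label_encoder.pkl', 'rb') as f:
    label_encoder = pickle.load(f)   # maps class indices back to emotion names

with open('models/scaler.pkl', 'rb') as f:
    scaler = pickle.load(f)          # applies the same scaling as during training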

11. Frontend Integration
The frontend consists of a styled HTML form in `templates/index.html`. It uses basic JS to
capture audio, send it to Django, and display the results. CSS is included from
`static/style.css`.
