---
slug: /guides/sentiment-analysis
sidebar_position: 16
x-custom:
  ported_from_readme: true
---

# Sentiment Analysis - Python

This guide will show you how to easily identify the sentiment and emotion of a call. This is useful for post-call analysis, or, with some simple modification, it can be used in real time to actively route the caller.

In this guide, we will accept calls and send the audio to the NLP service, allowing a web application to monitor sentiment in real time. **Sentiment is a score from 0 to 1 that identifies the caller's emotion.** You point a SignalWire phone number to the `/voice_entry` endpoint, and it will assign a score and read it back to you. Once a call has three consecutive negative results, the API dispatches an SMS to a supervisor for assistance.

Keep in mind this is an example. Depending on the sentiment score, you can route the call however your business sees fit, send follow-up surveys for customer satisfaction purposes, begin call recordings, and so much more. The possibilities are truly endless.

## What do I need to run this application?

View the full code on our Github [here](https://github.com/signalwire/guides/tree/main/Voice/Sentiment%20Analysis%20with%20Python)!

You will need the Flask framework and the SignalWire [Python SDK](pathname:///compatibility-api/rest/overview/client-libraries-and-sdks#python) downloaded.

You will also need your API key from [Microsoft Cognitive Services](https://azure.microsoft.com/en-us/services/cognitive-services/) in order to use their sentiment analysis.

## Running the Application

### Build and Run on Docker

Use our pre-built image from Docker Hub:

```
docker pull signalwire/snippets-sentiment-analysis:python
```

Or build and run your own image:

1. Build your image

   ```
   docker build -t snippets-sentiment-analysis .
   ```

2. Run your image

   ```
   docker run --publish 5000:5000 --env-file .env snippets-sentiment-analysis
   ```

3. The application will run on port 5000.

### Build and Run Natively

To run the application natively, execute `export FLASK_APP=app.py`, then run `flask run`.

You may need to use an SSH tunnel for testing this code if running on your local machine; we recommend [ngrok](https://ngrok.com/). You can learn more about how to use ngrok [here](/guides/how-to-test-webhooks-with-ngrok).

## Step by Step Code Walkthrough

In the Github repo, there are 4 files:

- Dockerfile
- README.md
- .env file
- app.py

We can ignore the Dockerfile and README.md; we will start with the .env file and then go over the app.py file below.

### Setup Your Environment File

1. Copy from example.env and fill in your values
2. Save the new file as .env

Your file should look something like this:

```
# This is your key for Microsoft Cognitive Services
MICROSOFT_KEY=
```

### Configuring app.py

The first part of our code defines the function that will actually use the Microsoft Cognitive Services API to analyze the sentiment of the user-given message. Calling this function with the `input_text` (recorded speech from the caller) and the `input_language` (the language of the speaker) as parameters will perform the sentiment analysis and return the response as JSON to make it easier to query later.

```python
import uuid

import requests

# subscription_key is loaded from the MICROSOFT_KEY environment variable
# elsewhere in app.py.

def get_sentiment(input_text, input_language):
    base_url = "https://eastus2.api.cognitive.microsoft.com/text/analytics"
    path = "/v2.0/sentiment"
    constructed_url = base_url + path

    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    }

    # You can pass more than one object in body.
    body = {
        "documents": [
            {
                "language": input_language,
                "id": "1",
                "text": input_text,
            }
        ]
    }

    response = requests.post(constructed_url, headers=headers, json=body)
    return response.json()
```

The route `/voice_entry` will be used to handle incoming calls to your phone number.
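Before moving on to the call flow, it can help to see how the sentiment score comes back. The v2.0 Text Analytics response nests a 0-to-1 `score` per document; the sketch below uses a canned response rather than a live API call (so no key is needed), and `extract_score` is a hypothetical helper, not part of the guide's app.py:

```python
# Sketch: pulling the 0-1 sentiment score out of a v2.0 /sentiment response.
# extract_score() is a hypothetical helper for illustration only.

def extract_score(response_json, doc_id="1"):
    """Return the sentiment score for one document, rounded to two digits."""
    for doc in response_json.get("documents", []):
        if doc["id"] == doc_id:
            return round(doc["score"], 2)
    return None  # document not found or the API reported an error

# Canned response standing in for get_sentiment()'s return value:
canned = {"documents": [{"id": "1", "score": 0.96521}], "errors": []}
print(extract_score(canned))  # 0.97
```

A real response may also carry an `errors` list for documents that could not be analyzed, so it is worth checking that the document you asked about actually appears in `documents` before reading its score.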
Here we will use [Gather](pathname:///compatibility-api/xml/voice/gather) to gather a speech response and [Say](pathname:///compatibility-api/xml/voice/say) to play a prompt advising the user to begin speaking. After they are done speaking, we will hang up the call and redirect to the `/sentiment` route specified in the action URL.

```python
@app.route("/voice_entry", methods=["GET", "POST"])
def voice_entry():
    # Instantiate a voice response
    response = VoiceResponse()

    # Prompt the user
    gather = Gather(action="/sentiment", input="speech", speechTimeout="auto", timeout=10, method="GET")

    # Append say to gather to produce TTS
    gather.say("Please say a phrase or statement, and we will then analyze the verbiage and tell you the sentiment.")

    # Append the gather
    response.append(gather)

    # Hang up the call
    response.hangup()

    # Return the response
    return str(response)
```

In the `/sentiment` route, we will get the `SpeechResult` parameter from the HTTP request to this route. We will use this along with the `input_lang` (English) as parameters for our `get_sentiment()` function. We will round the score to two decimal places and convert it into words that represent the emotion of the caller, as this is more human-friendly than plain numbers. Lastly, we will use [