Detecting Microphone Input Levels in Your iOS Application | by Andronick Martusheff | Jun, 2022

Setting up audio capture in your apps


If you’re familiar with setting up an Xcode project, skip ahead to “Setting up our Audio Capture Session”; otherwise, let’s get started on building our iOS sound detection app!

Let’s start with creating our starter iOS application.

After opening Xcode, and selecting “Create a new project”, we will be prompted to select a platform and Application type.

We will be selecting the iOS default app configuration as shown below.

Next, we will complete our starter project setup by selecting Storyboard as our interface.

After filling in the rest of the required info (project name, organization identifier..etc) we can select next and then save the project.

After doing so, we will be thrown into our new Xcode project.

Getting Started

Let’s make sure we understand how we are getting the audio reading, and where we can start adding our code.

For this, we will start in our view controller, ViewController.swift. In order to gain access to audio-related programming functionality, we are going to need more than what Swift provides by default.

We will be using the AVFoundation framework, so next to where we import UIKit we are going to add import AVFoundation.

Now with AVFoundation imported, we are going to want to turn our attention over to the viewDidLoad() function.

To put it simply, once we run our application, everything we put inside viewDidLoad() will execute once the view loads. (You can try this out yourself: add a print("Hello Medium!") statement within the function, run the app, and watch your message print to the console once your iOS simulator view loads. Its naming scheme is pretty intuitive.)

With the basics covered (and your ViewController.swift file set up with the AVFoundation import), we can now move on to setting up our audio capture session.
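Since the original screenshot isn’t reproduced here, a minimal starting point for ViewController.swift might look like this (the print statement is just the optional sanity check mentioned above):

```swift
import UIKit
import AVFoundation

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        print("Hello Medium!") // runs once the view loads
    }
}
```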

Setting up our Audio Capture Session

Underneath our viewDidLoad() function we are going to start the setup for our audio capture session.

In our new function, we start by creating a recordingSession with the help of AVAudioSession (provided by the framework we previously imported).

This audio session essentially serves as a middleman between our app and the operating system’s access to the host’s audio hardware.

Next, we’ll notice that the bulk of the code here is wrapped in a do/catch block. This is because at every step of configuration, there is the potential for unwanted behavior to emerge (i.e., interacting with our recordingSession may throw an error that needs to be handled adequately).

Any error thrown during the following setup will be caught, and our error message will be printed to the console.

Our recording session will require the following setup:

  1. Set the category for the recording session to be playAndRecord
  2. Activate the recording session with setActive
  3. Attempt to receive microphone permissions from the device with requestRecordPermission
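Since the code screenshot isn’t reproduced here, the three steps above can be sketched as follows (the function name setupAudioSession and the print messages are my own choices, not necessarily the original’s):

```swift
func setupAudioSession() {
    // The shared session mediates between our app and the OS's audio hardware
    let recordingSession = AVAudioSession.sharedInstance()
    do {
        // 1. Set the category to allow simultaneous playback and recording
        try recordingSession.setCategory(.playAndRecord, mode: .default)
        // 2. Activate the recording session
        try recordingSession.setActive(true)
        // 3. Ask the user for microphone permission
        recordingSession.requestRecordPermission { granted in
            if !granted {
                print("Microphone permission was not granted.")
            }
        }
    } catch {
        print("Failed to set up the recording session: \(error.localizedDescription)")
    }
}
```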

Now that we have set up our audio session within our app, we have to declare microphone usage as a permission our app is allowed to request; otherwise, our application will crash on launch.

Enabling Microphone Permission Requests

  1. Navigate over to your project’s Info.plist file.
  2. Hover over a row that’s already in the table, and select the ‘+’ button to add your own property.
  3. For the key, enter Privacy - Microphone Usage Description.
  4. The value is the message your application will display when asking for microphone permissions. Put in something descriptive.
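If you prefer editing the raw XML (right-click Info.plist → Open As → Source Code), the same entry looks like this; the description string here is only a placeholder:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to measure input sound levels.</string>
```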

And that’s all there is to it!

When running your application now, you should be prompted for microphone access with the message you just entered.

If for whatever reason you’re still experiencing issues, compare your view controller against the complete example at the end of this article.

Now to the fun stuff.

Capturing Audio

At this point, we have created our instance of a shared audio session and set up the request for microphone permissions.

We’re now able to set up our AVAudioRecorder (also provided by AVFoundation).

captureAudio 1 of 2
  • Because we are setting up a recording session, by definition we need somewhere to save the recording. documentPath dynamically generates a file path that will work across devices.
  • The audio recording that we’re creating will also require a name. (If security is a concern here, you have the option of not keeping the recorded file saved, but for our purpose here we will be fine.)
  • Then we are going to create a dictionary of settings for our audio recorder. We’re keeping the settings relatively simple here. AVFormatIDKey specifies the type of audio file we will be working with. AVSampleRateKey specifies the audio sample rate (in hertz). AVNumberOfChannelsKey specifies the number of audio channels. AVEncoderAudioQualityKey specifies our desired audio quality levels. (Continued)
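Putting those bullets together, the first half of captureAudio might look like the following (the file name and specific setting values are illustrative choices, not necessarily the original’s):

```swift
// Inside captureAudio — first half: choose a save location and recorder settings.
let documentPath = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
let audioFilename = documentPath.appendingPathComponent("recording.m4a")

let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),               // audio file format (AAC)
    AVSampleRateKey: 12000,                                 // sample rate in hertz
    AVNumberOfChannelsKey: 1,                               // mono: one channel
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue  // desired encoder quality
]
```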
captureAudio 2 of 2
  • With the last of the setup complete, we are now able to create our audio recorder, and again, we’re putting it into a do/catch block.
  • We are going to create our audioRecorder using AVAudioRecorder, passing in the URL and settings dictionary we just created.
  • We’re going to begin the recording using record() on our audioRecorder instance.
  • By default, an audio recording session doesn’t have metering enabled because it uses computing resources, so we’re going to want to enable it to get access to the metered info of the audio recorder. This metered info is what contains the decibel reading.
  • We also of course want to read the metered decibel information more than just once, which is where a Timer comes into play. In short, we will be using this timer to run some code repeatedly over a set time interval.
  • Every time the interval fires, we will update the meters on our audio recording. After updating the meters, we will finally get our desired dB info by retrieving .averagePower(forChannel: 0). This returns the average power of the recording since the last call to updateMeters(), on a scale from -160 dBFS (minimum) to 0 dBFS (maximum). (Recall our settings, where we passed in our number of channels. Since we created only one channel, we are simply reading the first channel here.)
  • Lastly, print this decibel reading to console.
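The second half of captureAudio, following the bullets above, could be sketched like this (variable names follow the article’s text; the 0.1-second interval matches the polling rate mentioned below):

```swift
// Inside captureAudio — second half: create the recorder and poll its meters.
do {
    let audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
    audioRecorder.record()
    // Metering is off by default (it costs computing resources), so enable it
    audioRecorder.isMeteringEnabled = true

    // Read the meters repeatedly, every 0.1 seconds
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        audioRecorder.updateMeters()
        // Channel 0 is our only channel; values range from -160 dBFS to 0 dBFS
        let decibels = audioRecorder.averagePower(forChannel: 0)
        print(decibels)
    }
} catch {
    print("Failed to start recording: \(error.localizedDescription)")
}
```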

Now, if you run your app again you should see your console print out the recorded decibel reading every 0.1 seconds. The louder you get, the closer the value gets to 0.

I will have a follow-up video showing how we can build a responsive UI using the decibel data retrieved from an AVAudioRecorder. Hope I could help!


If for any reason you run into issues, here is my final ViewController.
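The original gist isn’t embedded here, so below is a sketch of what the complete ViewController could look like based on the steps in this article (treat it as a reference reconstruction, not a verbatim copy):

```swift
import UIKit
import AVFoundation

class ViewController: UIViewController {

    var audioRecorder: AVAudioRecorder?  // property, so the recorder isn't deallocated
    var levelTimer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()

        let recordingSession = AVAudioSession.sharedInstance()
        do {
            try recordingSession.setCategory(.playAndRecord, mode: .default)
            try recordingSession.setActive(true)
            recordingSession.requestRecordPermission { [weak self] granted in
                guard granted else { return }
                DispatchQueue.main.async {
                    self?.captureAudio()
                }
            }
        } catch {
            print("Failed to set up the recording session: \(error.localizedDescription)")
        }
    }

    func captureAudio() {
        // Where the recording will be saved
        let documentPath = FileManager.default.urls(for: .documentDirectory,
                                                    in: .userDomainMask)[0]
        let audioFilename = documentPath.appendingPathComponent("recording.m4a")

        // AAC, 12 kHz, mono, high encoder quality
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 12000,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]

        do {
            audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
            audioRecorder?.record()
            audioRecorder?.isMeteringEnabled = true

            // Print the decibel reading every 0.1 seconds
            levelTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
                guard let recorder = self?.audioRecorder else { return }
                recorder.updateMeters()
                print(recorder.averagePower(forChannel: 0))
            }
        } catch {
            print("Failed to start recording: \(error.localizedDescription)")
        }
    }
}
```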

As long as this and your Info.plist file are set up as shown, you should see the decibel readings print.

If you’re still running into issues, feel free to reach out, and I’ll lend as much of a hand as I can!

Thanks for reading.
