AVAudioEngine


Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

Posts under AVAudioEngine tag

52 Posts


MIDI output from Standalone MIDI Processor Demo App to DAW
I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but I cannot send MIDI from the AUv3 extension in standalone mode using the default host app to another program (e.g., Ableton). The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly; Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode.

I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing. I have modified the MIDI manager such that it works well as described above. Here is a part of my code for SimplePlayEngine.swift and my MIDIManager.swift for reference:

@MainActor @Observable public class SimplePlayEngine {
    private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in return noErr }
    var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil

    public init() {
        engine.attach(player)
        engine.prepare()
        setupMIDI()
    }

    private func setupMIDI() {
        if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in
            if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock {
                _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList)
            }
        }) {
            fatalError("Failed to setup Core MIDI")
        }
    }

    func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? {
        reset()
        guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else {
            fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer))")
        }
        do {
            let audioUnit = try await AVAudioUnit.instantiate(
                with: component.audioComponentDescription,
                options: AudioComponentInstantiationOptions.loadOutOfProcess)
            self.avAudioUnit = audioUnit
            self.connect(avAudioUnit: audioUnit)
            return await audioUnit.loadAudioUnitViewController()
        } catch {
            return nil
        }
    }

    private func startPlayingInternal() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        setSessionActive(true)
        if avAudioUnit.wantsAudioInput { scheduleEffectLoop() }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        do {
            try engine.start()
        } catch {
            isPlaying = false
            fatalError("Could not start engine. error: \(error).")
        }
        if avAudioUnit.wantsAudioInput { player.play() }
        isPlaying = true
    }

    private func resetAudioLoop() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        if avAudioUnit.wantsAudioInput {
            guard let format = file?.processingFormat else { fatalError("No AVAudioFile defined.") }
            engine.connect(player, to: engine.mainMixerNode, format: format)
        }
    }

    public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) {
        guard let avAudioUnit = self.avAudioUnit else { return }
        engine.disconnectNodeInput(engine.mainMixerNode)
        resetAudioLoop()
        engine.detach(avAudioUnit)

        func rewiringComplete() {
            scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock
            if isPlaying { player.play() }
            completion()
        }

        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        if isPlaying { player.pause() }

        let auAudioUnit = avAudioUnit.auAudioUnit
        if !auAudioUnit.midiOutputNames.isEmpty {
            auAudioUnit.midiOutputEventBlock = midiOutBlock
        }

        engine.attach(avAudioUnit)
        if avAudioUnit.wantsAudioInput {
            engine.disconnectNodeInput(engine.mainMixerNode)
            if let format = file?.processingFormat {
                engine.connect(player, to: avAudioUnit, format: format)
                engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format)
            }
        } else {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat)
        }
        rewiringComplete()
    }
}

and my MIDI Manager:

@MainActor class MIDIManager: Identifiable, ObservableObject {
    func setupPort(midiProtocol: MIDIProtocolID, receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool {
        guard setupClient() else { return false }
        if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr {
            return false
        }
        for source in self.sources {
            if MIDIPortConnectSource(port, source, nil) != noErr {
                print("Failed to connect to source \(source)")
                return false
            }
        }
        setupVirtualMIDIOutput()
        return true
    }

    private func setupVirtualMIDIOutput() {
        let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource)
        if virtualStatus != noErr {
            print("❌ Failed to create virtual MIDI source: \(virtualStatus)")
        } else {
            print("✅ Created virtual MIDI source: \(virtualSourceName)")
        }
    }

    func sendMIDIData(_ data: [UInt8]) {
        print("hey")
        var packetList = MIDIPacketList()
        withUnsafeMutablePointer(to: &packetList) { ptr in
            let pkt = MIDIPacketListInit(ptr)
            _ = MIDIPacketListAdd(ptr, 1024, pkt, 0, data.count, data)
            if virtualSource != 0 {
                let status = MIDIReceived(virtualSource, ptr)
                if status != noErr {
                    print("❌ Failed to send MIDI data: \(status)")
                } else {
                    print("✅ Sent MIDI data: \(data)")
                }
            }
        }
    }
}
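Separate from whether the extension's midiOutputEventBlock ever fires, note that the block shown above discards whatever it does receive and just returns noErr. A sketch of a forwarding version that hands the bytes to the MIDIManager's virtual source (sendMIDIData is the method from the MIDIManager above; the main-actor hop is needed because the manager is @MainActor, and a real implementation would push into a lock-free FIFO rather than allocate on the render thread):

private let midiOutBlock: AUMIDIOutputEventBlock = { _, _, length, data in
    // Copy the raw MIDI 1.0 bytes out of the render-thread buffer first.
    let bytes = Array(UnsafeBufferPointer(start: data, count: length))
    Task { @MainActor in
        // Forward to the virtual source via the manager shown above.
        MIDIManager.shared.sendMIDIData(bytes)
    }
    return noErr
}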
Replies: 0 · Boosts: 0 · Views: 256 · Activity: 1w

AVAudioUnit host - PCM buffer output silent
Hi, I just started to develop audio unit hosting support in my application. Offline rendering seems to work except that I hear no output, but why? I suspect something goes wrong with the player. I connect to CoreAudio in a different location in the code. Here are some error messages I faced so far:

2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852

I have implemented an AVAudioEngine based AudioUnit host. Here I instantiate player and effect:

/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;

/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];

/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;

/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;

/* output node */
[[AVAudioOutputNode alloc] init];

/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];
[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];

ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline format:av_format maximumFrameCount:buffer_size error:&ns_error];
if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %d", [ns_error code]);
}

ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];
if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}

Then I render in a dedicated thread:

ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];
if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("error during audio engine start - %d", [ns_error code]);
}

[av_audio_sequencer prepareToPlay];

ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];
if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %d", [ns_error code]);
}

[av_audio_player_node play];

while(is_running){
  /* pre sync */

  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;

  /* fill input buffer */

  /* schedule av input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;

  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;

  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];

  [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];

  /* render */
  ns_error = NULL;
  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];
  if(ns_error != NULL && [ns_error code] != noErr){
    g_warning("render offline error - %d", [ns_error code]);
  }
}

regards, Joël
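Two things stand out above: [[AVAudioOutputNode alloc] init] creates an output node that never joins the graph (the engine's own outputNode is the one that counts), and the sequencer is started before the graph reaches an output node, which matches the "no output node" log. For comparison, here is a minimal Swift sketch of the documented offline-rendering order (player only, no sequencer; the file path is a placeholder):

import AVFoundation

func bounceToBuffer() throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)  // mainMixer -> outputNode is implicit

    // Manual rendering must be enabled while the engine is stopped, before start().
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 1024)
    try engine.start()

    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/input.caf"))  // placeholder path
    player.scheduleFile(file, at: nil)
    player.play()

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    while engine.manualRenderingSampleTime < file.length {
        let remaining = file.length - engine.manualRenderingSampleTime
        let frames = min(buffer.frameCapacity, AVAudioFrameCount(remaining))
        if try engine.renderOffline(frames, to: buffer) == .success {
            // Hand `buffer` to the rest of the pipeline or write it to an output file here.
        }
    }
    player.stop()
    engine.stop()
}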
Replies: 3 · Boosts: 0 · Views: 343 · Activity: 1w

Can't set AVAudio sampleRate and installTap needs bufferSize 4800 at minimum
Two issues:

No matter what I set in try audioSession.setPreferredSampleRate(x), the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.

Now, I'm checking the current output loudness to animate a 3D character, using

mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly
    }
}

but any buffer size under 4800 is just ignored and the buffers I get are 4800 sized. This is OK when the sample rate is currently 48000, as 10 samples per second lead to decent visual results. But when AirPods connect, the sample rate is 24000, which means only 5 samples per second, so the character animation looks lame.

My AVAudioEngine setup is the following:

audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)

Now, I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap would get at least 10 samples per second.

PS: Specifying my favorite format in the tap, i.e.

let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)

doesn't change anything either.
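Since the tap's bufferSize argument is treated as advisory, one way to keep roughly 10 updates per second regardless of what the system delivers is to slice each callback's buffer into fixed-duration chunks and compute RMS per chunk. A sketch, using the mixerNode from the graph above:

// Sketch: make the animation rate independent of the delivered buffer size by
// slicing whatever arrives into ~100 ms chunks and computing RMS per chunk.
mixerNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let chunk = max(1, Int(buffer.format.sampleRate / 10))   // 100 ms worth of frames
    var start = 0
    while start < Int(buffer.frameLength) {
        let end = min(start + chunk, Int(buffer.frameLength))
        var sum: Float = 0
        for i in start..<end { sum += samples[i] * samples[i] }
        let rms = (sum / Float(end - start)).squareRoot()
        print(rms)   // replace with a hop to the main actor that drives the character
        start = end
    }
}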
Replies: 1 · Boosts: 0 · Views: 251 · Activity: 3w

Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking. Issue When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block. Details: The audio session is already active and configured with .playAndRecord. The input tap is already installed when the engine is started. When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio. Assumptions / Current Setup Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, only occurring when switching from a receive state to simultaneously talking. Questions Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine? Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio? Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer? Would using separate engines for input and output improve responsiveness? I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately without the delay seen in my current implementation. Relevant Code Snippets Engine Setup func setup() { let input = audioEngine.inputNode do { try input.setVoiceProcessingEnabled(true) } catch { print("Could not enable voice processing \(error)") return } input.isVoiceProcessingAGCEnabled = false let output = audioEngine.outputNode let mainMixer = audioEngine.mainMixerNode audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat) audioEngine.connect(beepNode, to: mainMixer, format: outputFormat) audioEngine.connect(mainMixer, to: output, format: outputFormat) // Initialize converters converter = AVAudioConverter(from: inputFormat, to: outputFormat)! f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)! audioEngine.prepare() } Input Tap Installation func installTap() { guard AudioHandler.shared.checkMicrophonePermission() else { print("Microphone not granted for recording") return } guard !isInputTapped else { print("[AudioEngine] Input is already tapped!") return } let input = audioEngine.inputNode let microphoneFormat = input.inputFormat(forBus: 0) let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)! 
let desiredFormat = outputFormat let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate) input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in guard let self = self else { return } // Output buffer: 1920 frames at 16kHz guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return } outputBuffer.frameLength = outputBuffer.frameCapacity let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in outStatus.pointee = .haveData return buffer } var error: NSError? let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock) if converterResult != .haveData { DebugLogger.shared.print("Downsample error \(converterResult)") } else { self.handleDownsampledBuffer(outputBuffer) } } isInputTapped = true }
Replies: 4 · Boosts: 0 · Views: 275 · Activity: 3w

AVAudioEngine failing with -10877 on macOS 26 beta, no devices detected via AVFoundation but HAL works
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on macOS 26 beta where AVFoundation fails to detect any input devices, and AVAudioEngine.start() throws the familiar error -10877. FB#: FB19024508

Strange behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.

However, CoreAudio HAL does detect all input/output devices: using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole. This suggests the lower-level audio stack is functional.

I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant

I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508
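For reference, a Swift sketch of the HAL enumeration described above (the fallback path that does see devices), which can help confirm whether the failure is confined to the AVFoundation layer:

import CoreAudio

// Enumerate all audio devices directly via the CoreAudio HAL.
func hardwareDeviceIDs() -> [AudioDeviceID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &size) == noErr else { return [] }
    var ids = [AudioDeviceID](repeating: 0, count: Int(size) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &size, &ids) == noErr else { return [] }
    return ids
}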
Replies: 0 · Boosts: 0 · Views: 221 · Activity: Jul ’25

What is the best approach to multi-channel, per-channel volume control.
I've got a setup using AVAudioEngine with several tone generator nodes, each with a chain of processing nodes, the chains then mixed into the main output:

Generator ➡️ Effect ➡️ ... ➡️ .mainMixerNode ➡️ .outputNode
Generator ➡️ Effect ➡️ ... ⤴️
Generator ➡️ Effect ➡️ ... ⤴️

The user should be able to mute any chain individually. I've found several potential approaches to muting, but I'm not terribly happy with any of them.

1. Adjust the amplitudes directly in my tone generators. Issue: consumes CPU even when completely muted. 4 generators add ~15% CPU, even when all chains are muted.
2. Detach/attach chains that are muted/unmuted. Issue: causes loud clicking/popping sounds whenever muted/unmuted.
3. Fade mixer output volume while detaching/attaching a chain (just cutting the volume immediately to 0 doesn't get rid of the clicking/popping). Issue: causes all channels to fade during the transition, so not ideal.

The rest of these ideas are variations on making volume control plus detach/attach work for individual chains, since approach #3 worked well.

4. Add an AVAudioMixerNode to the end of each chain (just for volume control). Issue: only the mixer on the final chain functions -- the others block all output. Not sure what's going on there.
5. Use a matrix mixer (for multi-input volume control), plus detach/attach to reduce CPU if necessary. Not yet attempted, due to perceived complexity and reports of fragility in order of wiring in. A bunch of effort before I even know if it's going to work.
6. Develop my own fader node to put on the end of each channel. Unlike the tone generator (a simple AVAudioSourceNode), developing an effect node seems complex and time consuming. Might not even fix CPU use.

I'm not completely averse to the learning curve of either 5 or 6, but would rather get some guidance on the best approach before diving in. They both seem likely to take more effort than I'd like for the simple behavior I'm trying to achieve.
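For what it's worth, here is a sketch of approach 4 that pins each chain mixer to its own input bus on the main mixer and ramps the volume over a few tens of milliseconds. It assumes `engine` is the AVAudioEngine and `chainTails` holds the last effect node of each chain (both assumptions); if the earlier attempt let connect() pick busses implicitly, the explicit toBus: may be the difference, but that is unverified:

// One AVAudioMixerNode per chain, each on a distinct main-mixer input bus.
let chainMixers = chainTails.map { _ in AVAudioMixerNode() }
for (i, tail) in chainTails.enumerated() {
    let mixer = chainMixers[i]
    engine.attach(mixer)
    engine.connect(tail, to: mixer, format: nil)
    engine.connect(mixer, to: engine.mainMixerNode, fromBus: 0, toBus: i, format: nil)
}

// Ramp a single chain's volume over ~30 ms instead of cutting it instantly,
// which avoids the click without touching the other chains.
func setChain(_ i: Int, muted: Bool) {
    let target: Float = muted ? 0 : 1
    let start = chainMixers[i].outputVolume
    let steps = 30
    for step in 1...steps {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(step)) {
            chainMixers[i].outputVolume = start + (target - start) * Float(step) / Float(steps)
        }
    }
}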
Replies: 0 · Boosts: 0 · Views: 167 · Activity: Jul ’25

Execution breakpoint when trying to play a music library file with AVAudioEngine
Hi all, I'm working on an audio visualizer app that plays files from the user's music library utilizing MediaPlayer and AVAudioEngine. I'm working on getting the music library functionality working before the visualizer aspect. After setting up the engine for file playback, my app inexplicably crashes with an EXC_BREAKPOINT with code = 1. Usually this means I'm unwrapping a nil value, but I think I'm handling the optionals correctly with guard statements. I'm not able to pinpoint where it's crashing. I think it's either in the play function or the setupAudioEngine function. I removed the processAudioBuffer function and my code still crashes the same way, so it's not that. The device that I'm testing this on is running iOS 26 beta 3, although my app is designed for iOS 18 and above. After commenting out code, it seems that the app crashes at the scheduleFile call in the play function, but I'm not fully sure.

Here is the setupAudioEngine function:

private func setupAudioEngine() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session error: \(error)")
    }
    engine.attach(playerNode)
    engine.attach(analyzer)
    engine.connect(playerNode, to: analyzer, format: nil)
    engine.connect(analyzer, to: engine.mainMixerNode, format: nil)
    analyzer.installTap(onBus: 0, bufferSize: 1024, format: nil) { [weak self] buffer, _ in
        self?.processAudioBuffer(buffer)
    }
}

Here is the play function:

func play(_ mediaItem: MPMediaItem) {
    guard let assetURL = mediaItem.assetURL else {
        print("No asset URL for media item")
        return
    }
    stop()
    do {
        audioFile = try AVAudioFile(forReading: assetURL)
        guard let audioFile else {
            print("Failed to create audio file")
            return
        }
        duration = Double(audioFile.length) / audioFile.fileFormat.sampleRate
        if !engine.isRunning {
            try engine.start()
        }
        playerNode.scheduleFile(audioFile, at: nil)
        playerNode.play()
        DispatchQueue.main.async { [weak self] in
            self?.isPlaying = true
            self?.startDisplayLink()
        }
    } catch {
        print("Error playing audio: \(error)")
        DispatchQueue.main.async { [weak self] in
            self?.isPlaying = false
            self?.stopDisplayLink()
        }
    }
}

Here is a link to my test project if you want to try it out for yourself: https://github.com/aabagdi/VisualMan-example Thanks!
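One thing worth ruling out before suspecting the engine itself: music-library items that are cloud-only or DRM-protected cannot be read locally through AVAudioFile, so it can help to screen them out before scheduling anything. A small pre-flight sketch (hasProtectedAsset and isCloudItem are existing MPMediaItem properties):

import MediaPlayer

// Returns true only for items that have a local, non-protected asset.
func canOpenLocally(_ item: MPMediaItem) -> Bool {
    return !item.isCloudItem && !item.hasProtectedAsset && item.assetURL != nil
}

// Possible usage at the top of play(_:):
// guard canOpenLocally(mediaItem), let assetURL = mediaItem.assetURL else {
//     print("Item is protected, cloud-only, or has no local asset URL")
//     return
// }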
Replies: 8 · Boosts: 0 · Views: 584 · Activity: Jul ’25

How to synchronize the clock sources of two audio devices
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
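As far as I know there is no public API to force two independent devices onto one clock; the usual options are to resample between them, to combine them in an aggregate device with drift compensation, or to have the virtual driver publish the same clock domain as the hardware it shadows. For diagnosis, the clock domain each device reports can at least be read through the HAL; a sketch:

import CoreAudio

// Devices reporting the same non-zero clock domain share a clock source;
// different domains will drift relative to each other over time.
func clockDomain(of device: AudioDeviceID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    guard AudioObjectGetPropertyData(device, &address, 0, nil, &size, &domain) == noErr else {
        return nil
    }
    return domain
}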
Replies: 2 · Boosts: 0 · Views: 138 · Activity: May ’25

AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4 channel USB Audio interface, with 4 microphones, and want to process them through 4 independent effect chains. However the output from AVAudioInputNode is a single 4 channel bus. How can I split this into 4 mono busses?

The following code splits the input into 4 copies, and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node but that had no effect.

I'm currently using this code primarily on iOS but it should be portable between iOS and MacOS. It would be possible to do this through a Matrix Mixer Node, but that seems completely overkill for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine.

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];

for(i = 0; i < trackCount; i++) {
  fixMicFormat[i] = [AVAudioMixerNode new];
  [engine attachNode:fixMicFormat[i]];

  // And create reverb/compressor and eq the same way...

  [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
  [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
  [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
  [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];

  [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}

AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];

[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
Replies: 1 · Boosts: 0 · Views: 91 · Activity: May ’25

iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone. Desired behavior: Play audio through Bluetooth headset (AirPods) Record unprocessed environmental audio from the iPhone's built-in microphone Actual behavior: When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs) However, the actual audio data received is clearly still coming from the AirPods microphone The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds Environment Details Device: iPhone 12 Pro Max iOS Version: 18.4.1 Hardware: AirPods Audio Framework: AVAudioEngine (also tried AudioQueue) Code Attempted I've tried multiple approaches to force the correct routing: func configureAudioSession() { let session = AVAudioSession.sharedInstance() // Configure to allow Bluetooth output but use built-in mic try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker]) try? session.setActive(true) // Explicitly select built-in microphone if let inputs = session.availableInputs, let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) { try? session.setPreferredInput(builtInMic) print("Selected input: \(builtInMic.portName)") } // Log the current route let route = session.currentRoute print("Current input: \(route.inputs.first?.portName ?? "None")") // Configure audio engine with native format let inputNode = audioEngine.inputNode let nativeFormat = inputNode.inputFormat(forBus: 0) inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in // Process audio buffer // Despite showing "Built-in Microphone" in route, audio appears to be // coming from AirPods with voice isolation applied - welp! } try? audioEngine.start() } I've also tried various combinations of: Different audio session modes (.default, .measurement, .voiceChat) Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP) Setting session.setPreferredInput() both before and after activation Diagnostic Observations When AirPods are connected: AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput() The actual audio data received shows clear signs of AirPods' voice isolation processing Background/environmental sounds are actively filtered out... When recording a test audio played near the phone (not through the app), the recording is nearly silent. Only headset voice goes through. Questions Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output? Are there any lower-level configurations that might resolve this issue? Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
Replies: 0 · Boosts: 0 · Views: 100 · Activity: May ’25

AVAudioMixerNode outputVolume range?
According to the header file, the outputVolume property's supported range is 0.0-1.0:

/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;

However, when setting the volume to 2.0 the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume? Thanks
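For what it's worth, values above 1.0 don't appear to be documented anywhere, so relying on them seems risky. If gain above unity is the goal, one documented route is an AVAudioUnitEQ's globalGain (in decibels, -96 to +24) inserted before the output. A sketch, assuming `engine` is the AVAudioEngine that owns the mixer:

let boost = AVAudioUnitEQ(numberOfBands: 1)   // bands left flat; only globalGain is used
boost.globalGain = 6                          // +6 dB, roughly a 2x amplitude boost
engine.attach(boost)
engine.connect(engine.mainMixerNode, to: boost, format: nil)
engine.connect(boost, to: engine.outputNode, format: nil)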
Replies: 0 · Boosts: 0 · Views: 47 · Activity: Apr ’25

Crackling/Popping sound when using AVAudioUnitTimePitch
I have a simple AVAudioEngine graph as follows: AVAudioPlayerNode -> AVAudioUnitEQ -> AVAudioUnitTimePitch -> AVAudioUnitReverb -> Main mixer node of AVAudioEngine. I noticed that whenever I have AVAudioUnitTimePitch or AVAudioUnitVarispeed in the graph, I noticed a very distinct crackling/popping sound in my Airpods Pro 2 when starting up the engine and playing the AVAudioPlayerNode and unable to find the reason why this is happening. When I remove the node, the crackling completely goes away. How do I fix this problem since i need the user to be able to control the pitch and rate of the audio during playback. import AVKit @Observable @MainActor class AudioEngineManager { nonisolated private let engine = AVAudioEngine() private let playerNode = AVAudioPlayerNode() private let reverb = AVAudioUnitReverb() private let pitch = AVAudioUnitTimePitch() private let eq = AVAudioUnitEQ(numberOfBands: 10) private var audioFile: AVAudioFile? private var fadePlayPauseTask: Task<Void, Error>? private var playPauseCurrentFadeTime: Double = 0 init() { setupAudioEngine() } private func setupAudioEngine() { guard let url = Bundle.main.url(forResource: "Song name goes here", withExtension: "mp3") else { print("Audio file not found") return } do { audioFile = try AVAudioFile(forReading: url) } catch { print("Failed to load audio file: \(error)") return } reverb.loadFactoryPreset(.mediumHall) reverb.wetDryMix = 50 pitch.pitch = 0 // Increase pitch by 500 cents (5 semitones) engine.attach(playerNode) engine.attach(pitch) engine.attach(reverb) engine.attach(eq) // Connect: player -> pitch -> reverb -> output engine.connect(playerNode, to: eq, format: audioFile?.processingFormat) engine.connect(eq, to: pitch, format: audioFile?.processingFormat) engine.connect(pitch, to: reverb, format: audioFile?.processingFormat) engine.connect(reverb, to: engine.mainMixerNode, format: audioFile?.processingFormat) } func prepare() { guard let audioFile else { return } playerNode.scheduleFile(audioFile, at: nil) } func play() { DispatchQueue.global().async { [weak self] in guard let self else { return } engine.prepare() try? 
engine.start() DispatchQueue.main.async { [weak self] in guard let self else { return } playerNode.play() fadePlayPauseTask?.cancel() playPauseCurrentFadeTime = 0 fadePlayPauseTask = Task { [weak self] in guard let self else { return } while true { let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: true) // Ramp up volume until 1 is reached if volume >= 1 { break } engine.mainMixerNode.outputVolume = volume try await Task.sleep(for: .milliseconds(10)) playPauseCurrentFadeTime += 0.01 } engine.mainMixerNode.outputVolume = 1 } } } } func pause() { fadePlayPauseTask?.cancel() playPauseCurrentFadeTime = 0 fadePlayPauseTask = Task { [weak self] in guard let self else { return } while true { let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: false) // Ramp down volume until 0 is reached if volume <= 0 { break } engine.mainMixerNode.outputVolume = volume try await Task.sleep(for: .milliseconds(10)) playPauseCurrentFadeTime += 0.01 } engine.mainMixerNode.outputVolume = 0 playerNode.pause() // Shut down engine once ramp down completes DispatchQueue.global().async { [weak self] in guard let self else { return } engine.pause() } } } private func updateVolume(for x: Double, rising: Bool) -> Float { if rising { // Fade in return Float(pow(x, 2) * (3.0 - 2.0 * (x))) } else { // Fade out return Float(1 - (pow(x, 2) * (3.0 - 2.0 * (x)))) } } func setPitch(_ value: Float) { pitch.pitch = value } func setReverbMix(_ value: Float) { reverb.wetDryMix = value } } struct ContentView: View { @State private var audioManager = AudioEngineManager() @State private var pitch: Float = 0 @State private var reverb: Float = 0 var body: some View { VStack(spacing: 20) { Text("🎵 Audio Player with Reverb & Pitch") .font(.title2) HStack { Button("Prepare") { audioManager.prepare() } Button("Play") { audioManager.play() } .padding() .background(Color.green) .foregroundColor(.white) .cornerRadius(10) Button("Pause") { audioManager.pause() } .padding() .background(Color.red) .foregroundColor(.white) .cornerRadius(10) } VStack { Text("Pitch: \(Int(pitch)) cents") Slider(value: $pitch, in: -2400...2400, step: 100) { _ in audioManager.setPitch(pitch) } } VStack { Text("Reverb Mix: \(Int(reverb))%") Slider(value: $reverb, in: 0...100, step: 1) { _ in audioManager.setReverbMix(reverb) } } } .padding() } }
Replies: 1 · Boosts: 0 · Views: 108 · Activity: Apr ’25

Low-Level Networking on watchOS for Duplex audio streaming
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low‑level networking on watchOS. I’m configuring my audio session and engine as follows: private func configureAudioSession(completion: @escaping (Bool) -> Void) { let audioSession = AVAudioSession.sharedInstance() do { try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: []) try audioSession.setActive(true, options: .notifyOthersOnDeactivation) // Retrieve sample rate and configure the audio format. let sampleRate = audioSession.sampleRate print("Active hardware sample rate: \(sampleRate)") audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1) // Configure the audio engine. audioInputNode = audioEngine.inputNode audioEngine.attach(audioPlayerNode) audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat) try audioEngine.start() completion(true) } catch { print("Error configuring audio session: \(error.localizedDescription)") completion(false) } } private func setupUDPConnection() { let parameters = NWParameters.udp parameters.includePeerToPeer = true connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupTCPConnection() { let parameters = NWParameters.tcp connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupWebSocketConnection() { guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else { print("Invalid WebSocket URL") return } let session = URLSession(configuration: .default) webSocketTask = session.webSocketTask(with: url) webSocketTask?.resume() print("WebSocket connection initiated") sendAudioToServer() receiveDataFromServer() sendWebSocketPing(after: 0.6) } private func setupNWConnectionHandlers() { connection?.stateUpdateHandler = { [weak self] state in DispatchQueue.main.async { switch state { case .ready: print("Connected (NWConnection)") self?.isConnected = true self?.failToConnect = false self?.receiveDataFromServer() self?.sendAudioToServer() case .waiting(let error), .failed(let error): print("Connection error: \(error.localizedDescription)") DispatchQueue.main.asyncAfter(deadline: .now() + 2) { self?.setupNetwork() } case .cancelled: print("NWConnection cancelled") self?.isConnected = false default: break } } } connection?.start(queue: .main) } Duplex in this context refers to two-way audio transmission simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call. The setup works fine on the simulator, which suggests that the core logic is correct. However, since the simulator doesn’t fully replicate WatchOS hardware behavior especially for audio sessions and networking issues might arise when running on a real device. The problem likely lies in either the Watch’s actual hardware limitations, permission constraints, or specific audio session configurations. I am reaching out to seek further assistance regarding the challenges I've been experiencing with establishing a UDP, TCP & web socket connection on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties From what I can see, your implementation is focused on streaming audio playback with the server. 
In my case, I'm looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full duplex streaming similar to a VOIP call. Additionally, I’d like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
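For the capture side, the usual shape is a tap on the input node that packages each buffer and hands it to the connection, with received packets scheduled back onto the player node. A sketch of two methods that could sit next to the properties shown above (it assumes mono Float32 at the same sample rate on both ends and no packetization or codec, which real code would need, and that the player node has been started):

func startDuplexAudio() {
    let input = audioEngine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Capture: convert each microphone buffer to Data and send it.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
        guard let self, let channel = buffer.floatChannelData?[0] else { return }
        let data = Data(bytes: channel,
                        count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
        self.connection?.send(content: data, completion: .contentProcessed { _ in })
    }

    receiveLoop()
}

func receiveLoop() {
    // Playback: wrap received bytes in a PCM buffer and schedule it.
    connection?.receive(minimumIncompleteLength: 1, maximumLength: 65_536) { [weak self] data, _, _, error in
        guard let self, error == nil else { return }
        if let data, !data.isEmpty, let format = self.audioFormat {
            let frames = AVAudioFrameCount(data.count / MemoryLayout<Float>.size)
            if let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) {
                buffer.frameLength = frames
                data.withUnsafeBytes { raw in
                    if let src = raw.bindMemory(to: Float.self).baseAddress,
                       let dst = buffer.floatChannelData?[0] {
                        dst.update(from: src, count: Int(frames))
                    }
                }
                self.audioPlayerNode.scheduleBuffer(buffer, completionHandler: nil)
            }
        }
        self.receiveLoop()   // keep receiving
    }
}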
Replies: 1 · Boosts: 0 · Views: 76 · Activity: Apr ’25

Attaching procedural audio to an ARKit SCNNode
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see my Stack Overflow post).

Working implementation with a static audio file:

let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)

Attempted implementation with procedural audio:

// Audio generation code ...
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)

In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn't work is creating some procedural audio with an AVAudioNode, as documented here: Apple docs.

Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating: "Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use." However, this method does not appear to be available in Swift.

Any insights or guidance on this matter would be greatly appreciated.
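A hedged workaround sketch, in case it helps narrow things down: bypass SCNAudioPlayer for the procedural source and drive the AVAudioSourceNode through the scene renderer's own audioEngine and audioEnvironmentNode (both are SCNSceneRenderer properties), positioning it at the plane's world position. Whether SceneKit keeps its environment node wired to the output when no SCNAudioPlayer is active is an assumption here, as is `sceneView` (the ARSCNView), `sourceNode`, and `planeNode`:

let engine = sceneView.audioEngine
let environment = sceneView.audioEnvironmentNode

engine.attach(sourceNode)
// Mono input is required for 3D positioning through an environment node.
let mono = AVAudioFormat(standardFormatWithSampleRate: engine.outputNode.outputFormat(forBus: 0).sampleRate,
                         channels: 1)
engine.connect(sourceNode, to: environment, format: mono)
engine.connect(environment, to: engine.mainMixerNode, format: nil)  // in case SceneKit hasn't wired it yet

// Place the sound at the detected plane's world position.
let p = planeNode.worldPosition
sourceNode.position = AVAudio3DPoint(x: p.x, y: p.y, z: p.z)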
Replies: 0 · Boosts: 0 · Views: 135 · Activity: Apr ’25

Creating an initial Now Playing state of paused - impossible?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of the features is, on restart, if you had paused playback of a file at the time the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file is restored exactly as it was, on restart. The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API.

On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo. What happens is that iOS's media view shows the audio as playing and offers a pause button, even though the play action is enabled and the pause action is disabled, until playback has actually been initiated once (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without initiating it once from a device first).

I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, and then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - same behavior I've described. https://github.com/timboudreau/ios-play-pause-demo

I've tried a few things to narrow down the source of the issue - for example, thinking that MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration might be the culprit (since the system interpolates elapsed time and it's recommended to update those properties infrequently), I tried not setting them on startup, but the result is the same, just without a duration or progress shown.

What governs this behavior, and is there some way to explicitly tell the media player API your current state is paused?
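As far as I know, iOS has no public equivalent of macOS's MPNowPlayingInfoCenter.playbackState; the system infers playing versus paused from MPNowPlayingInfoPropertyPlaybackRate. A sketch of publishing a restored-but-paused state (whether the lock screen honors it before the app has ever rendered audio is exactly the open question here):

import MediaPlayer

// Publish a zero playback rate plus the saved elapsed time so the system
// should treat the app as paused at that position.
func publishPausedState(title: String, duration: TimeInterval, elapsed: TimeInterval) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: elapsed,
        MPNowPlayingInfoPropertyPlaybackRate: 0.0,
        MPNowPlayingInfoPropertyDefaultPlaybackRate: 1.0
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}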
Replies: 0 · Boosts: 2 · Views: 70 · Activity: Apr ’25

Audio player app is silent if device connected via CarPlay
I have a SwiftUI app - (https://youtu.be/VbAfUk_eYl0?si=JxUBh0Bpb-vc1E1U) - which I thought was almost ready for release - a manager for airdropped audio files from Logic Pro or other music creation applications. It uses AVAudioEngine and AVAudioPlayerNode to play audio, and the MediaPlayer API to integrate with car audio and similar, all of which works well. It does not currently have an explicit CarPlay integration (and I'm slightly horrified at the amount of work that is going to require).

I had the good or bad luck of getting a loaner car with CarPlay while mine is being repaired yesterday, and lo and behold, when connected to the vehicle via CarPlay, there is no audio output in the vehicle at all. The now playing panel correctly shows the information my app provides about the currently playing song; the player node believes it is playing, the AVAudioSession is configured as it should be. But there is no sound. Obviously I cannot ship it in this state.

I've tried fiddling with the parameters the AVAudioSession is configured with, in case there was some parameter that was preventing audio output, to no avail - currently:

var options = AVAudioSession.CategoryOptions()
options.insert(.allowAirPlay)
options.insert(.allowBluetooth)
options.insert(.allowBluetoothA2DP)
try session.setCategory(.playback, mode: .default, options: options)
try? session.setPreferredIOBufferDuration(0.002) // ~96 samples at 44.1kHz
try? session.setPrefersNoInterruptionsFromSystemAlerts(true)
try? session.setPrefersInterruptionOnRouteDisconnect(false)
try session.setActive(true, options: [.notifyOthersOnDeactivation])

All diagnostics within the app show the player operating correctly - files are played and flushed; AVAudioPlayerNodeCompletionCallbacks are called when they should be. But the output is not audible in the vehicle.

I would much prefer to ship this app without full-blown CarPlay integration, but with working audio when connected via CarPlay, and work on full CarPlay integration for the next release. Is there some secret handshake I am just missing to make this work?
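One thing that might be worth trying: CarPlay shows up as a route change, and if the engine graph was wired before the route switched, the connection to the output node can be carrying a stale hardware format. A sketch of rebuilding the last hop on route changes (it assumes an `engine` property on the observing object; this is a guess at the cause, not a confirmed fix):

// Keep the returned token alive for the lifetime of the engine.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { [weak self] _ in
    guard let self else { return }
    self.engine.stop()
    // Reconnect using the format the new route actually reports.
    let outputFormat = self.engine.outputNode.outputFormat(forBus: 0)
    self.engine.connect(self.engine.mainMixerNode, to: self.engine.outputNode, format: outputFormat)
    do {
        try self.engine.start()
        // Reschedule or resume the player node here if it was playing.
    } catch {
        print("Engine restart after route change failed: \(error)")
    }
}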
Replies: 1 · Boosts: 0 · Views: 67 · Activity: Mar ’25

Play Audio for a Metronome
Hi, I am looking for a good way to play sounds at a high frequency. At the moment I am using the AVAudioEngine, and create a couple AVAudioPlayerNode and for each sound I need to play I create a AVAudioPCMBuffer. When the app needs to play a sound, I get the correct AVAudioPCMBuffer for the sound and use the first available AVAudioPlayerNode and feed it to the buffer. The timing for a metronome app has to be very precise because if it's of by about 16ms the user can hear that it is not playing had the right interval. For low speeds this is working without any problems, but at high speeds it is getting worse. Maybe anyone has an idea on how I can improve my method. Its a Plugin for Flutter. import AVFoundation class FastSoundPlayer { private var audioPlayers: [SoundPlayer?] = [] private var sounds: [String: Sound] = [:] private var engine = AVAudioEngine() let session = AVAudioSession.sharedInstance() init() { do { try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers]) try session.setActive(true) createSoundPlayers(count: 20) try engine.start() } catch { print("Error starting audio engine: \(error.localizedDescription)") } } // Selector method to handle applicationDidBecomeActiveNotification func applicationDidBecomeActive() { // Reinitialize AVAudioEngine and reattach all nodes do { engine.reset() objc_sync_enter(audioPlayers) audioPlayers.removeAll() createSoundPlayers(count: 20) objc_sync_exit(audioPlayers) try engine.start() } catch { print("Error starting audio engine: \(error.localizedDescription)") } } func createSoundPlayers(count: Int) { for _ in 0..<count { let player = SoundPlayer() engine.attach(player.player) engine.connect(player.player, to: engine.mainMixerNode, format: nil) audioPlayers.append(player) } } func load(sound: Data, name: String) { let sound = Sound(soundData: sound) sounds[name] = sound } func play(name: String) { if !engine.isRunning { applicationDidBecomeActive() } guard let sound = sounds[name] else { print("Sound not found") return } if let player = getAvailablePlayer() { player.play(sound: sound) } } func getAvailablePlayer() -> SoundPlayer? { for player in audioPlayers { if !player!.isPlaying { return player } } return nil } } class SoundPlayer { let player = AVAudioPlayerNode() var isPlaying = false init() { player.volume = 1.0 } func play(sound: Sound) { player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in self.complete() } if (player.engine != nil && player.engine!.isRunning) { player.play() isPlaying = true } } func complete() { isPlaying = false } } class Sound { var sound: AVAudioPCMBuffer? 
init(soundData: Data) { do { let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav") try soundData.write(to: temporaryURL) // Create AVAudioFile from the temporary file URL let audioFile = try AVAudioFile(forReading: temporaryURL) // Define the format for the PCM buffer (44100Hz, stereo) let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: false) // Create AVAudioPCMBuffer guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: AVAudioFrameCount(audioFile.length)) else { // Failed to create PCM buffer self.sound = nil return } // Read audio file into PCM buffer try audioFile.read(into: pcmBuffer) // Assign the created AVAudioPCMBuffer to the sound property self.sound = pcmBuffer } catch { print("Error loading sound file: \(error.localizedDescription)") self.sound = nil } } } Thanks!
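A common way to get sample-accurate clicks is to schedule each buffer at an explicit AVAudioTime on the player node's own timeline, instead of calling play per beat from a timer. A sketch (it assumes the engine is running and the player node has already been started):

// Schedule `count` clicks ahead of time at exact sample positions.
func scheduleClicks(bpm: Double, count: Int, click: AVAudioPCMBuffer, player: AVAudioPlayerNode) {
    guard let renderTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: renderTime) else { return }

    let sampleRate = click.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(60.0 / bpm * sampleRate)
    let start = playerTime.sampleTime + AVAudioFramePosition(0.1 * sampleRate)  // small lead-in

    for beat in 0..<count {
        let when = AVAudioTime(sampleTime: start + AVAudioFramePosition(beat) * framesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(click, at: when, options: [], completionHandler: nil)
    }
}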
Replies: 1 · Boosts: 0 · Views: 84 · Activity: Mar ’25

Making sense of AVAudioSession interruption notifications
I have an app under development - demo here - https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO - which is sort of an audio player for airdropped files - something useful to musicians who dump work in progress to their phone, make notes, revise and update.

I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting.

For example, I am playing a song in my app. Switch to Apple Music and start playing a song there. My app gets an interruption began notification - this is consistent. Switch back to my app, and about half the time, I will get an interruption ended notification (coupled often with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped - and followed by a call to my AVAudioPlayerNodeCompletionCallback - is there some way to avoid this?). Half the time I don't get an interruption ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something.

I have not been able to determine any pattern to this behavior, other than that if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile, I think the notification will be consistently delivered on app activation rather than when I activate the session programmatically.

I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
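For what it's worth, Apple's AVAudioSession documentation does note that a begun interruption is not guaranteed to be followed by an ended notification, so handling resumption yourself on reactivation, as you describe, is expected. The one payload worth checking on the ended notification is AVAudioSessionInterruptionOptionKey, since shouldResume indicates whether resuming without user action is appropriate; a sketch:

// Keep the returned token alive for as long as you want to observe.
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(), queue: .main) { note in
    guard let info = note.userInfo,
          let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    switch type {
    case .began:
        // Pause the player node and remember the playback position.
        break
    case .ended:
        let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
        if options.contains(.shouldResume) {
            // Reactivate the session and resume playback here.
        }
    @unknown default:
        break
    }
}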
Replies: 2 · Boosts: 0 · Views: 305 · Activity: Mar ’25

No audio in screen recordings when using AVAudioEngine Voice Processing
Hello, We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature - specifically, the recorded video does not capture any audio when this mode is active. Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously? Any guidance would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 285 · Activity: Mar ’25