hi,
Is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins.
regards, Joël
Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.
Capturing system audio with Core Audio taps
The sample code for modifying the kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.
// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...
    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}
The kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData function must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended.
Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample.
The AudioObjectID for both getting and setting this property should be the ID of the aggregate device.
_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)
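Putting it together, here is a sketch of the corrected flow, assuming propertyAddress is set up for kAudioAggregateDevicePropertyTapList as in the sample and aggregateDeviceID holds the aggregate device's AudioObjectID (however it was obtained):
var propertySize: UInt32 = 0
_ = AudioObjectGetPropertyDataSize(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize)
// Read the current tap list from the aggregate device (placeholder value is overwritten by the get call).
var list: CFTypeRef = NSArray()
_ = withUnsafeMutablePointer(to: &list) { list in
    AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, list)
}
if var listAsArray = list as? [CFString] {
    // ... modify listAsArray, e.g. add or remove the tap's UID ...
    list = listAsArray as CFArray
    // Write the modified list back to the aggregate device (not the tap).
    _ = withUnsafeMutablePointer(to: &list) { list in
        AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, list)
    }
}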
Thank you!
Hi everyone,
I’m testing audio recording on an iPhone 15 Plus using AVFoundation.
Here’s a simplified version of my setup:
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]
audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()
When I check the recorded file’s sample rate, it logs:
Actual sample rate: 8000.0
However, when I inspect the hardware sample rate:
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)
I consistently get:
Hardware sample rate: 48000.0
My questions are:
Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
Is there any way to force the hardware to record natively at 8 kHz?
If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?
Thanks in advance for your guidance!
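PS: For reference, here is a sketch of requesting 8 kHz via setPreferredSampleRate. I have not confirmed whether this changes anything; as far as I understand it is only a hint, and the hardware may keep running at 48 kHz with the system resampling behind the scenes:
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default)
    // Request 8 kHz; the system is free to ignore this and keep the hardware at 48 kHz.
    try session.setPreferredSampleRate(8000)
    try session.setActive(true)
} catch {
    print("Session configuration failed: \(error)")
}
print("Preferred:", session.preferredSampleRate, "Hardware:", session.sampleRate)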
I am working on an application to detect when an audio input device is being used. Basically, I want to know which application is using the microphone (built-in or external).
This app runs on macOS. For macOS versions starting with Sonoma I can use this code:
int getAudioProcessPID(AudioObjectID process)
{
  pid_t pid;
  if (@available(macOS 14.0, *)) {
    constexpr AudioObjectPropertyAddress prop {
      kAudioProcessPropertyPID,
      kAudioObjectPropertyScopeGlobal,
      kAudioObjectPropertyElementMain
    };
    UInt32 dataSize = sizeof(pid);
    OSStatus error = AudioObjectGetPropertyData(process, &prop, 0, nullptr, &dataSize, &pid);
    if (error != noErr) {
      return -1;
    }
  } else {
    // Pre sonoma code goes here
  }
  return pid;
}
which works.
However, kAudioProcessPropertyPID was added in macOS SDK 14.0.
Does anyone know how to achieve the same functionality on previous versions?
I’m running HomePod OS 26 on two HomePod minis and OS 18.6 on my main HomePod (original).
I’ve enabled Crossfade in the Home app.
I’m playing Apple Music directly in the HomePod mini.
Crossfade just doesn’t work on any HomePod.
I can understand it not working on the HomePod - but why isn’t it working on the minis running OS 26?
I’ve tried disabling and re-enabling Crossfade, rebooting the HomePods, etc., but nothing helps.
Hello,
I'm evaluating the Apple Music Feed dataset and I noticed that the total number of songs available in the feed is too small. As of today, the number of objects returned in each feed is:
51,198,712 albums
23,093,698 artists
173,235,315 songs
This gives an average of 3.38 songs per album, which is quite low. Also, iterating over the data, I see that there are albums referencing songs that don't exist in the songs feed. I would like to know:
Is the feed data incomplete?
If so, in what situations may an object be missing from the feed?
Thank you in advance!
Session player regions populate blank, with no sound media, when tracks or regions are created.
Hi,
I've had a new deck installed in my car for about 1.5 weeks.
I'm having compatibility issues with my iPhone 15 Pro Max.
It happens both wired and wirelessly: I get the error "Accessory not supported by this device". It used to happen all the time; now it's 50/50 and sometimes it works.
I've removed and re-paired the Bluetooth connection multiple times on both the phone and the deck. I bought a Belkin USB-C to USB-A cable today, and it seems to fix it, but the problem comes back.
I've also changed the setting Face ID & Passcode > Allow Access When Locked > Accessories.
The car stereo installer reckons it's definitely an issue with the phone, not the deck; I'm inclined to believe him since the error says "by this device".
Any advice appreciated.
Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;
if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}
audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];
/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
But this doesn't show me Audio Unit v2 plugins. Why is that?
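For comparison, here is a Swift sketch of enumerating the same effect components through the C-level AudioComponentFindNext API, which is the classic v2 discovery path (just a cross-check, not a fix):
import AudioToolbox

var description = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)
var component: AudioComponent? = nil
// Walk all registered components matching the description.
while let next = AudioComponentFindNext(component, &description) {
    component = next
    var name: Unmanaged<CFString>?
    if AudioComponentCopyName(next, &name) == noErr, let name {
        print(name.takeRetainedValue())
    }
}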
regards, Joël
I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but I cannot send MIDI from the AUv3 extension in standalone mode using the default host app to another program (e.g., Ableton).
The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly—Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode. I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing.
I have modified the MIDI manager such that it works well as described above. Here is a part of my code for SimplePlayEngine.swift and my MIDIManager.swift for reference:
@MainActor
@Observable
public class SimplePlayEngine {
    private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in return noErr }
    var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil

    public init() {
        engine.attach(player)
        engine.prepare()
        setupMIDI()
    }

    private func setupMIDI() {
        if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in
            if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock {
                _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList)
            }
        }) {
            fatalError("Failed to setup Core MIDI")
        }
    }

    func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? {
        reset()
        guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else {
            fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer))")
        }
        do {
            let audioUnit = try await AVAudioUnit.instantiate(
                with: component.audioComponentDescription, options: AudioComponentInstantiationOptions.loadOutOfProcess)
            self.avAudioUnit = audioUnit
            self.connect(avAudioUnit: audioUnit)
            return await audioUnit.loadAudioUnitViewController()
        } catch {
            return nil
        }
    }

    private func startPlayingInternal() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        setSessionActive(true)
        if avAudioUnit.wantsAudioInput { scheduleEffectLoop() }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        do { try engine.start() } catch {
            isPlaying = false
            fatalError("Could not start engine. error: \(error).")
        }
        if avAudioUnit.wantsAudioInput { player.play() }
        isPlaying = true
    }

    private func resetAudioLoop() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        if avAudioUnit.wantsAudioInput {
            guard let format = file?.processingFormat else { fatalError("No AVAudioFile defined.") }
            engine.connect(player, to: engine.mainMixerNode, format: format)
        }
    }

    public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) {
        guard let avAudioUnit = self.avAudioUnit else { return }
        engine.disconnectNodeInput(engine.mainMixerNode)
        resetAudioLoop()
        engine.detach(avAudioUnit)
        func rewiringComplete() {
            scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock
            if isPlaying { player.play() }
            completion()
        }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        if isPlaying { player.pause() }
        let auAudioUnit = avAudioUnit.auAudioUnit
        if !auAudioUnit.midiOutputNames.isEmpty { auAudioUnit.midiOutputEventBlock = midiOutBlock }
        engine.attach(avAudioUnit)
        if avAudioUnit.wantsAudioInput {
            engine.disconnectNodeInput(engine.mainMixerNode)
            if let format = file?.processingFormat {
                engine.connect(player, to: avAudioUnit, format: format)
                engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format)
            }
        } else {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat)
        }
        rewiringComplete()
    }
}
And here is my MIDIManager:
@MainActor
class MIDIManager: Identifiable, ObservableObject {
    func setupPort(midiProtocol: MIDIProtocolID,
                   receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool {
        guard setupClient() else { return false }
        if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr {
            return false
        }
        for source in self.sources {
            if MIDIPortConnectSource(port, source, nil) != noErr {
                print("Failed to connect to source \(source)")
                return false
            }
        }
        setupVirtualMIDIOutput()
        return true
    }

    private func setupVirtualMIDIOutput() {
        let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource)
        if virtualStatus != noErr {
            print("❌ Failed to create virtual MIDI source: \(virtualStatus)")
        } else {
            print("✅ Created virtual MIDI source: \(virtualSourceName)")
        }
    }

    func sendMIDIData(_ data: [UInt8]) {
        print("hey")
        var packetList = MIDIPacketList()
        withUnsafeMutablePointer(to: &packetList) { ptr in
            let pkt = MIDIPacketListInit(ptr)
            _ = MIDIPacketListAdd(ptr, 1024, pkt, 0, data.count, data)
            if virtualSource != 0 {
                let status = MIDIReceived(virtualSource, ptr)
                if status != noErr {
                    print("❌ Failed to send MIDI data: \(status)")
                } else {
                    print("✅ Sent MIDI data: \(data)")
                }
            }
        }
    }
}
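One thing I have been experimenting with (just a sketch, not verified to fix the problem): replacing the no-op midiOutBlock above with a block that forwards the bytes into the MIDIManager's virtual source instead of discarding them. The hop to the main actor here is purely illustrative and not real-time safe:
private let midiOutBlock: AUMIDIOutputEventBlock = { _, _, length, data in
    // Copy the raw MIDI bytes out of the callback.
    let bytes = [UInt8](UnsafeBufferPointer(start: data, count: length))
    Task { @MainActor in
        // Forward to the virtual source created in MIDIManager.setupVirtualMIDIOutput().
        MIDIManager.shared.sendMIDIData(bytes)
    }
    return noErr
}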
Hi, when using ApplicationMusicPlayer from MusicKit my app automatically gets the media controls on the lock screen: play/pause, skip buttons, playback position, etc.
I would like to customize these. I tried a bunch of things, e.g. using MPRemoteCommandCenter, but so far I haven't had any success.
Does anyone know how I can customize the media controls of ApplicationMusicPlayer?
Thank you.
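For context, the MPRemoteCommandCenter approach I tried looks roughly like this; it did not change the lock screen controls for ApplicationMusicPlayer in my tests, and the particular commands here are just an example:
import MediaPlayer

let center = MPRemoteCommandCenter.shared()
// Try to hide the default track-skip buttons and expose a 15-second skip instead.
center.nextTrackCommand.isEnabled = false
center.previousTrackCommand.isEnabled = false
center.skipForwardCommand.isEnabled = true
center.skipForwardCommand.preferredIntervals = [15]
_ = center.skipForwardCommand.addTarget { _ in
    // Custom seek logic would go here.
    return .success
}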
A bit of a novice to app development here but I have a paid developer account, I have registered the identifier for MusicKit on the developer website (using the bundle identifier I've selected in Xcode) but the option to add MusicKit as a capability is not available in Xcode?
I've manually updated the certificates, closed and reopened the app, started a new project, and tried with a different demo project.
Apologies if I am missing something obvious but could someone help me get this capability added?
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages.
When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error:
Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download."
The ".ar" appears to be the language code, which in this case was Arabic.
When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language was completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here.
For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so that seems like a clearly unsupported language. But I can't tell if the languages I'm trying, like Arabic, are supported and something is going wrong, or if this error represents something I can work around.
Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself.
private func setUpAnalyzer() async throws {
    guard let sourceLanguage else {
        throw Error.languageNotSpecified
    }
    guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
        throw Error.unsupportedLanguage
    }
    let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
    self.transcriber = transcriber
    let reservedLocales = await AssetInventory.reservedLocales
    if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
        if let oldest = reservedLocales.last {
            await AssetInventory.release(reservedLocale: oldest)
        }
    }
    do {
        let status = await AssetInventory.status(forModules: [transcriber])
        print("status: \(status)")
        if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
            try await installationRequest.downloadAndInstall()
        }
    }
    ...
I am trying to use the SpeechDetector module in the Speech framework along with SpeechTranscriber, and it is giving me an error:
Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule')
Below is how I am using it
let speechDetector = Speech.SpeechDetector()
let transcriber = SpeechTranscriber(locale: Locale.current,
                                    transcriptionOptions: [],
                                    reportingOptions: [.volatileResults],
                                    attributeOptions: [.audioTimeRange])
speechAnalyzer = try SpeechAnalyzer(modules: [transcriber, speechDetector])
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect, Apple's policy compliant method (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
I'm looking for solutions that align with Apple's policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
In the iOS 26 beta 5 SDK, AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated in iOS 8, even though this option was not deprecated in iOS 18.6. I think this is a mistake and the deprecation actually happens in iOS 26. Am I right?
It seems that the substitute for this option is "AVAudioSessionCategoryOptionAllowBluetoothHFP". The documentation does not make it clear whether the behaviour is exactly the same or whether any difference should be expected... Has anyone used this option in iOS 26? Should I expect any difference from the current behaviour of "AVAudioSessionCategoryOptionAllowBluetooth"?
Thank you.
I'm experiencing a significant limitation with MusicKit's Dolby Atmos implementation on macOS and would appreciate clarification on whether this is intended behavior or if there are solutions available.
When streaming Dolby Atmos content through MusicKit's ApplicationMusicPlayer, the output is limited to 2-channel stereo, even when:
Audio MIDI Setup is configured for 7.1.4 (12-channel) output
The same tracks play in full multichannel through the native Apple Music app
Dolby Atmos is set to "Automatic" in Apple Music preferences
Please let me know if there is any way to enable this. If not, is this documented anywhere? Thanks!
Hi,
I just started to develop audio unit hosting support in my application.
Offline rendering seems to work except that I hear no output, but why?
I suspect something goes wrong with the player.
I connect to Core Audio at a different location in the code.
Here are some error messages I faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine-based Audio Unit host. Here I instantiate the player and effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;
/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];
/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;
/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;
/* output node */
[[AVAudioOutputNode alloc] init];
/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];
[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];
ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                 format:av_format
                      maximumFrameCount:buffer_size error:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %d", [ns_error code]);
}
ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio engine start - %d", [ns_error code]);
}
[av_audio_sequencer prepareToPlay];
ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %d", [ns_error code]);
}
[av_audio_player_node play];
while(is_running){
  /* pre sync */
  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;
  /* fill input buffer */
  /* schedule av input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;
  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;
  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];
  [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];
  /* render */
  ns_error = NULL;
  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];
  if(ns_error != NULL &&
     [ns_error code] != noErr){
    g_warning("render offline error - %d", [ns_error code]);
  }
}
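For comparison, here is a minimal Swift sketch of the offline manual rendering pattern as I understand it from the documentation (a player feeding the main mixer, manual rendering enabled before start). The input file URL is a placeholder:
import AVFoundation

func renderFileOffline(inputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let file = try AVAudioFile(forReading: inputURL)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // Manual rendering mode must be enabled while the engine is stopped, before start().
    try engine.enableManualRenderingMode(.offline,
                                         format: file.processingFormat,
                                         maximumFrameCount: 4096)
    try engine.start()

    player.scheduleFile(file, at: nil)
    player.play()

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    while engine.manualRenderingSampleTime < file.length {
        let remaining = AVAudioFrameCount(file.length - engine.manualRenderingSampleTime)
        let status = try engine.renderOffline(min(remaining, buffer.frameCapacity), to: buffer)
        if status != .success {
            break
        }
        // Inspect or write `buffer` here.
    }

    player.stop()
    engine.stop()
}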
regards, Joël
Two issues:
No matter what I set in
try audioSession.setPreferredSampleRate(x)
the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.
Now, I'm checking the current output loudness to animate a 3D character, using
mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly
but any buffer size under 4800 is just ignored and the buffers I get are 4800 sized.
This is OK when the sample rate is 48000, as 10 samples per second lead to decent visual results.
But when AirPods connect, the sample rate is 24000, which means only 5 samples per second, so the character animation looks lame.
My AVAudioEngine setup is the following:
audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)
Now, I'd be fine if the outputNode runs at whatever if it needs, as long as my tap would get at least 10 samples per second.
PS: Specifying my favorite format in the
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
doesn't change anything either
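For reference, the RMS calculation in my tap looks roughly like this; taking channel 0 and the animateCharacter(loudness:) call are just illustrative, not the actual names:
mixerNode.installTap(onBus: 0, bufferSize: 4800, format: nil) { [weak self] buffer, _ in
    // Compute RMS over the first channel of the delivered buffer.
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frameCount {
        sum += samples[i] * samples[i]
    }
    let rms = sqrt(sum / Float(max(frameCount, 1)))
    Task { @MainActor in
        self?.animateCharacter(loudness: rms)   // hypothetical animation hook
    }
}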
In my app I use AVAssetReaderTrackOutput to extract PCM audio from a user-provided video or audio file and display it as a waveform.
Recently a user reported that the waveform is not in sync with his video, and after receiving the video I noticed that the waveform is in fact twice as long as the video duration, i.e. it shows the audio in slow motion, so to speak.
Until now I was using
CMFormatDescription.audioStreamBasicDescription.mSampleRate
which for this particular user video returns 22'050. But in this case it seems that this value is wrong... because the audio file has two audio channels with different sample rates, as returned by
CMFormatDescription.audioFormatList.map({ $0.mASBD.mSampleRate })
The first channel has a sample rate of 44'100, the second one 22'050. If I use the first sample rate, the waveform is perfectly in sync with the video.
The problem is given by the fact that the ratio between the audio data length and the sample rate multiplied by the audio duration is 8, double the ratio for the first audio file (4, which matches 16-bit interleaved stereo PCM at 4 bytes per frame). In the code below this ratio is given by
Double(length) / (sampleRate * asset.duration.seconds)
When commenting out the line with the sampleRate variable definition in the code below and uncommenting the following line, the ratios for both audio files are 4, which is the expected result. I would expect audioStreamBasicDescription to return the correct sample rate, i.e. the one used by AVAssetReaderTrackOutput, which (I think) somehow merges the stereo tracks. The documentation is sparse, and in particular it’s not documented whether the lower or higher sample rate is used; in this case, it seems like the higher one is used, but audioStreamBasicDescription for some reason returns the lower one.
Does anybody know why this is the case or how I should extract the sample rate of the produced PCM audio data? Should I always take the higher one?
I created FB19620455.
let openPanel = NSOpenPanel()
openPanel.allowedContentTypes = [.audiovisualContent]
openPanel.runModal()
let url = openPanel.urls[0]
let asset = AVURLAsset(url: url)
let assetTrack = asset.tracks(withMediaType: .audio)[0]
let assetReader = try! AVAssetReader(asset: asset)
let readerOutput = AVAssetReaderTrackOutput(track: assetTrack, outputSettings: [AVFormatIDKey: Int(kAudioFormatLinearPCM), AVLinearPCMBitDepthKey: 16, AVLinearPCMIsBigEndianKey: false, AVLinearPCMIsFloatKey: false, AVLinearPCMIsNonInterleaved: false])
readerOutput.alwaysCopiesSampleData = false
assetReader.add(readerOutput)
let formatDescriptions = assetTrack.formatDescriptions as! [CMFormatDescription]
let sampleRate = formatDescriptions[0].audioStreamBasicDescription!.mSampleRate
//let sampleRate = formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }).max()!
print(formatDescriptions[0].audioStreamBasicDescription!.mSampleRate)
print(formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }))
if !assetReader.startReading() {
    preconditionFailure()
}
var length = 0
while assetReader.status == .reading {
    guard let sampleBuffer = readerOutput.copyNextSampleBuffer(), let blockBuffer = sampleBuffer.dataBuffer else {
        break
    }
    length += blockBuffer.dataLength
}
print(Double(length) / (sampleRate * asset.duration.seconds))
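As a cross-check, the effective sample rate of the decoded PCM can also be derived from the data length itself. This sketch assumes 16-bit interleaved output and a known channel count; both are assumptions on my part, not something the reader guarantees:
let channelCount = 2.0                  // assumed; could be read from the format description
let bytesPerFrame = channelCount * 2.0  // 16-bit samples, i.e. 2 bytes per sample
let derivedSampleRate = Double(length) / (bytesPerFrame * asset.duration.seconds)
print("Derived sample rate:", derivedSampleRate)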