swift - How to apply audio effect to a file and write to filesystem - iOS
I'm building an app that should allow the user to apply audio filters to recorded audio, such as reverb and boost.
I've been unable to find any viable source of information on how to apply filters to the file itself, which is needed because the processed file has to be uploaded to a server later.
I'm currently using AudioKit for visualization, and I'm aware it's capable of audio processing, but only for playback. Please give suggestions for further research.
You can use the newly introduced "manual rendering" features of Audio Units (see the example below).
If you need to support an older macOS/iOS version, I would be surprised if you couldn't achieve the same with AudioKit (even though I haven't tried it myself): for instance, using an AKSamplePlayer as the first node (which reads the audio file), then building and connecting your effects, and using an AKNodeRecorder as the last node.
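A rough sketch of that AudioKit route might look like the following. This is untested and hedged: it assumes AudioKit 4's API, uses AKAudioPlayer in place of AKSamplePlayer for simplicity, and the file name "recording.caf" is hypothetical; exact names and signatures may differ across AudioKit versions.

```swift
import AudioKit

// player -> reverb -> recorder: record what flows through the last node.
let file = try AKAudioFile(readFileName: "recording.caf") // hypothetical file name
let player = try AKAudioPlayer(file: file)
let reverb = AKReverb(player)
reverb.dryWetMix = 0.5

AudioKit.output = reverb
try AudioKit.start()

// AKNodeRecorder taps the given node and writes its output to a file
let recorder = try AKNodeRecorder(node: reverb)
try recorder.record()
player.play()

// ...once playback finishes, stop the recorder; recorder.audioFile
// then holds the processed audio, ready for upload.
```

Note the design difference: unlike the manual-rendering example below, this approach records in real time while the file plays back, so processing a long file takes as long as the file itself.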
Example of manual rendering using the new Audio Unit features:
```swift
import AVFoundation

//: ## Source File
//: Open the audio file to process
let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
    let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
    sourceFile = try AVAudioFile(forReading: sourceFileURL)
    format = sourceFile.processingFormat
} catch {
    fatalError("could not open source audio file, \(error)")
}

//: ## Engine Setup
//:    player -> reverb -> mainMixer -> output
//: ### Create and configure the engine and its nodes
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()

engine.attach(player)
engine.attach(reverb)

// set desired reverb parameters
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 50

// make connections
engine.connect(player, to: reverb, format: format)
engine.connect(reverb, to: engine.mainMixerNode, format: format)

// schedule source file
player.scheduleFile(sourceFile, at: nil)

//: ### Enable offline manual rendering mode
do {
    let maxNumberOfFrames: AVAudioFrameCount = 4096 // maximum number of frames the engine will be asked to render in any single render call
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
} catch {
    fatalError("could not enable manual rendering mode, \(error)")
}

//: ### Start the engine and player
do {
    try engine.start()
    player.play()
} catch {
    fatalError("could not start engine, \(error)")
}

//: ## Offline Render
//: ### Create an output buffer and an output file
//: The output buffer format must be the same as the engine's manual rendering output format
let outputFile: AVAudioFile
do {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputURL = URL(fileURLWithPath: documentsPath + "/mixLoopProcessed.caf")
    outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
} catch {
    fatalError("could not open output audio file, \(error)")
}

// buffer into which the engine will render the processed data
let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                                frameCapacity: engine.manualRenderingMaximumFrameCount)!

//: ### Render loop
//: Pull the engine for the desired number of frames, and write the output to the destination file
while engine.manualRenderingSampleTime < sourceFile.length {
    do {
        let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            // data has been rendered successfully
            try outputFile.write(from: buffer)
        case .insufficientDataFromInputNode:
            // applicable only if using the input node as one of the sources
            break
        case .cannotDoInCurrentContext:
            // the engine could not render in the current render call, retry in the next iteration
            break
        case .error:
            // an error occurred while rendering
            fatalError("render failed")
        }
    } catch {
        fatalError("render failed, \(error)")
    }
}

player.stop()
engine.stop()

print("Output \(outputFile.url)")
print("AVAudioEngine offline rendering completed")
```
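Since the stated goal is to upload the processed file to a server, once the offline render completes the output URL can be handed to a standard URLSession upload task. A minimal sketch follows; the endpoint URL is hypothetical, and `outputFile` refers to the file written by the render loop above.

```swift
import Foundation

// Hypothetical endpoint; replace with your server's upload URL.
var request = URLRequest(url: URL(string: "https://example.com/upload")!)
request.httpMethod = "POST"
request.setValue("audio/x-caf", forHTTPHeaderField: "Content-Type")

// outputFile.url points at the rendered file on disk
let task = URLSession.shared.uploadTask(with: request, fromFile: outputFile.url) { data, response, error in
    if let error = error {
        print("upload failed, \(error)")
    } else {
        print("upload finished")
    }
}
task.resume()
```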
You can find more docs and examples on the updates to the Audio Unit format there.