The goal of the Web Audio API is to replicate features found in desktop audio production applications: mixing, processing, filtering, and so on.
The Web Audio API has a lot of potential and can do awesome stuff. But first: how well is the API supported across browsers?
Browser support: green across the board.
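One caveat: older Safari builds shipped the constructor under a webkit prefix, so a small feature check doesn’t hurt. A minimal sketch:

// Older Safari builds exposed the API as webkitAudioContext.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;

if (AudioContextClass) {
  const ctx = new AudioContextClass();
  console.log('Web Audio API supported, sample rate:', ctx.sampleRate);
} else {
  console.log('Web Audio API not supported in this browser.');
}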
Cool, worth digging into. 👍
Good question! Here are a couple of examples demonstrating the capabilities of the Web Audio API. Make sure you have your sound on.
Most of the basic use cases covered: https://webaudioapi.com/samples/
Complicated synthesizer example: https://tonejs.github.io/examples/#buses
The Web Audio API handles audio operations through an audio context. Everything starts from the audio context: with it, you can hook up different audio nodes.
Audio nodes are linked by their inputs and outputs, forming a chain that runs from a source to a destination, the destination being the node that actually outputs the sound to your speakers or headphones.
Audio context schema
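In code, a minimal chain routes a source through a volume control into the destination. This sketch assumes the page has an <audio> element; the gain value is an arbitrary choice:

const audioCtx = new AudioContext();

// Use an existing <audio> element on the page as the source node.
const audioElement = document.querySelector('audio');
const source = audioCtx.createMediaElementSource(audioElement);

// A GainNode sits between the source and the speakers to control volume.
const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.5; // half volume

// source -> gain -> destination (the speakers)
source.connect(gainNode);
gainNode.connect(audioCtx.destination);

// Browsers may keep the context suspended until a user gesture;
// call audioCtx.resume() from a click handler if nothing plays.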
If you’re the type of person who wants to know all the tiny details, here’s a sweet link to get you started.
If you’re more into visual learning, here’s a great introductory talk about the Web Audio API. Check it out!
One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations.
For a fun demo of this idea: Show HN: Randomly generated metal riffs using Web Audio API and React.
This article explains how, and provides a couple of basic use cases.
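The heavy lifting is done by an AnalyserNode. Roughly, the idea looks like this (the oscillator source and fftSize value are just placeholder choices):

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048; // placeholder; controls the frequency resolution

// Insert the analyser between a source and the destination.
const source = audioCtx.createOscillator();
source.connect(analyser);
analyser.connect(audioCtx.destination);
source.start();

// Read the current waveform into a typed array,
// e.g. once per animation frame when drawing a visualization.
const dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteTimeDomainData(dataArray); // or getByteFrequencyData() for a spectrum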
If you’re keen on learning the Web Audio API in depth, here’s a great series:
Here’s a free book about the Web Audio API by Boris Smus (an interaction engineer at Google).
The Web Audio API is relatively intuitive to understand. Here’s an abstract example of how to use it; a full sketch follows the step breakdown below.
const audioCtx = new AudioContext();
Breakdown of the steps:
- We create a new AudioContext object by calling new AudioContext().
- We bind our oscillator and volume controller (a GainNode) to the audio context.
- We connect our oscillator and volume controller to our sound system (the context’s destination).
- We set our oscillator’s waveform type and frequency value (tuning).
- We start our oscillator. The start method of the OscillatorNode interface specifies the exact time to start playing the tone; with no argument, it starts immediately.
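Putting those steps together, a full sketch might look like this (the sine wave, 440 Hz, and volume value are just illustrative choices):

const audioCtx = new AudioContext();

// Bind an oscillator and a volume controller (gain) to the context.
const oscillator = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();

// Connect oscillator -> gain -> speakers.
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);
gainNode.gain.value = 0.5; // half volume

// Tuning: waveform type and frequency in hertz (440 Hz is concert A).
oscillator.type = 'sine';
oscillator.frequency.value = 440;

// start() accepts an optional time; with no argument it plays immediately.
oscillator.start();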
Of course, as with all great things, there’s always room to grow and improve. Here’s some healthy feedback from people much smarter than me.
If you’re unsure about the use cases for such an API, think about all the music composition software out there that is desktop-only. Converting those desktop apps into web apps would be a viable business idea.
Why is the web better in this case? Well, for starters: you can save and close your workspace, then pick it up again from any other machine. Musicians travel a lot, so this would benefit artists by a huge margin.
Another example would be enhancing the user experience with sound. (Careful not to overdo this!)
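A rough sketch of that idea: a short, quiet blip on a button click. The function name, pitch, gain, and duration here are all arbitrary:

function playClickBlip(audioCtx) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();

  osc.connect(gain);
  gain.connect(audioCtx.destination);

  osc.frequency.value = 880; // arbitrary pitch
  gain.gain.setValueAtTime(0.1, audioCtx.currentTime); // keep it quiet
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + 0.15);

  osc.start();
  osc.stop(audioCtx.currentTime + 0.15); // a blip, not a jingle
}

// e.g. button.addEventListener('click', () => playClickBlip(audioCtx));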
Accessibility is another one: new solutions and better experiences for blind and visually impaired people who rely on screen readers.
If you’re interested in staying up to date, the Web Audio Conf is an excellent event to take part in.
Thanks for reading, stay awesome! ❤