All you need to know about the Web Audio API


Did you know JavaScript has a constantly evolving high-level API for processing and synthesizing audio? How cool is that!

The goal of the Web Audio API is to replicate features found in desktop audio production applications: mixing, processing, filtering, and so on.

The Web Audio API has a lot of potential and can do awesome stuff. But first, how well is the API supported across the board?

(Browser support table: green across the board in all modern browsers)

Cool, worth digging into. 👍
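You can also verify support at runtime before touching any audio code. Here's a minimal sketch of feature detection; hasWebAudio is a hypothetical helper name, and the check covers the older webkit-prefixed constructor that some Safari versions shipped:

```javascript
// Detect Web Audio API support, including the legacy
// webkit-prefixed constructor used by older Safari versions.
function hasWebAudio() {
  return typeof (globalThis.AudioContext ||
                 globalThis.webkitAudioContext) !== 'undefined';
}

if (hasWebAudio()) {
  console.log('Web Audio API is available');
} else {
  console.log('No Web Audio API; fall back to plain <audio> elements');
}
```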


What is the web audio API capable of doing?

Good question! Here are a couple of examples demonstrating the capabilities of the Web Audio API. Make sure your sound is on.

Most of the basic use cases covered: https://webaudioapi.com/samples/

Complicated synthesizer example: https://tonejs.github.io/examples/#buses

The Web Audio API handles audio operations through an audio context. Everything starts from the audio context: with it, you create and hook up different audio nodes.

Audio nodes are linked via their inputs and outputs, forming a chain that runs from a source node, through any processing nodes, to a destination node. The destination is the actual audio output, usually your device's speakers, producing the sound waves we pick up with our ears.

Audio context schema

If you’re the type of person who wants to know all the tiny details, here’s a sweet link to get you started.

This article explains some of the audio theory behind how the features of the Web Audio API work.

If you’re more into visual learning, here’s a great introduction talk about the Web Audio — check it out!

Steve Kinney: Building a musical instrument with the Web Audio API | JSConf US 2015

One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations.

https://webaudioapi.com/samples/visualizer/

https://tonejs.github.io/examples/#analysis

https://tonejs.github.io/examples/#meter

Show HN: Randomly generated metal riffs using Web Audio API and React

This article explains how, and provides a couple of basic use cases.

Visualizations with Web Audio API
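The machinery behind these visualizer demos is the AnalyserNode. Here's a minimal sketch of how one might be wired up; attachAnalyser is a hypothetical helper, and source is assumed to be any audio node, such as an oscillator or a MediaElementAudioSourceNode wrapping an <audio> tag:

```javascript
// Attach an AnalyserNode to an audio source and return a
// function that samples the current waveform on each call.
function attachAnalyser(audioCtx, source) {
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048; // analysis window size; must be a power of two
  source.connect(analyser);

  // frequencyBinCount is fftSize / 2
  const data = new Uint8Array(analyser.frequencyBinCount);
  return function sample() {
    analyser.getByteTimeDomainData(data); // 128 = silence, 0/255 = peaks
    return data;
  };
}
```

In a real visualizer you would call sample() inside a requestAnimationFrame loop and draw the returned array to a canvas.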

If you’re keen on learning the audio API in depth, here’s a great series:

Web Audio API | 01: Introduction to AudioContext

Here’s a free book about the Web Audio API by Boris Smus (interaction engineer at Google).

https://webaudioapi.com/book/Web_Audio_API_Boris_Smus.pdf


A glance at the API

The Web Audio API is relatively intuitive to understand. Here’s an abstract example of how to use the API.

webaudio.js
const audioCtx = new AudioContext();

/* The following example demonstrates how to create an oscillator audio source that will provide a simple tone. */
const oscillator = audioCtx.createOscillator();

/* The example also creates a gainNode node to control sound volume. */
const gainNode = audioCtx.createGain();

/* The default output mechanism of your device (usually your system speakers) is accessed using AudioContext.destination.
The following example demonstrates how to connect the oscillator, gain node, and destination together.
*/
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);

/* The following example sets a specific pitch (in hertz) and type for an oscillator node,
then sets the oscillator to play the sound.
*/
oscillator.type = 'square'; // square wave; other values are 'sine', 'sawtooth', 'triangle' and 'custom'
oscillator.frequency.value = 480; // value in hertz
oscillator.start();

Breakdown of the steps:

  • We create a new AudioContext object with the new keyword.
  • We create an oscillator (our sound source) and a gain node (our volume controller) from the audio context.
  • We connect the oscillator to the gain node, and the gain node to the destination (our sound system).
  • We set the oscillator's wave type and frequency value (tuning).
  • We start the oscillator. The start() method of the OscillatorNode interface accepts an optional argument specifying the exact time to start playing the tone.
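Stopping is just as simple, but a hard stop produces an audible click. A common trick is to ramp the gain down first. A minimal sketch, assuming an oscillator and gainNode wired up as in the snippet above (fadeOutAndStop is a hypothetical helper name):

```javascript
// Fade the gain to zero over `fadeSeconds`, then stop the
// oscillator. Ramping avoids the click a hard stop produces.
function fadeOutAndStop(audioCtx, oscillator, gainNode, fadeSeconds = 0.5) {
  const now = audioCtx.currentTime;
  gainNode.gain.setValueAtTime(gainNode.gain.value, now);
  gainNode.gain.linearRampToValueAtTime(0, now + fadeSeconds);
  oscillator.stop(now + fadeSeconds); // scheduled, not immediate
}
```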

Using the Web Audio API


Big potential, room to grow

Of course, as with all great things, there’s always room to grow and improve. Here’s some healthy feedback from people much smarter than I am.

I don’t know who the Web Audio API is designed for | Hacker News


Making music with the browser

Jake Albaugh showing how to create music with the browser

Wrap up

If you’re unsure about the use cases for such an API, think about all the music composition software out there that is desktop-only. Converting those desktop apps to web apps would be a very workable business idea.

Why is the web better in this case? Well, for starters: you can save and close your workspace on one machine and continue from another. Musicians travel a lot, so this approach would benefit artists by a huge margin.

Another example would be enhancing the user experience with sound. (Careful not to overdo this!)
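A UI sound can be as small as a function like the sketch below; playBlip is a hypothetical helper, and ctx is assumed to be a shared AudioContext created on first user interaction, since browsers block audio before a user gesture:

```javascript
// A short, quiet blip for UI feedback, kept subtle on purpose.
function playBlip(ctx, frequency = 880, duration = 0.08) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = 'sine';
  osc.frequency.value = frequency;
  gain.gain.setValueAtTime(0.1, ctx.currentTime); // low volume
  // exponential ramps cannot reach 0, so target a near-zero value
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + duration);
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + duration);
}
```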

Better solutions and experiences for blind and visually impaired people who use screen readers to browse websites. In a word: accessibility.

What else can we do with the Web Audio API? — Christoph Guttandin

If you’re interested in staying up to date, the Web Audio Conf is an excellent event to take part in.

alemangui/web-audio-resources

Thanks for reading, stay awesome! ❤
