July 2012 Archives
July 31, 2012
I simply cannot keep ToneCraft to myself any longer. I've had a great time playing with this web app by DinahMoe and find it really well assembled. The app, inspired by ToneMatrix, is a sequencer represented as a 3D grid on which the user places colored blocks representing different kinds of sounds or textures. Blocks can be stacked and removed, the whole scene can be rotated by holding the Shift key, and users can share their creations via Twitter and Facebook.
Just stumbled across Cloud Audio Workstation (Claw) yesterday, a project built by Gabriel Cardoso, who also happens to be a member of the W3C's Audio Working Group. What is Claw? I'll let them explain:
Claw is a website that allows a group of musicians to work on their projects from anywhere using an online multitrack sequencer. This is not "yet another DAW". The aim is to complement the tools we are used to with the following features:
- Distance working: access, listen to, and edit your musical projects from a web browser (i.e. from anywhere)
- Exchange project format: import and export songs from one DAW to another
- Version control: share a project with your band, follow their work, pick what you want from their versions without breaking yours, go back, etc.
- Project management: comment, create and assign tasks, etc.
- Social network: meet other musicians and use Claw to collaborate
I've been meaning to post a simple Web Audio API example Matt Diamond shared with the W3C Audio WG listserve a couple of weeks ago. The code, jQuery includes aside, totals 120 lines and is very readable. It produces the kind of light, airy background ambience one might expect behind a guided meditation recording, and the user can modify it simply by adjusting the provided sliders.
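For flavor, here's a minimal sketch of the general pattern such an ambient piece uses: an oscillator run through a lowpass filter and a gain node, with a slider value mapped exponentially to pitch. This is purely illustrative and is not Matt Diamond's actual code; the graph-building portion is guarded so the pure helper also runs outside a browser. (Note that the 2012-era API spelled `start()` as `noteOn()`.)

```javascript
// Hypothetical sketch, not the example's real source. Pure helper:
// map a 0-100 slider value to a frequency on an exponential scale,
// since linear slider motion should feel linear in pitch.
function sliderToFrequency(value, minHz, maxHz) {
  return minHz * Math.pow(maxHz / minHz, value / 100);
}

// Browser-only portion: build the audio graph when the Web Audio API
// is available (guarded so this file also runs under Node).
const Ctx = typeof window !== "undefined" &&
  (window.AudioContext || window.webkitAudioContext);
if (Ctx) {
  const ctx = new Ctx();
  const osc = ctx.createOscillator();      // drone source
  const filter = ctx.createBiquadFilter(); // soften the tone
  const gain = ctx.createGain();           // overall volume

  filter.type = "lowpass";
  filter.frequency.value = 800;
  gain.gain.value = 0.2;

  osc.connect(filter);
  filter.connect(gain);
  gain.connect(ctx.destination);

  // A slider at mid-position picks the drone's pitch.
  osc.frequency.value = sliderToFrequency(50, 55, 880);
  osc.start();
}

console.log(sliderToFrequency(0, 55, 880));   // 55
console.log(sliderToFrequency(100, 55, 880)); // 880
```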
I'm also thinking that the plugin repo would encourage instrument/effect pull requests from developers, or perhaps a separate project could be created for gathering such submissions. Basically, it would be a way for the community to develop awesome web audio sounds that developers can easily include in their projects.
Have thoughts on the matter? The thread is open and awaiting your input!
July 28, 2012
Roby Baxter shared a new project of his on the W3C Audio WG listserve a couple of days ago. The project, called Patter, is a monophonic step sequencer built using the Web Audio API. The source code can be found on GitHub. A few folks on the listserve have posted examples they created. Neat idea and implementation. Plus, the code includes a Flash fallback for browsers without Web Audio API support.
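The core of a monophonic step sequencer is small enough to sketch: a fixed-length pattern holds at most one note per step, and a scheduler converts tempo and step index into playback times. This is an illustrative sketch of the general technique, not Patter's actual source; the note values and pattern length are made up.

```javascript
// Illustrative step-sequencer timing, not Patter's code.
// Duration of one 16th-note step at a given tempo.
function stepDuration(bpm) {
  return 60 / bpm / 4; // a quarter note split into four 16ths
}

// Absolute time (in seconds) of step n, counted from startTime.
function stepTime(startTime, n, bpm) {
  return startTime + n * stepDuration(bpm);
}

// A simple 8-step pattern: MIDI note numbers, null = rest.
// Monophonic means each step holds at most one note.
const pattern = [60, null, 63, null, 67, null, 63, null];

// Walk the pattern once and report when each note should sound;
// in a real sequencer these times would feed oscillator start calls.
function schedulePattern(pattern, startTime, bpm) {
  const events = [];
  pattern.forEach((note, i) => {
    if (note !== null) {
      events.push({ note, time: stepTime(startTime, i, bpm) });
    }
  });
  return events;
}

console.log(stepDuration(120));                       // 0.125
console.log(schedulePattern(pattern, 0, 120).length); // 4
```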
Along with this week's Safari 6 update, Apple has updated their developer documentation to include information about using the Web Audio API. There are some nice bits of sample code in there, as well as some basic help for anyone just starting out. Worth bookmarking for future reference. (A PDF of the entire Safari HTML5 Audio and Video Guide is also available.)
Just noticed that HTML5 Rocks has updated their tutorial on capturing audio and video in a web browser and thought it might be a good time to include it here. I've only played around with this a little (using Canary), but find the implementation to be rather rough around the edges. And by that I mean that I don't think it's really ready for primetime, at least not on Canary. Perhaps it works better on one of the supported Android browsers?
Michael Mahemoff posted an article yesterday detailing his experiences trying to get HTML5 Audio working properly on his Android device. In a nutshell, system events on Android can cause paused audio to resume playing, with no regard for what the developer intended. How does this become an issue? Well, if you're a game developer using audio sprites, a phone call or text message can cause the audio stream to play unexpectedly. An example is provided.
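For anyone unfamiliar with the technique, here's a sketch of how audio sprites typically work: many short sounds share one file, a map records each sound's offset and duration, and playback seeks to the offset and pauses at the end. The sprite names and timings below are hypothetical, and this is not Mahemoff's code; it just shows why an unexpected resume mid-file is so disruptive.

```javascript
// Hypothetical audio-sprite map: offsets into one shared file.
const spriteMap = {
  jump:  { start: 0.0, duration: 0.4 },
  coin:  { start: 0.5, duration: 0.25 },
  crash: { start: 1.0, duration: 1.2 },
};

// When should playback stop for a named sprite?
function spriteEnd(map, name) {
  const s = map[name];
  return s.start + s.duration;
}

// Browser-only: seek to the sprite and pause once its end is reached.
// This is exactly the pattern the Android bug breaks: a system event
// can resume the element past the sprite you intended to play.
function playSprite(audioEl, map, name) {
  const end = spriteEnd(map, name);
  audioEl.currentTime = map[name].start;
  audioEl.play();
  audioEl.ontimeupdate = () => {
    if (audioEl.currentTime >= end) audioEl.pause();
  };
}

console.log(spriteEnd(spriteMap, "coin")); // 0.75
```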
Mahemoff also touches briefly on issues with the Chrome browser on Android.
July 25, 2012
Today, Apple released the latest major update to the Safari web browser. This new version of Safari, which works on the latest version of Lion and the just-released Mountain Lion, includes support for the Web Audio API, bringing audio support that was previously only available in Google Chrome to a significant number of Mac users. You can obtain Safari 6 through the Software Update tool in Lion or by upgrading to Mountain Lion.
I briefly tested some of my favorite recent implementations of the Web Audio API in this new version (under Lion) and found support to be rather hit and miss. Some examples worked, others reported that I needed Google Chrome for Web Audio API functionality. The biggest issue, unfortunately, is still codecs. Several examples I tried utilized Ogg Vorbis audio files, which Safari still does not support. Hopefully aurora.js can remedy this — since browser makers seem unwilling to solve the universal codec support problem — but we'll need to see more use of it.
Update: Is Safari for Windows EOL'd? I don't see any mention of it, nor a download link.
July 24, 2012
Mozilla has announced that the new Firefox 15 beta includes support for the upcoming Opus audio format. The cited benefits of Opus include better compression than existing formats, suitability for both speech and music, dynamically adjustable bitrate, audio bandwidth, and coding delay, and support for both realtime and pre-recorded audio.
The page announcing codec support includes a "Why Should I Care?" section, anticipating that most people will yawn at the news that Mozilla, rather than getting off its free-software high horse (read: "principles") and supporting MP3 or AAC, is adding yet another codec to the mix. Apparently, the codec is intended for use in WebRTC. Wikipedia lists four other audio codecs as intended for WebRTC, none of which is Opus, so it's unclear whether Opus is simply Mozilla's push for an official WebRTC codec or whether it will become the official choice at some later point. Either way, time will tell whether other browser makers (Apple, Google, Microsoft, Opera, etc.) will include support.
Personally, I tend to agree with Gruber's analysis on free software codecs. They aren't necessarily safer than patent-encumbered codecs; they just haven't yet been proven to infringe on the multitude of software patents in the wild. The lack of universal support of a single codec has made the HTML5 <audio> tag virtually unusable today. I'm not sure that adding another codec will solve the problem.
Like a cross between a spectrograph and an old-school mechanical music box, Canvas.fm provides a really neat way to see your audio rendered in a very pleasing visual form. The system, tied in with SoundCloud, can visualize any audio you might find on SoundCloud's service. Definitely worth checking out!
Update: This only works with Firefox. On both Safari and Chrome the user is told, "Sorry, your browser doesn't provide enough awesome to fully display this visualisation ☹ You need to be using a recent version of Firefox." The audio still plays, though.
The folks over at creativejs.com have continued their Web Audio API tutorial series with the more advanced topics of 3D panning and convolution. The tutorial covers the PannerNode, which allows audio to be positioned and played back within a three-dimensional space, as well as the background behind convolution reverb and how to implement it in a project.
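As a rough sketch of what wiring these nodes up looks like (not the tutorial's own code): a ConvolverNode needs an impulse response buffer, and when no recorded impulse response is at hand, exponentially decaying noise makes a serviceable synthetic one. The decay shape and durations below are arbitrary choices, and the graph-building portion is guarded so the pure helper also runs outside a browser. (`copyToChannel` is the modern API spelling; the 2012 API filled channel data directly.)

```javascript
// Generate a synthetic impulse response: white noise shaped by an
// exponential decay envelope. Purely illustrative parameter choices.
function makeImpulseResponse(sampleRate, seconds, decay) {
  const length = Math.floor(sampleRate * seconds);
  const samples = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    samples[i] = (Math.random() * 2 - 1) * Math.pow(1 - i / length, decay);
  }
  return samples;
}

// Browser-only: feed the IR to a ConvolverNode and place a source in
// 3D space with a PannerNode (guarded so this also runs under Node).
if (typeof window !== "undefined" && window.AudioContext) {
  const ctx = new AudioContext();
  const convolver = ctx.createConvolver();
  const buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
  buffer.copyToChannel(makeImpulseResponse(ctx.sampleRate, 2, 3), 0);
  convolver.buffer = buffer;

  const panner = ctx.createPanner();
  panner.setPosition(1, 0, -1); // source to the listener's front right

  panner.connect(convolver);
  convolver.connect(ctx.destination);
}

const ir = makeImpulseResponse(44100, 0.5, 3);
console.log(ir.length); // 22050
```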
July 23, 2012
Stuart Memo is keeping busy. Today he's released a project that allows web synth developers to integrate a straightforward QWERTY keyboard control into their project with a minimum of effort. There's a Web Audio API demo embedded into the main page, so fire up Chrome and check it out!
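The heart of any QWERTY-keyboard controller is a key-to-note mapping plus the standard equal-temperament conversion from MIDI note number to frequency. The sketch below is illustrative only, not Stuart Memo's actual source; the particular key layout chosen (home row as white notes, the row above as sharps, mirroring a piano) is an assumption.

```javascript
// Illustrative QWERTY-to-note mapping, not the project's own code.
// Home-row keys are white notes; the row above supplies the sharps.
const KEY_TO_SEMITONE = {
  a: 0, w: 1, s: 2, e: 3, d: 4, f: 5,   // C, C#, D, D#, E, F
  t: 6, g: 7, y: 8, h: 9, u: 10, j: 11, // F#, G, G#, A, A#, B
  k: 12,                                // C an octave up
};

// Equal temperament: MIDI note 69 is A4 = 440 Hz.
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Resolve a pressed key to a frequency, relative to a base MIDI note
// (pass 60 for middle C). Returns null for non-note keys.
function keyToFrequency(key, baseNote) {
  const semitone = KEY_TO_SEMITONE[key];
  if (semitone === undefined) return null;
  return midiToFrequency(baseNote + semitone);
}

console.log(midiToFrequency(69));     // 440
console.log(keyToFrequency("h", 60)); // 440 ("h" = A above middle C)
console.log(keyToFrequency("z", 60)); // null
```

In a browser, a `keydown` handler would feed `event.key` through `keyToFrequency` and set an oscillator's frequency accordingly.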
July 13, 2012
Want to add visualization of an audio stream into your web app? ThreeAudio provides functionality to do this using the Web Audio API, either as a standalone toolkit or as a jQuery plug-in. Unfortunately, there isn't a working demonstration posted anywhere at this time, so I can't link directly to something that runs without building something myself to test out the functionality. Given the statement in the readme regarding the current experimental nature of the code, I'll hold out hope that a simple working demo gets posted eventually.
July 12, 2012
Doug Schepers noted today on the W3C Audio Working Group's listserve that the Device APIs Working Group has issued a "last call" for comments on the HTML Media Capture API. As Doug mentioned in his post, the Web Audio API is expected to make use of the HTML Media Capture API to obtain microphone input, so it would behoove anyone with a vested interest in web audio to peruse the document and make relevant comments. Additionally, Doug points out that the security model implemented in the HTML Media Capture API would make a good model for the forthcoming Web MIDI API.
Comments on HTML Media Capture must be in by 9 August 2012.
If you've been following along, you likely remember the proof-of-concept Web Audio Editor I linked to just over a month ago. That web app, written as a student project, demonstrated that there are likely some interesting audio tool implementations in Web Audio API's future. Today we see another such demonstration: Plucked.de's HTML5 Audio Editor.
While last month's Web Audio Editor had a fairly limited feature set, the HTML5 Audio Editor has several useful features implemented, not the least of which is a "Save" button. The web app allows the user to drag-and-drop an audio file into the interface and perform various kinds of edits and effects, including cut, copy, paste, fade in, fade out, normalize, silence, and loop. It's rather impressive, actually, and could easily satisfy basic editing needs for a novice user who wanted to perform a simple edit.
Even better is the fact that the source code is available, too, and draws on some of Audacity's code.
Chrome Canary is recommended at this time, as the most recent stable release of Chrome tends to stutter during audio playback.
July 10, 2012
In terms of browser support, audiolib.js utilizes sink.js, another of Jussi Kalliokoski's projects, which provides a unified interface for outputting audio in browsers that support either the Web Audio API or the Audio Data API. As noted on the sink.js GitHub page, Flash fallback support could be added by writing a plugin that provides the functionality.
If you've been following along here, you likely saw the article last month about how only 7.1% of Android device users could play audio in their browser using the <audio> tag. Google recently provided updated figures that put Ice Cream Sandwich's usage share at 10.9%, an increase of 3.8 points since last month. In what hopefully signals a shift away from an antiquated version of the Android operating system, Gingerbread's usage share decreased by 1 point to 64%. Usage share numbers for Jelly Bean, the newest release of Android, were not included in Google's figures.
July 9, 2012
I was going to write up a story about the really awesome WebBeeper 2A03 synth from the prolific folks at g200kg.com, but when I delved further into why it existed at all, I discovered WebMidiLink. What is WebMidiLink, you ask? Well, it's a technology that allows multiple web synths across different websites to be synchronized and played from a single interface. I could explain further, but I think you really should just play with it instead.
Some things to notice in the interface:
- You get three virtual instruments to play from the keyboard on the demo page.
- The checked instrument is the one that plays when you click the keys.
- Below the keyboard, you can load a web synth that supports WebMidiLink by selecting a synth for a particular instrument and clicking "Load". You can also enter your own synth's URL in the URL field and load it.
- To program the three synths to change MIDI channels, adjust the output volume, play music automatically, and more, put the properly coded instructions in that instrument's MML field, then click "MML Play" beneath the keyboard to hear the result.
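If MML (Music Macro Language) is new to you, the flavor is easy to convey with a toy parser. The sketch below handles only a small slice of classic MML syntax (note letters, "+" for sharp, "r" for a rest) and is purely hypothetical; WebMidiLink's actual MML dialect may differ, and real MML also encodes lengths, octaves, and tempo.

```javascript
// Toy parser for a small slice of classic MML. Hypothetical sketch
// for flavor only; not WebMidiLink's actual dialect.
const NOTE_OFFSETS = { c: 0, d: 2, e: 4, f: 5, g: 7, a: 9, b: 11 };

function parseMml(mml) {
  const events = [];
  for (let i = 0; i < mml.length; i++) {
    const ch = mml[i].toLowerCase();
    if (ch === "r") {
      events.push({ rest: true });               // a rest
    } else if (ch in NOTE_OFFSETS) {
      let semitone = NOTE_OFFSETS[ch];
      if (mml[i + 1] === "+") { semitone += 1; i++; } // sharp
      events.push({ semitone });                 // a note
    }
    // Anything else (lengths, octave changes, tempo) is ignored here.
  }
  return events;
}

console.log(parseMml("cdef+r")); // C, D, E, F#, rest
```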
You might want to read some further discussion by several of the folks implementing this new technology.
This synthesizer example, while over a year old, demonstrates a fairly simple implementation of a synthesizer built using Mozilla's Audio Data API. The code is easy to follow, but the sluggish performance makes it difficult to use. The author had this to say about it:
The biggest issue I have with this demo is the huge buffering delay. Unfortunately, I couldn't do much about it because it's imposed by Firefox, which requires a 500ms delay between writing audio buffers.
Still seems worth a gander, though.
As noted on the W3C Audio Working Group's public listserve, Alistair MacDonald is stepping down as co-chair of the group, leaving Olivier Thereaux as sole chair. Doug Schepers says it better than I can:
Alistair was critical in the foundation of the Audio WG. He was at the Processing.js workshop where I first learned of the project that eventually became Mozilla's original Audio Data API; he and I brainstormed about how to start a standardization activity around it, and he stepped up to chair the Audio Incubator Group, where we gathered support for a Working Group to create a Web audio API. He built an enormous number of cool demos, and collected together many more from the community, and very much promoted and fostered the technology. He communicated with many stakeholders to help ensure that the Audio WG would start off on good footing.
He also warmly welcomed Olivier as Co-Chair to the Audio WG, and they complemented each other well.
Without Al, this WG would not be in nearly as good a position as we are today, and it may not have been viable at all. We formed this group at just the right time, when there were early steps in the browsers, but before real fragmentation had yet set in, and just as developers were recognizing the need for advanced audio capability beyond the HTML5 <audio> element.
July 5, 2012
A very interesting Kickstarter project crossed my path a couple of days ago. Titled StarBound, it includes interactive music, an Altered Reality video game, and a comic book. The final result is expected to be a mobile app allowing for unlimited possibilities in remixing an album made using sonified light curves (the sound of actual stars, obtained from NASA's Kepler mission).
What I found particularly fascinating were the two demos, which take advantage of the Web Audio API. Both are worth spending some time playing with and offer creative interfaces for interactive music in a browser.
Update: it seems the coder behind the two demos I linked to is the same guy who created Sympyrean. Also, I found a comment he posted on Reddit that explained some hidden features you might want to try out:
If you change the hash for the URL you can check out what different combinations look like geometrically. For instance #48 is a 4 x 8 system
I've set it up for 3 through 5 instruments, and 3 - 10 (10 being a) different stems.
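Reading that comment, the hash scheme appears to be two digits: instruments first, stems second, with "0" presumably standing in for ten. That reading is my own guess, so the sketch below is speculative; the function name and validation ranges are mine, not the demo author's.

```javascript
// A guess at the URL-hash scheme described above: "#48" = 4
// instruments x 8 stems. Purely speculative; ranges (3-5 instruments,
// 3-10 stems, "0" meaning ten) are taken from the Reddit comment.
function parseSystemHash(hash) {
  const m = /^#(\d)(\d)$/.exec(hash);
  if (!m) return null;
  const instruments = Number(m[1]);
  const stems = m[2] === "0" ? 10 : Number(m[2]);
  if (instruments < 3 || instruments > 5 || stems < 3) return null;
  return { instruments, stems };
}

console.log(parseSystemHash("#48")); // { instruments: 4, stems: 8 }
console.log(parseSystemHash("#50")); // { instruments: 5, stems: 10 }
console.log(parseSystemHash("#29")); // null (out of range)
```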