Recent posts: Blog

Beati Sounds was loved by Laidback Luke at Dancefair 2016

At Dancefair we took part in Laidback Luke's remix competition. The assignment was:
- Receive some stems from one of his tracks.
- Remix them within two hours.

We are proud to share the result with you, made in two hours of high-speed producing. And yes, we're giving it to you as a FREE download! Enjoy this remix!

https://soundcloud.com/beatisounds/beati-sounds-love-at-dancefair-2016

17 February 2016 | Blog

Everything you need to know about digital audio files

Don’t know the difference between lossy and lossless? What’s the deal with bit rates? Let us explain.
If you use iTunes or if you buy and download digital music, you’ll have come across a number of terms and abbreviations that describe digital audio files. This alphabet soup can be quite confusing. What are codecs or audio file formats? What is a bit rate, and what’s a sample rate? What does it mean when music is “high-resolution?”

When you buy a CD, the audio on the disc is uncompressed. You can rip (or import) CDs with iTunes or other software, turning the CD’s audio into digital audio files to use on a computer or a portable device. In iTunes, you can rip in two uncompressed formats: WAV and AIFF (other software allows for other formats). Both formats simply encapsulate the PCM (pulse-code modulation) data stored on CDs so it can be read as audio files on a computer, and their bit rate (you’ll learn what the bit rate is below) is 1,411 kbps.
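That 1,411 kbps figure isn't arbitrary; it falls straight out of the CD format's numbers. Here's a quick back-of-the-envelope check in Python (plain stereo PCM assumed, no compression):

    # CD audio: 16-bit samples, 44,100 samples per second, 2 channels (stereo)
    bits_per_sample = 16
    sample_rate = 44_100   # samples per second
    channels = 2

    kbps = bits_per_sample * sample_rate * channels / 1000
    print(kbps)  # 1411.2 -> the familiar 1,411 kbps CD bit rate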

WAV and AIFF files can be quite large. As such, digital audio files are compressed to save space. There are two types of compression: lossless and lossy. Lossless includes formats (or codecs, short for coder-decoder algorithms) such as Apple Lossless and FLAC (the Free Lossless Audio Codec). Lossy includes the ubiquitous MP3 and AAC formats. (AAC, which stands for Advanced Audio Coding, is, in reality, the MP4 format, the successor to the older MP3. While Apple adopted it early on in iTunes, Apple was not involved in its creation, and has no ownership of this format.)

You may see other audio formats too, though they are less common. These include Ogg Vorbis, Monkey’s Audio, Shorten, and others. Some of these codecs are lossy, and some are lossless. However, if you use iTunes and Apple hardware, you’ll only encounter WAV, AIFF, MP3, AAC, and Apple Lossless, at least for music.

iTunes can rip or import audio files in these formats. Choose the one you want to use in iTunes > Preferences > General > Import Settings.

When you rip or convert an uncompressed audio file to a lossless format, and then play that file, it is a bit-perfect copy of the original (assuming the data was read correctly from a CD). As such, you can convert from one lossless format to another with no loss of quality.

When you rip to a lossy format, however, if you convert the file later to another format, you lose some of its quality. This is similar to the way a photocopy of a photocopy doesn’t look as good as the original.

Some people prefer lossless formats because they reproduce audio as it is on CDs. Lossy compression is a compromise, used to save space, allowing you to store more music on a portable device or hard disk, and making it faster to download. However, most people can’t tell the difference between a CD and a lossy file at a high bit rate, so if you’re ripping your music to sync to an iPhone, lossless files are overkill.

Lossless rips are a good way to make archival copies of your files, since you can convert them to other formats with no loss in quality. And you can have iTunes convert them automatically to AAC files when you sync. See this article for more on this automatic conversion, as well as other questions about lossless files.

The best way to judge the quality of an audio file—relative to its original, not to its musical or engineering quality—is to look at its bit rate. Audio file bit rates are measured in thousands of bits per second, or kbps. I mentioned above that a CD contains audio at 1,411 kbps, and when you convert that audio to a lossy file, its bit rate is much lower.

A higher bit rate is better, so a 256 kbps MP3 or AAC file is better than a 128 kbps file. However, with lossless files, this isn’t true. The bit rate of a lossless file depends on the density and the volume of its music. Two tracks on the same album, ripped to a lossless format, may have bit rates of, say, 400 kbps and 900 kbps, yet when played back, they both reproduce the original audio from CD at the same level of quality. Lossless compression uses as many bits as needed, and no more.

If you’re ripping music to a lossy format, it’s good to choose the iTunes default of 256 kbps, unless you need to cram a lot of music onto your portable devices. If you’re ripping audiobooks or other spoken word recordings, you can use much lower bit rates, since the range of the human voice is quite narrow. Audiobooks are often ripped at 32 kbps, and they sound fine.

High-resolution audio, once a niche format, has gotten a lot of press recently. Neil Young’s beleaguered PonoPlayer raised awareness of this type of digital audio. Strictly speaking, high-resolution audio is distributed in files that are “better” than CD quality. High-resolution audio is defined by certain numbers: the bit depth of files, and their sample rate.

CDs contain 16-bit audio at a sample rate of 44,100 Hz. So high-resolution audio has a bit depth and/or sample rate that exceeds that of the CD specification (known as the Red Book standard). Much high-resolution audio is 24-bit, 96 kHz, often abbreviated as 24/96. Some companies sell files at 24/192 and 24/384. And there are also several types of DSD (direct-stream digital) files, which use a different recording method. DSD is used on SACDs, or Super Audio CDs, a format designed by Sony and Philips that is pretty much deceased.
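The same arithmetic from the CD example above shows why these files are so much larger. A small sketch for uncompressed stereo PCM (real downloads are usually losslessly compressed, so their actual bit rates come out somewhat lower; the helper function is just for illustration):

    # Uncompressed bit rates for common resolutions (stereo PCM)
    def pcm_kbps(bit_depth, sample_rate_hz, channels=2):
        return bit_depth * sample_rate_hz * channels / 1000

    print(pcm_kbps(16, 44_100))   # 1411.2 kbps  (CD / Red Book)
    print(pcm_kbps(24, 96_000))   # 4608.0 kbps  (24/96)
    print(pcm_kbps(24, 192_000))  # 9216.0 kbps  (24/192)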

[Screenshot: iTunes showing information about a high-resolution audio file. The bit rate is much higher than for a standard lossless file; what iTunes calls the sample size is the bit depth.]

When we talk about bits in high-resolution audio, we’re not looking at the bit rate, which I discussed above, but the bit depth. This is the number of bits in each sample, and it mostly affects dynamic range, which is the difference between the softest and loudest parts of the music. (Though, as you can see in the screenshot above, the actual bit rate of a high-resolution audio file is much higher than that of a CD or of a file ripped in a lossless format.)
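As a rule of thumb, each bit of depth buys roughly 6 dB of theoretical dynamic range, which is where the familiar figures for 16-bit and 24-bit audio come from. A minimal illustration, assuming plain linear PCM:

    import math

    # Theoretical dynamic range of linear PCM: about 6.02 dB per bit
    def dynamic_range_db(bit_depth):
        return 20 * math.log10(2 ** bit_depth)

    print(round(dynamic_range_db(16), 1))  # ~96.3 dB  (CD)
    print(round(dynamic_range_db(24), 1))  # ~144.5 dB (high-resolution audio)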

A good example of music with a very broad dynamic range is Mahler’s third symphony. Listen to the final movement, and you’ll hear some very soft sounds as well as some extremely loud crescendos. Or listen to Led Zeppelin’s “Stairway to Heaven”; it starts with a soft acoustic guitar and builds up to a fuzz-box finale.

A higher bit depth allows music to have a wider range of volume from its softest to loudest passages. But with a lot of contemporary music, the volume of the music is “compressed” to make it louder. (This is dynamic range compression, not the compression used to make files smaller.) So you don’t hear much of a difference with that type of audio if the bit depth is higher.

The sample rate is the number of “slices” of audio that are made per second, and it is measured in Hz (Hertz). 44,100 Hz means that the music is sampled 44,100 times a second; 96 kHz means it is sampled 96,000 times a second. The sample rate affects not only the overall fidelity of music but also the range of frequencies that can be reproduced. Files sampled at 44,100 Hz can reproduce up to about 20 kHz, or the highest frequencies that humans can hear. High-resolution files can reproduce sounds above that frequency, sounds that humans cannot hear at all. (And extremely high sample rates, such as 192 kHz, may even result in distorted sound.)
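The underlying rule here is the Nyquist limit: a digital recording can only represent frequencies up to half its sample rate. A quick check of the common rates (the ~20 kHz ceiling for human hearing is the usual textbook figure):

    # Nyquist frequency: the highest frequency a given sample rate can capture
    for sample_rate in (44_100, 48_000, 96_000, 192_000):
        nyquist_khz = sample_rate / 2 / 1000
        print(f"{sample_rate} Hz -> up to {nyquist_khz} kHz")
    # 44,100 Hz -> 22.05 kHz, just above the ~20 kHz limit of human hearing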

Better numbers don’t mean that the music necessarily sounds better. To many people, high-resolution audio is simply a marketing ploy, a way of getting listeners to buy their favorite music in yet another format. It is conceivable that people with very expensive stereos in rooms treated to provide excellent sound may hear the difference, but those one percent of music listeners shouldn’t sway others into buying music that doesn’t sound any different. And when you’re listening on a portable device, the quality of your headphones, and the ambient noise, ensure that you certainly won’t hear anything different.

All this makes it seem like listening to music should depend on numbers. But it shouldn’t. Listen to the music you like, in the format that’s most convenient, on the device you want to play it. It’s great to have better headphones and speakers, but great music can cut through all the fancy hardware and move you no matter how you listen to it.

Source

9 February 2016 | Blog

The future of audio technology (Wireless, noise cancellation, bone conduction and acoustics)

The ways in which we have experienced music, and how it has been recorded and consumed, have changed dramatically, and different formats have created some pretty iconic pieces of paraphernalia over the years. These have come to represent specific eras in the past century: think of the gramophone, vinyl, cassette tapes, compact discs and the rise of digital music, MP3 and now dedicated streaming services.

The devices used to translate these formats into sounds that we can enjoy on a daily basis have also transformed. The basic idea hasn’t really changed (headphones and speakers still remain the standard way of consuming music personally), but the technology that goes into them, and the shapes they take, are in a constant state of development.

But what’s around the corner? Now that digital music is here for the foreseeable future, we can expect that all kinds of audio, whether music, e-books or telecoms, will become more versatile and mobile.

Two of the most prominent recent advances in audio technology, at least from a hardware point of view, are noise cancellation and wireless connectivity in professional and consumer headphones and earphones. These technologies have become commonplace over the past five years or so in the way people listen to music or use audio devices to communicate. Bluetooth has only recently reached a level where it is considered by OEMs as good enough to carry quality audio and replace wires completely, thus paving the way for a throng of cable-free cans.

We are now also starting to see noise-cancellation features in these wireless devices to offer the best of both worlds, allowing users to block out external, unwanted sounds and concentrate solely on the music without being tied to a device. Still, at the moment, a bog-standard set of earphones that the average person buys from Amazon, for example, will most likely be wired. This is going to be one of the first big changes in audio technology in the coming years.

Take the recent revelation that Apple will eliminate the headphone jack from the next iPhone, rumoured as the iPhone 7. According to speculation, Apple will get rid of the 3.5mm headphone jack that has been standard on iPhones and other devices for years, replacing the included earbuds with those that plug in through the Lightning port. This will mean that users wanting to buy their own earphones will have to opt for a Bluetooth wireless variety, thus pushing the growth of wireless buds as third parties rush to give iPhone users alternative options.

WiFi speaker systems are already popular with music enthusiasts. Championed by the likes of Sonos, Bose and Raumfeld, WiFi systems offer seamless integration of stereos, home cinema systems and other amplified audio devices with no rewiring or complex programming. The Sonos Bridge, for example, connects to your wireless router and links all your Sonos players with one touch. The music can then be played from any mobile device in the house through as many players as you want simultaneously.

As history dictates, when predicting the future of audio technology it’s a good idea to look at the best available and use that as an example to see what mass market devices might look like in 10 or even 20 years’ time.

Take Sennheiser and its Orpheus project. The German audio firm created a pair of headphones in 1991 that quickly became known as the best that money could buy. The Orpheus HE90 headphones were made from the best materials available at the time and came with an amplifier. They cost a whopping £10,000.

Almost 25 years later, Sennheiser decided to bring this idea into the 21st century, again using the best materials money could buy, to design a new version of Orpheus, creating the Orpheus HE1060, which the firm again calls “the world’s best headphones”.

The Orpheus system is built into a big slab of marble, and needs to be seen to be believed. Equipped with unique features and state-of-the-art technology, these headphones transform music from something you listen to, into something you feel part of. And at £35,000, you’d want them to.

One of the main features that Sennheiser’s engineers worked on to make these headphones the best in the world was improving audio quality to such a level that they offer an experience like no other: recorded music that sounds like it’s being played live in front of you.

“We found that with the right materials, we got the best transducers from current to sound pressure, so it is the best in the world,” explained Orpheus product manager Axel Grell. “When you measure its frequency response, it works from close to zero kilohertz to over 100kHz and there is no other transducer in the world that can do that.”

Distortion, the unwanted tones in music playback that were not in the original signal, is something Sennheiser has worked tirelessly to eradicate when building Orpheus in order to enhance sound quality to a level not heard before.

“When you measure normal speakers, even those made for studio recordings, when they produce sound pressure levels of 100dB, they are in the range of one percent distortion. Our transducer, when it produces sound pressure levels of 100dB, has 0.1 percent distortion, that’s 100 times less than studio speakers,” added Grell.

“The lower the level of distortion, the more details in the music are audible, so this is the reason why we made Orpheus: to bring distortion to such a low level so when you listen to music you can hear all the details.”

Distortion reduction is just one of many innovative ways Sennheiser is showcasing the technology in Orpheus to demonstrate how it can improve audio and thus pave the way for advances in the future. However, Grell said that it could be 25 years or so before we see technologies like this in mass market devices. Nevertheless, he predicts that consumers will increasingly expect very high sound quality from headphones and speakers, and that in the coming years mass audiences will come to expect music with greater clarity and less distortion.

“Quality will play a bigger role in headphones and speakers and in production as consumers opt for better quality sound recordings,” predicted Grell. “This will be a trend, and it has started already, but the next step will be better wireless devices based on Bluetooth, which could see better quality sound and battery life.”

Another upcoming technology that completely turns audio tech as we know it on its head is the concept of listening to music from a part of the body that isn’t the ear. This might sound impossible, but bone conduction technology aims to give music fans the ability to listen to a recording privately and keep an ear on the world around them at the same time. Few have attempted to tackle the technology so far. But a startup called Studio Banana Things has had a go with a project entitled Batband, which the firm describes as “a high fidelity acoustic experience via an innovative bone conduction system”.

Batband looks like a set of headphones but without the ear cans. A band goes around the back of the head and the ends rest on the bones above your ears. The technology consists of transducers that emit sound waves at a frequency that can be conducted through the bones of the skull and perceived by your inner ear, leaving your outer ear free.

Studio Banana Things insists that this works in a better way than standard headphones as it frees up your ears and you get to hear twice as much “without compromising on comfort, quality or style”.

However, Batband is merely experimental, as its status as a Kickstarter project suggests, and doesn’t represent a technology that the industry’s big players are taking seriously in their audio development strategy just yet.

For example, Plantronics, a Californian audio communications equipment company particularly popular in business, told us that it has experimented with bone conduction technology, but that tests proved it wouldn’t be viable for production because it doesn’t offer the level of quality or comfort its customers expect.

Nevertheless, as the technology improves, or if one company masters it, we could well see bone conduction used in the way we listen to audio in the future. It would be especially beneficial to cyclists, who need to keep their ears on the road and would like a backing track too.

But the future of audio technology isn’t necessarily just how we hear music through devices such as headphones, earphones or speakers. It could also encompass a change in the way we perceive sounds in particular environments, specifically at work, through the role acoustics play in an office, for example.

Plantronics champions this very idea with its open air noise cancellation technology. Some of the firm’s solutions include smarter working in offices by improving acoustics to offer better speech privacy, reduced reverberation in meeting rooms and lower noise levels in open plan offices.

“While flexible working is a growing trend, people still feel that they have to go to a place of work. As more offices become open plan, the acoustics are changing for the worse,” explained the firm’s head of consumer marketing and business development, Stuart Bradshaw. “And businesses are not taking into account the challenge of acoustics.”

Bradshaw believes that more companies face the challenge of keeping workers productive while providing a positive working space.

“We have created our own working space in Plantronics to demonstrate how audio best works. Our architects have created the right kind of white noise, which is pushed out across the office so that when you’re on a phone call it drowns out some of the noise from other colleagues,” he said.

Plantronics is not an expert in the full office-acoustics solution, but it is developing noise-cancelling products and certain aspects of the kit that generates the white noise. The company believes the area could gain traction, as no other companies are currently developing it, and that it is worthwhile because workers can feel confident in their ability to focus, connect and collaborate in open-plan and on-the-go work environments.

“While for some people, personal audio devices can block out external sounds and make their individual work space more productive, there are negative effects to be had there,” added Bradshaw, explaining that this can be distracting for others. “However, white noise generators, for instance, I think will become a prevalent trend.”

While digital audio has generally seen the demise of physical audio formats in recent years, the future of audio could well see a return to a much older but once celebrated format, according to Orbit Sound’s Director, Daniel Fletcher.

Fletcher thinks that we will see a return of the once very popular vinyl records, with the music medium creeping back into the mainstream as people look to celebrate the art of enjoying music in a nostalgic, physical form.

“An upcoming trend I am rather pleased about is the return of vinyl,” says Fletcher. “It’s this nostalgic thing, and people like it. For the first time recently, vinyl sales outstripped ad-supported streaming; it’s made a comeback and suddenly grown to become a cool thing.”

In the future, Fletcher sees it becoming a big trend as people want to experience music “and feel part of it again”, something that can be done much better with the ritual of collecting a physical format with artwork; something that can be collected and shown off. It is a complete 180 when looking at the history of audio technology over the last 100 years, but at the same time, proof that in an ever-digital age, people still enjoy more traditional habits that are somewhat removed from the virtual world.

Source

9 February 2016 | Blog

How to make singers sound perfect

Put on a Taylor Swift or Mariah Carey or Michael Jackson song and listen to the vocals. You may think the track was recorded by the artist singing the song through a few times and the producer choosing the best take to use on the record. But that’s almost never the case. The reality is far less romantic. Listen to almost any contemporary pop or rock record and there’s a very good chance the vocals were “comped.” This is when the producer or sound engineer combs through several takes of the vocal track and cherry-picks the best phrases, words, or even syllables of each recording, then stitches them together into one flawless “composite” master track.

Though it’s unknown to most listeners, comping’s been standard practice in the recording industry for decades. Everyone does it—“even the best best best best singers.”

“Comping doesn’t have to do with the quality of the vocalist. Back in the Michael Jackson days—and Michael Jackson was an incredible singer—they used to comp 48 tracks together.”

But surely cobbling together a song this way must sound disjointed, robotic, devoid of personality, right? And while it’s true that vocal comping is used heavily in pop music where the intention is usually to sound smooth and polished rather than honest and gritty, most producers will tell you that this piecemeal approach is the best way to get a superb recording from any vocalist.

The process works like this: A singer records the song through a handful of times in the studio, either from start to finish or isolating particularly tricky spots. Starting with between 4-10 takes is typical—too many passes can drain the artist’s energy and confidence and also bog down the editing process later. (That said, it’s sometimes much more. Christina Aguilera’s song “Here to Stay” was compiled from 100 different takes. “She sat on the stool and sang the song for six hours until it was done—didn’t leave the booth once and didn’t make a single phone call,” engineer Ben Allen said in an interview with Tape Op magazine.)

The engineer generally follows along during the studio recording with a lyric sheet and jots down notes to use as a guide for later, marking whether a phrase was very good, good, bad, sharp or flat, and so on.

When the session’s over, they listen closely to each section of each take, playing the line back on loop with the volume jacked up twice as high as it will be in the final mix. They’re listening to make sure the singer’s on pitch, of course, but that’s not necessarily the primary measure of what makes the final cut.

Timing, tone, attitude, emotion, personality, and how each phrase or word fits in context with the other instruments and the rest of the vocal track can trump pitch perfection. Those little quirky gems add character and emotion to the track—they’re what the listener remembers.

The recording engineer picks out the best take for each bit of the song and edits all the pieces together, usually in a digital audio workstation (DAW) like Pro Tools.

Comping is one of the most common DAW tasks and the software has made it stupid simple, especially compared to cutting tape reels back in the analog days, when editors would mark the cut spot on the open tape reel with a pencil, slice it with a razor blade and attach the two ends together with sticky tape.

Most programs today let you input multiple files within an audio track, so you can simply drag and drop the portion you want from each take into the master track.

The editor makes sure the transitions are seamless, the track flows properly, that no glitches or bad edits made it into the final cut, and importantly, that no emotion or personality is lost in the process. A sign of success is that the listener has no idea a song’s vocals were compiled from several different takes. The work should be invisible.

“It’s rare that you hear a really bad vocal comp,” says recording engineer Mike Senior, a columnist for Sound on Sound and author of Mixing Secrets for the Small Studio. But that’s often because the edits are obscured by the other instruments in the track.

About 20 seconds into Aguilera’s “Genie in a Bottle” is a really clunky edit, but you can’t hear it in the song because it’s tucked behind a big heavy drum beat, says Senior. “If you think how many drum beats typically occur in a mainstream song, you can think how many places you can edit without it being heard.”

Listen closely to Adele’s hit “Someone Like You” and you can hear that in the first couple verses the opening breath is missing—there’s just no breath on that phrase, he points out. You can also hear some background noise on the mic throughout the song but then in certain places it cuts out, a sign of an edit.

As you can imagine, the whole process is incredibly tedious and time consuming; it can take hours, even days. “That’s why these records are expensive.”

Max Martin and “Dr. Luke” Gottwald, the hitmakers behind mega pop stars like Miley Cyrus, Katy Perry and Britney Spears, are known for relying heavily on comping during their recording process. John Seabrook, author of The Song Machine: Inside the Hit Factory, writes in the New Yorker: “Comping is so mind-numbingly boring that even Gottwald, with his powers of concentration, can’t tolerate it.” However, “Max loves comping,” songwriter Bonnie McKee told Seabrook. “He’ll do it for hours.”

But while pop songs are often accused of being sterile, artificial, or overproduced, each producer I talked to said this is not the result of comping. “Comping is not the thing that makes something sound robotic. Actually I would say comping does the opposite,” says Senior.

Comping gets a bad rap because it’s lumped in with other editing tools like pitch correction and auto-tune, but “it is almost unreservedly a good thing,” Senior says. It gives singers the freedom to push the boundaries and perform at the edge of their capability, trusting it’s OK to mess up because there’s the safety net of having multiple other takes. And if there is a rogue bum note in an otherwise killer recording, you can swap it out with an on-pitch note from another take instead of relying on pitch correction, which alters the overall sonic quality.

Pushing the limit and taking chances is what leads to those gems that can make a whole track, says Lewis. “That’s one of the beauties of comping—you get to search for the most magical piece of every take.”

“People have a very idealistic view of a producer or recording engineer’s job. If people really knew how records were made, they’d be much more jaded,” says Lewis. “But if you go into record making with the idea that you need to sing the song down from start to finish, come what may, you will rarely find the true magic.”

5 December 2015 | Blog

Don’t try to be popular, be original!

Nowadays up-and-coming artists are very concerned with emulating very precisely a particular style or sound or textural palette (or, worst of all, a specific artist). A lot of really talented songwriters are doing themselves a huge disservice by taking this approach, and here’s why:

If other people are achieving success with a certain sound, that means that you have LESS chance of succeeding with that same sound, not more.

Tons of people are making acoustic-based recordings right now; acoustic-based music has been really popular for the last few years. Which means that the marketplace is becoming saturated with recordings that all have essentially the same sound.

This observation is by no means limited to acoustic music. There will always be a market for acoustic music. The point is that when you go to a show and all four artists sound basically the same, they’re cannibalizing each other’s markets. Why would you buy each artist’s EP if they all sound basically the same?

Or, to put it another way: if an artist says “My sound is like Matt Nathanson,” the first thought is, “Oh, I should listen to that new Matt Nathanson record!” Because why would I want to listen to a cheap knockoff of an already-popular artist, when I could just go straight to the source?

Or to put it yet another way: the world doesn’t need another Matt Nathanson. The world already has Matt Nathanson. What the world needs is your unique voice.

Audiences don’t want more of the same – they want what’s next. As an artist, you want to be like a wide receiver. You don’t want to be where the ball is now; you want to be where the ball is going to be. If you’re making a record right now, it will be three months minimum before those recordings hit the streets, right? Potentially much longer. And by then all those of-the-moment sounds that you put in your recording will sound badly behind the times. And you don’t want to sound dated, do you?

Also, the industry doesn’t want more of what it’s already got. No one at a record label is going to sign someone who sounds exactly like an artist they already have – because they already have the original, and they don’t want to cannibalize their profits.

If you’re thinking of your career like a small business – and you should be – you should be constantly thinking about how to differentiate yourself in a crowded marketplace. What makes your music stand out? A good song isn’t enough. Everyone has a good song or two. What’s going to make people prick up their ears? What’s going to call attention to what you’re doing?

Figure out a way to differentiate yourself sonically in the marketplace. Get a new guitar pedal and figure out a new dimension to your personal sonic landscape. Hell, get a drum machine. There aren’t any rules! Experiment with some synthesizer sounds in GarageBand. Listen to some dub. Expand your horizons. Making electronic music? Experiment with some acoustic textures. It goes both ways. The point is to push your boundaries. Make something interesting and forward-looking and unique.

You probably have a couple of fantastic songs that deserve to be heard on a wider basis – but if your recordings sound the same as a thousand others, they’re not going to stand out. And you want to stand out. Right?

2 December 2015 | Blog

Deep House Techno EDM Video: Session 1 Mix – [Official] Videoclip by Beati Sounds

Beati Sounds has put all their latest tracks into one great mix! Watch the 50-minute continuous clip:

25 November 2015 | Blog

How and when to Compress: 5 Pro tips!

It’s no secret that compression is one of the most important and versatile tools available to an audio engineer. Use the following tips to improve your production skills.

1. Sidechain Compression or Ducking

A staple of the EDM production toolkit, the sound of a side-chained synthesizer and kick drum is instantly recognizable. Essentially, it involves using one signal to trigger compression on another. There are plenty of online tutorials for this process, but the applications below may be ones you’re less familiar with.

Practical applications: Use the signal of vocals to duck drums or guitars to allow the vocal to sit more prominently in the mix; use a sample to replace or augment originally recorded cymbals; use a cowbell or tick sample rather than the kick to duck a synth (due to the faster attack of the tick sound).

Possible plugins: Softube CL1B or Valley People Dyna-mite, Waves H-Comp or API 2500
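If it helps to see the signal flow spelled out, here is a minimal, illustrative Python/NumPy sketch of ducking: the envelope of a trigger signal (a kick or tick) drives standard compressor-style gain reduction on another signal (a synth pad). The function names, thresholds and times are placeholders, and this is a simplification of what the plugins above do, not a substitute for them.

    import numpy as np

    def envelope(x, sr, attack_ms=5.0, release_ms=150.0):
        # One-pole envelope follower on the absolute signal
        att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.empty_like(x)
        level = 0.0
        for i, s in enumerate(np.abs(x)):
            coeff = att if s > level else rel
            level = coeff * level + (1.0 - coeff) * s
            env[i] = level
        return env

    def duck(target, trigger, sr, threshold_db=-30.0, ratio=4.0):
        # Classic downward compression, but keyed from the trigger signal
        key_db = 20 * np.log10(envelope(trigger, sr) + 1e-9)
        over_db = np.maximum(key_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)   # gain reduction in dB
        return target * (10 ** (gain_db / 20.0))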

 

2. Multiband Compression/Limiting

Multiband compression allows one to affect the dynamic range of multiple frequency ranges independently of one another. Want to tame the beater of a kick drum without altering the low end? No problem. Simply choose a frequency range, and then set threshold, attack and release like you would on a normal compressor.

Practical applications: Master bus for clearing up problem areas like low-mid buildup, or on lead vocals to tame harshness in the 5-10 kHz range.

Possible plugins: FabFilter Pro-MB, iZotope Ozone 6, Waves L3-LL Multimaximizer
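For readers who think in code, here is a hedged Python sketch of the signal flow (NumPy and SciPy assumed; function names, crossover points and settings are purely illustrative): split the signal into three bands with crossover filters, compress each band with its own settings, and sum the results. Real multiband plug-ins use phase-matched crossovers and smoothed gain, so treat this only as an illustration.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def split_bands(x, sr, low_hz=200.0, high_hz=5000.0):
        # Three-way split: low, mid and high bands
        low  = sosfilt(butter(4, low_hz, btype="lowpass", fs=sr, output="sos"), x)
        mid  = sosfilt(butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos"), x)
        high = sosfilt(butter(4, high_hz, btype="highpass", fs=sr, output="sos"), x)
        return low, mid, high

    def compress_band(x, threshold_db, ratio):
        # Static gain computer; a real compressor smooths this with attack/release
        level_db = 20 * np.log10(np.abs(x) + 1e-9)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        return x * (10 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0))

    def multiband_compress(x, sr):
        low, mid, high = split_bands(x, sr)
        # e.g. tame low-mid buildup and 5-10 kHz harshness while leaving the mids gentler
        return (compress_band(low, -24.0, 4.0)
                + compress_band(mid, -18.0, 2.0)
                + compress_band(high, -20.0, 3.0))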

 

3. Lookahead Compression

Lookahead compression essentially analyzes an input signal and applies compression before the signal is audible, allowing one to tame transients in a unique way. Lookahead compression can be achieved with a standard compressor by duplicating the signal onto another track in your DAW, nudging the duplicated audio slightly earlier in time, placing a compressor on the original signal, and using the duplicated audio as the sidechain input.

Practical applications: Really anything with prominent, fast transients but especially effective on snare drum and vocals.

Possible plugins: Softube FET compressor, Waves C1 Compressor with Sidechain
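As a rough mental model in code (Python/NumPy assumed, names and settings illustrative): the detector computes gain from the un-delayed signal while the audio path is delayed by a few milliseconds, so the gain reduction is already in place when a transient arrives. This is just a sketch of the idea, not how any particular plugin implements it.

    import numpy as np

    def gain_reduction(x, threshold_db=-18.0, ratio=6.0):
        # Per-sample gain computer (no attack/release smoothing, for brevity)
        level_db = 20 * np.log10(np.abs(x) + 1e-9)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        return 10 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)

    def lookahead_compress(x, sr, lookahead_ms=5.0):
        n = int(sr * lookahead_ms / 1000.0)
        gain = gain_reduction(x)                              # detector sees the signal early
        delayed = np.concatenate([np.zeros(n), x])[:len(x)]   # audio path delayed by n samples
        return delayed * gain                                 # reduction is in place before the transient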

 

4. Brickwall Limiting

[Disclaimer] Learn how to mix before simply applying a brickwall limiter to the master bus of all your productions.

Although arguably the catalyst for the Loudness War, which stripped certain popular music of dynamics for over a decade, brickwall limiting certainly has its place in music production, live sound reinforcement and broadcast. Set the ceiling, and your signal will never go above it. Alter the threshold to bring the quieter end of the dynamic range closer to the top, allowing one to reach professional-level RMS without professional-level mixing skills. [see disclaimer!]

Practical applications: Pre-mastering (if used properly) and mastering. Use on sub-auxiliary tracks to achieve higher RMS values before even hitting the master bus. Can be used on individual tracks to tame transients or shape tone just like a traditional compressor.

Possible plugins: FabFilter Pro-L, Waves L2, PSP Xenon
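Conceptually, a brickwall limiter is a compressor with an effectively infinite ratio and a hard output ceiling. A very stripped-down Python/NumPy sketch of that idea (no lookahead or release smoothing, so it behaves far more crudely than the plugins above; names and values are illustrative):

    import numpy as np

    def brickwall_limit(x, ceiling_db=-0.3, input_gain_db=6.0):
        # Push the signal up, then make sure no sample ever exceeds the ceiling
        ceiling = 10 ** (ceiling_db / 20.0)
        boosted = x * (10 ** (input_gain_db / 20.0))
        gain = np.minimum(1.0, ceiling / np.maximum(np.abs(boosted), 1e-9))
        return boosted * gain   # raising input_gain_db pushes quiet material closer to the ceiling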

 

5. Parallel Compression

Parallel compression (sometimes referred to as New York compression) is great for keeping the original, natural sound of a recording, while still enjoying the benefits of a compressed signal. Simply route your signal to an auxiliary track (via a send, not the output), apply compression, and blend in the aux track to taste. Be aware of delay compensation settings in your DAW to avoid unwanted phase issues.

Practical Applications: Very popular on drums or signals with harsh transients. Also great on the master bus for achieving a boost in RMS.

Possible plugins: Certain plugins, like Cytomic’s The Glue or FabFilter’s Pro-L, allow for a dry/wet blend that can achieve similar results to a parallel routing, but any of your favorite compressors can achieve great results if used properly.
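In code terms, parallel compression is just a dry/wet sum. A minimal Python/NumPy sketch, with the heavily compressed path simplified to a static gain computer (function names and settings are illustrative):

    import numpy as np

    def heavy_compress(x, threshold_db=-30.0, ratio=10.0):
        # Aggressive downward compression for the "wet" path
        level_db = 20 * np.log10(np.abs(x) + 1e-9)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        return x * (10 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0))

    def parallel_compress(x, wet_mix=0.3):
        # Keep the untouched signal and blend in the squashed copy to taste
        return x + wet_mix * heavy_compress(x)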

8 November 2015 | Blog

Stop selling your tracks, start streaming!

According to recent stats from Nielsen Music, digital download music sales are plummeting while streaming continues to boom. During the last week in August, digital downloads in the U.S. fell to 15.66 million – the lowest weekly volume since 2007 – whereas on-demand audio and video streams rose to 6.6 billion – the highest weekly volume ever.

Streaming now represents a third of U.S. music revenue, up from just five percent five years ago. While total CD sales were down 31.5 percent in the first half of 2015, streaming revenues were up by 23 percent, according to the Recording Industry Association of America (RIAA). For the first time, U.S. revenues from streaming surpassed $1 billion in the first six months of the year.

The trend is abundantly clear – as streaming gains favor among consumers, revenues from album sales and digital downloads are drying up. This is an alarming problem for songwriters and composers – the people who are the creative engine powering the entire music industry – because streaming revenue does not come close to making up for the decline in sales, and certainly does not reflect the scale of music use on these new platforms.

Under the current system of antiquated laws, it takes nearly one million streams, on average, for a songwriter to make just $100 on the largest streaming service – roughly a hundredth of a cent per stream. However, songwriters are limited in their ability to negotiate higher compensation in these situations. When licensees and Performing Rights Organizations cannot reach an agreement, songwriters are forced to use an expensive and inefficient rate court process in which a single federal judge decides the rate.

No other industry works this way, and we are way past due for a change. But that will not happen unless songwriters, composers and music fans make their voices heard in the ongoing debate over music licensing reform. If we truly believe that music has value, we must urge our leaders in Washington to make changes that ensure songwriters are able to receive fair compensation for their work in the marketplace.

Fortunately, the U.S. Copyright Office, members of Congress and many industry observers have realized the absurdity of the current regulatory framework and called for reform. Importantly, the DOJ is formally considering much needed updates to the ASCAP and BMI consent decrees that would better reflect the way people listen to music today. As part of the DOJ process, ASCAP has recommended changes that would foster continued innovation and competition, and result in music licensing rates that better reflect the free market.

Songwriters and composers are the lifeblood of America’s music industry – without their work, Pandora, Apple Music, Spotify, Tidal and the like would have nothing to offer new listeners. A music licensing system that reflects advances in technology and today’s competitive landscape would better serve everyone, allowing songwriters to continue creating the music that is loved by so many.

1 November 2015 | Blog
