
Processing vocal samples - work in a mix



Hello humans. Hope you are well.

 

I really enjoy cutting and slicing up little sample sound bites for use in tracks, and I've recently transferred some stuff from a tape of a 1980s tutorial over to the computer... it's funny in places, hence the interest.

 

Anyway... what are some good general tips/strategies for getting a vocal sample to sit well in a mix?

 

EQ/Compression tips?

 

I've been listening to this a lot and intend to add loads of processing to the original sample to have it kind of drift in and out of the overall track:

 

 

Thanks and best wishes.

 

P


Thanks, yeah, I'll try that. I play around with panning quite a lot these days... I was contemplating distorting it a tad as well, to bring out the tops.

Put an exciter on them to bring out the 2 kHz harmonics, use a high-pass filter to take out the bottom end, and sidechain-compress any pads or other stuff that could mask them.
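If it helps to see what that chain is actually doing to the signal, here's a rough Python (NumPy/SciPy) sketch of the high-pass and a crude exciter; the cutoff, drive and mix values are placeholder assumptions, not settings from any particular plugin.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(x, sr, cutoff_hz=120.0, order=4):
    """Take the bottom end out below cutoff_hz (placeholder cutoff)."""
    sos = butter(order, cutoff_hz, btype='highpass', fs=sr, output='sos')
    return sosfilt(sos, x)

def crude_exciter(x, sr, split_hz=2000.0, drive=4.0, mix=0.15):
    """Very rough exciter: isolate everything above ~2 kHz, push it through a
    soft waveshaper to generate extra upper harmonics, and blend a little back in."""
    highs = high_pass(x, sr, cutoff_hz=split_hz)
    harmonics = np.tanh(drive * highs) / drive   # soft nonlinearity = new harmonics
    return x + mix * harmonics
```

Something like `crude_exciter(high_pass(vocal, sr), sr)` is the broad-strokes version; the real work is tuning those numbers by ear.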

If I've got vocals in with beatz and all sorts of music and shit, I tend to boost (and cut elsewhere in the mix) around 1.2-5 kHz (plus spikes above 10 kHz if it's female sung vocals and I want that crisp air), depending on the source material. For vocals I use pretty aggressive compression to keep everything audible, and that hard compression lets me keep the actual levels lower (because the perceived volume is much higher). In Logic, on the vocal compressor, I often set the output distortion to Soft (sometimes Hard), which sort of emulates the lushness of tape saturation or vocals being recorded with a tube mic or whatever.
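A bare-bones version of that "hard compression with soft output distortion" move, in case it's easier to follow as code than as a Logic screenshot. This is a generic feed-forward compressor sketch, not Logic's actual algorithm, and the threshold/ratio/timing numbers are just example settings.

```python
import numpy as np

def compress(x, sr, threshold_db=-24.0, ratio=6.0,
             attack_ms=5.0, release_ms=80.0, makeup_db=8.0, soft_clip=True):
    """Plain feed-forward compressor; hard settings keep a vocal audible
    while the fader sits lower, and tanh() stands in for 'output distortion'."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level        # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[n] = s * 10.0 ** ((gain_db + makeup_db) / 20.0)
    if soft_clip:
        out = np.tanh(out)   # rough stand-in for tape/tube-style saturation
    return out
```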

 

In general for vocals, I cut everything below 80 Hz (and pull everything below 200 Hz way down): basically lower everything but the higher fundamentals. I used to have a problem with vocals sounding muddy, and the problem was that I would keep too much of the low and mid frequencies. It's surprising how little is required for vocals to stand out.

 

For male spoken vocals, though, I sometimes boost around 300-500 Hz (along with the higher boosts), coupled with hard compression, which gives a nice "radio voice" quality.

 

And of course, reverb. Reverbs that simulate tiny spaces (used sparingly) are great for giving vocals a grounded quality, so small-room-type spaces are great for spoken vocals (like for podcasts). Massive hall reverbs are good for rave vocals where the woman is singing about some epically inconsequential shit that feels like you're in space when the E peaks and the bass drops.
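A quick way to hear the tiny-room-versus-hall difference without loading a plugin is convolving with a short burst of decaying noise. This is nowhere near a proper reverb algorithm, just a sketch; the decay and wet amounts are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def noise_verb(x, sr, decay_s=0.25, wet=0.15, seed=0):
    """Convolve with exponentially decaying noise: ~0.2-0.3 s grounds a spoken
    vocal; stretch decay_s out to several seconds for the epic hall thing."""
    rng = np.random.default_rng(seed)
    n = int(sr * decay_s)
    ir = rng.standard_normal(n) * np.exp(-6.0 * np.arange(n) / n)  # about -52 dB at the tail
    ir /= np.sqrt(np.sum(ir ** 2))                                 # rough level match
    wet_sig = fftconvolve(x, ir)[:len(x)]
    return (1.0 - wet) * x + wet * wet_sig
```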

 

Anyway, in the most simplistic sense I'd say: cut as much as you can, and just keep the fundamental frequencies that convey the best quality of the vocals. You can find them by soloing the vocal track at a lowered volume, making a huge spike in the EQ plugin (like +10 dB, Q=3), then dragging that spike from the low frequencies up through the highs. Whenever there is almost like a rumbling or angel-singing quality in your penis or vagina, that is when you know you've hit the frequency on the spot. Then cut out most of the bass, boost those chosen special frequencies, and cut (just a bit) those same frequencies in every other part of the mix that shares them.
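That sweep trick can also be mocked up offline if you want to audition bands side by side; here's a rough sketch using the standard RBJ "cookbook" peaking filter, with an arbitrary list of centre frequencies (in practice you'd still do this by ear with your EQ plugin).

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, sr, gain_db=10.0, q=3.0):
    """RBJ Audio EQ Cookbook peaking filter coefficients for lfilter()."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def sweep_bands(vocal, sr, freqs=(150, 300, 600, 1200, 2400, 4800, 9600)):
    """Yield the vocal with a big +10 dB, Q=3 spike at each centre frequency,
    so you can listen for which band has that 'special' quality."""
    for f0 in freqs:
        b, a = peaking_biquad(f0, sr)
        yield f0, lfilter(b, a, vocal)
```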

^ good advice. Just to add to that, I use EQ on the reverb to cut everything below 500 Hz too, as reverb can sometimes muddy things - but I guess if you've already EQ'd the vocals prior to adding reverb it's less of an issue.

I tend to use vocal samples more like an instrument, not caring much about the actual lyrics or whatever. The way I go with vocal samples is basically (in Cubase 5; there's a rough sketch of a couple of these steps right after the list):

- pitch it down a bit (-3 semitones usually), just to make it sound a bit different

- autotune that bit into a new melody (called VariAudio in Cubase)

- compression and a bit of saturation

- chorus or cloner plugin to make it a bit wider

- delay, lots

- sidechain compression to the kick drum

+ usually two send channels: one into a very long reverb and one into a smaller room verb
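As mentioned above, here's a rough sketch of the pitch-down and "delay, lots" steps using librosa; the file name is a placeholder, the delay time/feedback/mix are arbitrary, and the VariAudio and chorus steps don't really have a neat one-liner equivalent.

```python
import numpy as np
import librosa

# placeholder file name
y, sr = librosa.load("vocal_sample.wav", sr=None, mono=True)

# pitch it down a bit (-3 semitones)
y_down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)

def feedback_delay(x, sr, delay_s=0.375, feedback=0.45, mix=0.35, repeats=8):
    """Multi-tap approximation of a feedback delay: every repeat is later
    and quieter by a factor of `feedback`."""
    d = int(sr * delay_s)
    wet = np.zeros(len(x))
    for k in range(1, repeats + 1):
        off = k * d
        if off >= len(x):
            break
        wet[off:] += (feedback ** k) * x[:len(x) - off]
    return x + mix * wet

y_fx = feedback_delay(y_down, sr)
```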

 

Sometimes I duplicate that vocal track and autotune the new track into a harmony with the original, and tweak the plugin settings a bit so they're not identical.

I usually find that I can do a cut somewhere in the 600-1000 Hz area with an EQ to make things sit a bit better in the mix. Also, I always cut the low end just to be sure, everything below 100 Hz.
If I'm sampling something that is fairly modern sounding, the high end might be a bit too sizzly, so a bit of high-shelf cut might be good.
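Reusing the peaking filter and high-pass sketches from earlier in the thread, that would look roughly like this; the 850 Hz centre, -3 dB depth, Q and 100 Hz cutoff are just example numbers, and `sample`/`sr` are assumed to be already loaded.

```python
from scipy.signal import lfilter

# assumes peaking_biquad() and high_pass() from the sketches above,
# plus `sample` and `sr` already loaded (e.g. via librosa.load)
cleaned = high_pass(sample, sr, cutoff_hz=100.0)         # low cut below ~100 Hz
b, a = peaking_biquad(850.0, sr, gain_db=-3.0, q=1.4)    # gentle dip in the 600-1000 Hz area
cleaned = lfilter(b, a, cleaned)
```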

 

It usually sounds something like this (from 0:35): https://soundcloud.com/thiefinger/young-shoots

Edited by Thiefinger

Brilliant, thank you.

 

I did most of what's been said, to be honest, but I probably need to do more sidechaining stuff.

 

What I've been realising is that I'm probably sidechaining stuff too destructively when I'm trying to make space in the low end for the kick to punch through, and inadvertently removing some higher-frequency detail which doesn't need compression.

 

Does anyone know how to properly use the sidechain EQ on Ableton's compressor? Does that help address the issue I've described above? Otherwise I'd split stuff into separate chains and process each differently.

 

http://austinabletontutor.com/wp-content/uploads/2015/02/Ableton_Live_Side_Chain_Compressor_EQ.png

 

I should probably make use of that more..
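On the split-into-separate-chains idea: here's a minimal sketch of ducking only the low band of a pad/bass from the kick, which is the same basic intent as keying the compressor from a filtered sidechain signal. The crossover frequency, depth and timing values are placeholder assumptions, and a proper crossover (e.g. Linkwitz-Riley) would recombine more cleanly than these plain Butterworth filters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def duck_lows_only(pad, kick, sr, split_hz=150.0, depth_db=9.0,
                   attack_ms=2.0, release_ms=120.0):
    """Duck only the low band of `pad`, keyed by `kick`, so the
    higher-frequency detail isn't pumped along with it."""
    lo = sosfilt(butter(4, split_hz, btype='lowpass',  fs=sr, output='sos'), pad)
    hi = sosfilt(butter(4, split_hz, btype='highpass', fs=sr, output='sos'), pad)

    # one-pole envelope follower on the kick (the sidechain key)
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(kick))
    e = 0.0
    for n, s in enumerate(np.abs(kick)):
        coeff = atk if s > e else rel
        e = coeff * e + (1.0 - coeff) * s
        env[n] = e
    env /= max(env.max(), 1e-9)                 # normalise the key envelope to 0..1

    gain = 10.0 ** (-depth_db * env / 20.0)     # up to depth_db of reduction at kick peaks
    n = min(len(pad), len(kick))
    return lo[:n] * gain[:n] + hi[:n]           # lows ducked, highs untouched
```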

Post a zip file with 8 bars of your stems (vocals, beatz, any other musical shitz, bass, etc.), and the WATMM Mixmasters can show you waddup.


Guest Chesney

I think it all depends on the vocal and the result you want.

Like Thiefinger, I use them just like an instrument, so lots of manipulation and effects. The result is no more a vocal than a synth.

But if you want real vocals keep it simple but with heavy comp and EQ, pretty much what Peace said. Sidechaining is not something you want to do much unless you want it as a specific effect and the other elements of the track match well enough for it to work.

Yeah, I usually try to only EQ the lows and highs (lows more or less as Peace 7 described, and a much gentler roll-off of a few dB above maybe 16 kHz, depending on the material), try to leave the mids untouched, especially the stuff between 1.5 kHz and 3.5 kHz (roughly), and then boost the level of the track if I need to. The human voice in that part of the spectrum is something we're really sensitive to, kind of the audio counterpart of the human face, and the less I mess with the phase relationships in there the better. If you need to, you can also roll off the highs a bit more aggressively to move the samples back in the mix without needing as much reverb.
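That gentle roll-off above ~16 kHz is just a high shelf with a small negative gain. Here's the standard RBJ cookbook version as a sketch; the 16 kHz corner, -3 dB and Q are example values, and it only makes sense at sample rates comfortably above 32 kHz.

```python
import numpy as np
from scipy.signal import lfilter

def high_shelf(x, sr, f0=16000.0, gain_db=-3.0, q=0.707):
    """RBJ cookbook high shelf; a few dB of negative gain above f0 tames sizzle
    or pushes a sample back without touching the 1.5-3.5 kHz region."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    cosw, sqa = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * sqa * alpha),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - 2 * sqa * alpha)])
    a = np.array([(A + 1) - (A - 1) * cosw + 2 * sqa * alpha,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - 2 * sqa * alpha])
    return lfilter(b / a[0], a / a[0], x)
```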
