Your original leading free studio quality acapellas provider.
Good eventide musical guys & gals.
I'm a professional (mostly hehe) audio engineer based in the UK. I currently write for several huge names (please don't ask who because I won't tell you!) in the trance world, and engineer almost every genre for my clients... so that's dubstep, house, uplifting trance / hard trance / progressive trance / euro trance, hardstyle, hard house, breaks, DnB, hardcore / freeform, and have also dabbled in styles like schranz, minimal / tech house, techno, and gabber / hardcore techno. I've been doing this for close to a decade and also provide tuition and mastering services to my clients.
As a means to get the community here abuzz and people talking, swapping tips and ideas etc., I'm opening up this thread to YOU, the A4U public, to ask me any production-related questions you may have, whether they're general questions or related to a specific genre. If I don't know, I'll say so rather than copying and pasting some waffle from a wiki. I've done live Q&A before at the BPM music conference in Birmingham, so I'm confident I'll be able to provide the info you might be looking for.
So if you have a burning question... better visit the GUM clinic! Haha. Seriously though feel free to ask away.
Last edited by acapella on Sun Aug 11, 2013 9:41 am, edited 1 time in total.
Reason: super topic helping all the producers here, stickying
Alright, here's a question I've been wanting to know for ages: how does Burial achieve such unique-sounding pitched vocals (e.g. on the track "Archangel")? I've tried using Melodyne, vocal transformer, etc. etc., played with the pitch and the formants, but I can never get it to sound quite like Burial does.
Not about music, but as an audio engineer you might know.
On TV, in programmes and stuff, in scenes they cut and chop bits and pieces of video and disguise it by switching camera, and it looks like one shot but in fact is a bunch of different takes mashed together.
I always wondered how this worked with the audio? Cause obviously they have to cut the audio if they're using different takes for one scene, but they never sound chopped up, all the background noises are the same and continuous between shots and takes.
That's my question really, how can they chop up all the audio from several different takes to make a scene and put it back together without us noticing that.
How do you usually go about structuring a track initially? Or rather; what's your process for initially creating a track? What do you use as the "foundation"?
Sorry if this doesn't make sense, it's been a long day.
It sounds like what he's done here is chopped the vocal at the beginning of each vowel sound (a la 're-e-ewind') and either put the chops on a different channel with a live pitch-shifting VST, or done it manually... you're on the right track with formants, it does sound like he's 'preserved' the formant. I would recommend chopping each word at the start of the vowel, then cloning the vocal track (obv delete all the stuff you've copied down), then you can experiment with a variety of pitch-shift VSTs till you find one that gives the desired similar sound to 'Archangel'. At some points it sounds like the entire vocal is on the pitch-shift VST and he's just automating it up and down (independent of the formant). Two really good live pitch-shift tools to try which will shift the pitch independent of the formant are Elastique Pitch and Pitchfunk (although my usage of Pitchfunk has been fraught with latency issues, so I'd guess you would have to set up all the automation putting up with the latency, then bounce to audio so you can re-align it). Let me know how you get on!
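If you want to hear why a naive pitch shift never sounds like that vocal, here's a tiny illustrative sketch in Python/numpy (my own toy code, nothing to do with any of the plugins mentioned): resampling-based shifting drags duration and timbre along with the pitch, which is exactly what formant-aware tools avoid.

```python
import numpy as np

def naive_pitch_shift(x, semitones):
    # Resample-style shift: read the signal faster or slower, so pitch,
    # formants AND duration all change together (the 'chipmunk' effect).
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

sr = 44100
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)   # stand-in for a 220 Hz vocal note
up = naive_pitch_shift(voice, 12)     # +12 semitones -> ~440 Hz

# 'up' is roughly half as long: the duration changed along with the pitch.
```

A proper pitch shifter combines time-stretching with resampling (or phase-vocoder tricks) so length, pitch and formant can be controlled separately, and that independent formant control is the knob you're reaching for with the Burial sound.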
That's an interesting question. I've never worked with film, but from a logical point of view I would imagine you can get an idea by understanding the Foley process. Foley is basically creating sounds for film (i.e. a bat's wings flapping will be an umbrella opening and closing, a punch will be someone smacking a side of ham, lightsabers will be a cardioid mic swung between two feedback sources), and this will extend to everything but the actors' lines as far as I know. So there'd be no need to cut or chop between noises because it will just be one track playing all the time; if the scene calls for crickets and wind, there'd just be one track of crickets and wind playing the whole time, and *perhaps* a little post-processing would be applied to make it sound like it's further away (i.e. if the camera drops back, volume is dropped also to imply distance). However, although I know about Foley I do not work with film, so my thoughts could be complete bunkum! I'd imagine that's how it works though.
Your question makes perfect sense!
This probably isn't going to be the answer you want, but for me, there's no real set process. Sometimes I start with a main melody; in this instance I'd make a killer melody, then build all the kicks / bass / percussion under it, then work backwards from there making the breakdown, and then maybe go all the way back to the bassdrop, build up the intro from there and fit it all together.
Sometimes I start the opposite way; i.e. from the kick, build the bass, then percussion, then intro, then main riff. Almost always, though, I make the main melody BEFORE the breakdown. The reason for this is that it's all too easy to make an out-there chord sequence that you think is the mutt's nuts, only to find out that the amazing bass you've made in the intro sounds utter pants when being pitched up and down onto other notes away from the root. So by making the main melody and ensuring it works with your bassline, you're saving yourself a lot of potential ballache, and even worse, the old classic of "I can't get it to work, I'll move on to a new track for now until I figure it out"... and of course the incomplete track sits there gathering digital dust.
A few things to think about when starting your track;
- I prefer to have the vocal (whether it's a one-shot, spoken movie dialogue or sung vocal) ready to roll before the project, although obviously that's not possible sometimes. It's a very good idea to have a clear idea from the start what key you are working in, so you can tune the kick drum to the root note of your track. Otherwise you run the risk of having to repitch everything you've already made to suit the amazing vocal you've found. I'd rather spend 3 hours searching for a decent vocal than spend 6 hours shoe-horning one in to a track it wasn't designed for. However there have been those occasional "haha" moments when a (seemingly) randomly selected vocal just slots perfectly into place!
- Having a track template is an AWESOME idea. Let me explain; I don't mean use the same kick, bass and perc in every track! However, if you think about it, every time you start a new track you will have a channel for crash cymbal; channels for kick; channels for perc, FX, bass, synths, snares... endless channels. It's a fantastic idea to create a template with all your channels linked to the relevant mixer channels, all named, coloured and organised in a manner which is pleasing to the eye and which you can look at 'at a glance' and know where you are in the inevitable huge number of tracks you'll end up with writing most dance music styles. You'll also tend to settle on a 'go-to' reverb for your main send reverb; I try to use the most powerful (in terms of CPU) reverb I can handle for my main reverb, as if many things are being bussed to it, you want them all to sound as good as poss. So you can see that by setting up a template in this manner you're saving yourself from having to do mundane repetitive tasks every damn time you start a track! Thereby freeing up more time to be creative and concentrate on the fun (hopefully) part... the music!
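On the earlier point about tuning the kick to the root note of your key: the maths behind it is just the standard equal-temperament MIDI formula. A quick sketch in Python (`note_to_hz` is my own throwaway helper, not something from any DAW):

```python
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_hz(name, octave):
    # Standard 12-tone equal temperament, with A4 = 440 Hz = MIDI note 69.
    midi = 12 * (octave + 1) + NOTE_OFFSETS[name]
    return 440.0 * 2 ** ((midi - 69) / 12)

# Track in F minor? Tune the kick's fundamental near F1:
print(round(note_to_hz("F", 1), 2))   # 43.65 Hz
```

So rather than nudging the kick's pitch knob by ear until it vaguely sits, you can aim its fundamental straight at the root (or an octave of it) from the start.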
If anything I've said doesn't make sense (I too have had a long weekend haha) feel free to hit me up for clarification.
How can I make the synth from DJ Bl3nd's club mix? It starts at 20 seconds and has kind of a strobe feel. I have FL Studio 11 and 10, and Reason Essentials... this mix can be found on YouTube.
Have you ever used a Maschine? / Any opinion on it?
I've used one round my friend's before (she is a git, and has all sorts of lush, amazing new and old school hardware... an actual TR-909 *drool*). I can definitely see the attraction; however, IMO it's more of a rich boy's toy. Perhaps it is my inexperience with the gear, but it seems to me there's nothing you can achieve with Maschine that you couldn't achieve by assigning several samples to different keys of your MIDI controller via Kontakt, for instance. However, I would concede that sounds assigned to different keys won't have that same 'drumpad' playback feel as things like Maschine.
I personally have my eyes on a Novation Launchpad (or equivalent). They look properly amazing for any live work or beat mashing. Have any of you seen the amazing Skrillex / Knife Party mashups done with Launchpad? Truly mindblowing. http://www.youtube.com/watch?v=PuW1aZNol5U
Hey, thanks for that advice, really gave me a different way to approach some of the things I'm doing.
I'm trying to play around with some 8-bit and/or glitch-type sounds for some things I'm working on right now. Do you have any suggestions on VSTs that would be good at generating those types of sounds, or effects that could be applied?
Also, something I've been wondering for a while; what are some good ways to produce robotic type effects on vocals? Not autotuned style stuff that's real common but something like what you hear at the beginning of "Son of x-51" by Powerman 5000 http://www.youtube.com/watch?v=konPM9G9g9w
I know there's a few different effects going on there, obviously a reverb and some eq, but I remember playing around trying to get it years ago but just couldn't quite get it.
I'll have to get back to you re the 8bit thing as I'm about to go to bed and just noticed this but am sure I've got a couple of vsts that are for 8bit stuff. I had a quick listen to the start of that track though and I'm pretty sure that was made with the Fruity voice synth (part of FL Studio). Set to 'Martian' if memory serves me correctly, been a while since I used it and there's a few cool modes so might be another one. You can have it talking 'naturally' (I use that term loosely lol) or set it to a fixed pitch like 'Son of x-51'.
Hmmm, curious. I had a good root around my (admittedly ridiculously large) collection of plugins but couldn't seem to find the chiptunes stuff. A quick googling threw up this page though:
http://www.boyinaband.com/2009/09/top-1 ... -plug-ins/
This is an excellent post transc3nd and we're going to sticky it.
Thanks ever so much for your contribution and we look forward to watching this topic of yours grow.
Can I ask about audio interfaces?
I have been using an EMU 1212m on my old desktop to record DJ mixtapes and produce music. Then later I got mobile with a laptop and an NI Audio Kontrol 1 for my DJ gigs. Now that I am thinking of producing music, I am thinking of getting an RME Fireface UC, or perhaps with enough budget the RME UCX. I have a few questions:
- Say I am to link up with studio equipment (either some producer's studio or an audio engineer's studio) with the RME interface, what's the best setup? Is it by using ADAT?
- On the production side, for long-distance work like track sharing, say I want to send my Ableton project to another producer. What's the best way to send it? Should I record track by track, or leave the MIDI tracks and their VSTs (if the receiver has the same VSTs it's fine, but what if they don't)?
Funnily enough, I'm using NI audio Kontrol 1 right now... love the sound of it although it can be a bit buggy sometimes. So unfortunately I can't give you much info on the first q other than to buy the best soundcard you can afford
Re your second question: if the person you're collabing with is using Ableton and the same version, AND all the same VSTs as you, you'll have no probs. But obv if you want to ensure total compatibility between two different studios in any circumstance, you'd have to bounce everything to audio stems, which can be time-consuming. Would prob be best to find out where the other person is at with their DAW before you do a remix pack, as then you'll know how much they will need from you.
Hey, I was just wondering what DAW you use? I use FL Studio personally, and I was told it's amateurish, even though I find it just as productive as any other DAW. I was thinking of switching to Ableton Live, just to be "more professional", since I don't have a Mac and cannot get Logic. Do you think it is worth it to switch? Even if some DAWs seem more professional than others, I always figured it was preference. Thanks.
What is the best software limiter to produce that chunky, loud sound of commercial tracks?
Hello there. I am looking forward to producing handsup music, since it is my favorite sub-genre in the EDM scene. I can already create decent THT-styled beats with various programs, but I am wondering where I can get acapellas from handsup songs I am planning to remix.
A few examples of songs I may remix in the near future:
-Mike de Ville Ft. Frank Magal - "Everybody Dance"
-Cueboy & Tribune - "Breathless"
-Tomtrax - "Mono 2 Stereo"
... And so on.
So basically I am wondering how to get acapellas for such songs, because I can't find them anywhere.
Any help is greatly appreciated, thanks and have a great day!
I need to remove the vocals from a track. I have the original, I have an acapella. Is there some way I can phase invert the acapella to remove the vocals and create an instrumental?
Hello, I would like to know if Logic Pro X's AUs are enough to make pop music (Demi Lovato, Ne-Yo, Britney Spears, Lady Gaga, Katy Perry, etc.), or are there some VIs I can use to make some great POP music? This is more about synths, strings, leads and pads, not guitars or bass, because I record those live.
Thank you very much.
Why is FL considered amateur?
Any tips on getting rid of interference between instruments in the mid range and getting them to come through clearer (i.e. between a synth and a piano)? Any EQing techniques you would recommend? Also, any tips for filling up more space in my mix if it's a bit empty?
How do I get the greatest possible dynamics in my productions?
Hey there everyone. Apologies for late response, been variously ill and busy... in all honesty I completely forgot about this thread!
So, then to the questions!
I started in FL, I now use Cubase. FL does indeed have an 'amateur' tag, this is because the design is skewed towards those with less musical / sequencing knowledge (or none at all). Also down to the design of the DAW itself, it is all too easy to produce poor-quality music. You can however make fantastic, tight music with FL. To do this you will need two things which I will cover below... 1) a monster PC, 2) knowledge and mastery of what I call 'gain flow'. Also I haven't used FL since 3.56 (showing my age here) so apologies if any of my info is out of date!
Have you noticed the 'quality settings' in FL? You have a quality setting for export, right? You might not have noticed, but you also have a quality setting (QS from now on, to save me time) on the main mixer. If your PC isn't monstrous, you will most likely never use the higher modes and will stick to stuff like 6-point hermite if it's really poor! If it takes you an hour to render your project in the highest mode, you will likely never do so. Immediately here, you are sacrificing sound quality (Cubase has no QS; everything is rendered and processed at maximum quality!). Now, onto the mixer QS! This could be thought of as the difference between transferring, then listening to, a 320kbps and a 128kbps mp3. Sure, the 128kbps one will transfer twice as quickly, but it SOUNDS CRAP. Unless your PC is powerful enough to handle the mixer at the highest QS, then you aren't even listening properly to what you are creating. Factor that in with the likelihood that your room isn't perfect and your monitors aren't perfect, and wham, you've introduced yet another layer of lies to your listening environment.
"But I could just bounce stuff to audio" I hear you cry (am I having another flashback?). Yes, yes you could; that would take the heft off the processor. *Remember* though, unless you have set the QS for output to absolute maximum, then all you are potentially doing is stripping away some sound quality upon render, importing it in, whereupon you will RENDER IT AGAIN in the final project, thereby stripping away yet more. That right there was the reason I switched to Cubase, but Image-Line would be crazy not to have bothered updating the summing engine over the course of a decade.
Now to point 2)... 'gain flow'. This right here is THE MOST IMPORTANT TOOL YOU MAY EVER LEARN. I know that sounds really arrogant and boastful but trust me, when you start applying this you may just cry tears of joy at the difference.
Imagine your mixer: you have a master volume control. You also have volume controls in the form of faders for each channel. Those volume controls even have volume controls on them! Not to mention the vol controls on the step sequencer, piano roll, internal sample vol controls, the vol control on each EQ, in fact on practically every vst going. What happens when something 'clips' ie goes over maximum volume? You get distortion. Usually... bad, horrible, unintended distortion.
So I imagine that little nugget has got you thinking "Damn, I need to make sure nothing is clipping anywhere in the project". That's exactly what you need to do, but it doesn't end there. Think about your sample collection... I'd imagine you've got stuff like Vengeance. Look at the samples once imported. The volume is either at 100% or near. Don't just turn down the faders, turn down the samples BEFORE they go into the faders. Otherwise, you're simply turning down a clipped signal! Trust me this one small technique is the difference between a 'plastic' sound and a full, fat sound. So, if your master bus is clipping, and you turn down the master bus, great it's not clipping on the master bus anymore, but the sounds are all too hot going in and need to be turned down before they get there.
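Here's the gain-flow point as a toy numerical experiment (Python/numpy; the 0.95 'sample pack' levels and the sine-wave 'instruments' are made up purely for illustration). Turning a bus down after the sum has clipped cannot undo the distortion; turning the sources down before they hit the bus keeps it clean:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
# Two 'samples' normalised near full scale, like most pack content ships.
kick  = 0.95 * np.sin(2 * np.pi * 60 * t)
synth = 0.95 * np.sin(2 * np.pi * 220 * t)

# Wrong: sum the hot signals (the bus clips), THEN pull the fader down.
bus_after = np.clip(kick + synth, -1.0, 1.0) * 0.5   # distortion is baked in

# Right: turn each source down BEFORE it reaches the bus.
bus_before = np.clip(0.5 * kick + 0.5 * synth, -1.0, 1.0)

clean = 0.5 * kick + 0.5 * synth   # the mix we actually wanted
err_after  = np.max(np.abs(bus_after - clean))   # large: peaks flattened
err_before = np.max(np.abs(bus_before - clean))  # zero: nothing clipped
```

The 'wrong' bus has had every peak above full scale flattened before the fader ever touched it, and that flattening is the 'plastic' sound I'm on about.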
Finally, and I'm not sure if what I'm doing here is essentially achieving the same as "k-metering", but I use an EQ plugin called 'equilibrium' to check the outputs of things. For example, your fader volume readout could be telling you -1db, and you think cool, everything is gravy here. But if you were to look at it with a plugin which shows the mid-side, you'll see that actually the side or the mid is clipping. I have a sneaking suspicion that all DAWs are doing some kind of M-S processing in the background as since I started not just turning down overloud things, but turning down things so the M-S doesn't clip, my music has gone from cool to wow. Search youtube for 'freemasons k-metering' for a really interesting vid by one of those producers. Hope this helps, if you don't understand anything drop me a PM
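To be clear, the idea that DAWs secretly sum in M-S is pure speculation on my part, but the arithmetic of why a mid meter can read hotter than either speaker channel is easy to show. A sketch in Python/numpy, using one common energy-preserving (L±R)/√2 convention (meters differ on the exact scaling, so treat the numbers loosely):

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
# A mostly-mono lead: identical content in both channels, peaking at 0.8.
left  = 0.8 * np.sin(2 * np.pi * 440 * t)
right = left.copy()

# One common mid-side convention; other meters use (L+R)/2 instead.
mid  = (left + right) / np.sqrt(2)
side = (left - right) / np.sqrt(2)

peak_lr   = max(np.max(np.abs(left)), np.max(np.abs(right)))  # ~0.8
peak_mid  = np.max(np.abs(mid))    # ~1.13: over full scale, L and R aren't
peak_side = np.max(np.abs(side))   # 0: no stereo difference at all
```

So strongly correlated (mono-ish) content is exactly where a mid readout can go over even though both channel faders look fine.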
Hi Sebastian. Good question.
This isn't going to be the answer you want to hear, but as someone who in the past has been limiter-mad... while you do need to use a limiter on the final master, use of limiters in your project isn't always desirable. To cut a long story short, what I've found is that when you use limiters to keep smashing everything down, while it 'looks' like you've shaved some peaks and thereby gained some headroom, what you've actually done is traded dynamics and quality for headroom. And it's debatable whether it'll save you much headroom in the context of the entire mix. Sometimes limiting stuff can lose headroom in the whole context!
Without doubt, the best software limiter out there is A.O.M Invisible Limiter. This thing is incredible and has up to 16x oversampling. I've managed to get things sounding louder without being 'flat' sounding using this, I can go much further than any other limiter.
I've come to realise though, just because you CAN, doesn't mean you SHOULD. A chunky, loud sound comes from a solid mixdown in a well-treated room, with decent monitors. May I ask, what sort of environment are you writing in? Are you aware of how much your room modes and monitors can lead you to an unbalanced track? Apologies if I'm teaching granny to suck eggs here, but on the off chance you're not aware... even if your room is dead-on perfectly treated and perfect dimensions, your track will ALWAYS sound different the minute you leave the room you wrote it in. The difference being, in a well-treated room it will translate well to each new environment, while something written in a badly-treated room will only sound good in that exact room.
If you've not got a perfect room (I'll be honest, I'm a working engineer and I don't, luckily there are fixes or I'd be out of a job!), there are steps you can take. Definitely get some form of treatment up, it's really not expensive. Secondly, buy yourself a copy of ARC2. This is something you really don't want to pirate just in case anyone is considering it, as you will need the specialist mic that comes with it to record the room modes. If you're unfamiliar with this product, trust me as a user, THIS THING IS AMAZING. Just amazing. Very hard to trust at first as it makes things sound 'wrong' but you have to trust that it was wrong in the first place, and now it's right. http://www.ikmultimedia.com/products/arc/
Good luck with your quest for fatness Sebastian!
Hello there AussieAxeman and xmsman! Art thou the same entity? Anyway, as the questions were so similar, I figured I could lump them together.
The 'axeman has the right idea... yes, you can use phase inversion to create acapellas this way; I've uploaded some to this very server myself in the past that I created with this method. 'axeman, this will prob already be known to you, but as xmsman doesn't seem to be aware of the method, I'll explain it in detail.
As producers, I'm sure you've come across the dreaded phase before. E.g. you put a sound in, and all of a sudden your kick has gone to s**t, because the sound has 'phased out' bits of the kick's bass. Why is this?
Answer: say you have a saw wave: \/\/. Now imagine you inverted the phase: /\/\. If you were to put the original (uninverted) and the new inverted saw over each other, exactly lined up, they will produce silence. Try it in your own DAW, because what I'm going to ask you to do next will illustrate exactly why creating your own pellas sometimes doesn't work very well (sometimes it produces gold, but not very often).
So: take the inverted saw and shift it to the left or right; zoom right in so you're shifting it by an incredibly tiny amount. What you will notice upon playback is that the sounds no longer cancel each other out! They may to some degree, but some crud will be left behind. Now set them back so they are producing silence again... and increase / decrease the vol of one of them. Notice again, the perfect phase is gone.
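If you'd rather see that experiment in numbers than in a DAW, here's the same thing sketched in Python/numpy (the 110 Hz saw, the one-sample nudge and the 0.1 dB gain change are arbitrary illustration values):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
saw = (t * 110) % 1 - 0.5    # a 110 Hz saw wave, range -0.5..0.5
inverted = -saw              # the phase-inverted copy

# Perfectly aligned and level-matched: total cancellation.
perfect = np.max(np.abs(saw + inverted))   # exactly 0.0

# Nudge the inverted copy by just ONE sample: cancellation is gone.
residue_shift = np.max(np.abs(saw + np.roll(inverted, 1)))

# Or keep it aligned but change its level by a mere 0.1 dB: residue again.
residue_gain = np.max(np.abs(saw + inverted * 10 ** (0.1 / 20)))
```

Which is exactly why a tiny alignment slip, an mp3 encode, or a slightly different limiter setting between the two versions of a song leaves bleed behind instead of silence.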
Now, continuing on this principle, you can imagine if like 'axeman suggests you take a radio (vocal) edit of a song, then take the instrumental edit, invert the phase, lay one over the other, it SHOULD cancel out all the music leaving you with a vocal to ravage! Unfortunately, reality often barges in at this point.
Remember how moving the inverted saw even a minuscule fraction off the same volume or position didn't produce a perfect silent phase? Well, imagine you've made a pro pop song. Then you are asked to do an instrumental. Even if you were to simply mute the vocal track, 99.99999% of the music you listen to will have been mastered. Because the original vocal mix and the instrumental mix will both be mastered, it's highly unlikely that this processing will leave the instrumentation the same in both versions. So if the level of a snare drum happens to come up by 0.1 dB (e.g. due to the limiter being set slightly higher, as the mastering engineer felt the track could be louder now it has no vox), when you do the phase invert and overlay the two tracks, you're going to get some bleed from that snare drum. Hope that makes sense!
So it requires not just the skill (it must be lined up BANG ON) and knowledge of how to do it, but the source material must be perfect. You will struggle to achieve this, for instance, with mp3 instead of wav, as the mp3 encoding will have altered the original data, making it nigh-on impossible to match the peaks.
Hi there JR.
I don't use Logic in my studio, however one of the things I have taken away from times I've used it in the past was how 'solid' all the stock plugins sounded. I would feel confident in saying that Logic has all the tools you need to make great music.
For that RnB-esque pop (going by the artists you mentioned), as important as the sounds are, the mixdown and clarity are just as important. Listen to tracks by those artists and notice how the production is, at the same time, sparse yet full. It's very tempting to just keep piling on sounds and instruments, but it's better to have 4-5 things in their own space than 20 competing.
If you want interesting and unique synths / strings / lead / pads and anything in between, I would strongly recommend NI Massive (anyone who uses this synth purely for wubs should have all their fingers removed and donated to a better cause, it's so versatile and very 'phat' sounding if you use it right) and Camel Audio Alchemy for 'out-there' atmos and pad sounds.
All the best with your pop production, a very lucrative market indeed if you can get your foot in the door.