
Digital Music Production Basics

Updated on August 13, 2016

Soundcards

It is common for musicians who have just started out in independent music production to eventually reach a point where they need to understand the signal chain: the hardware or equipment that carries your sound from the source into the computer and then onto a recorded track.

The primary part of the signal chain in digital music production is the soundcard, and it is an extremely important piece of hardware that all musicians and audio enthusiasts need to be familiar with...

The first place to start, if you are just beginning to research soundcards, is to understand the fundamental difference between a soundcard and an audio interface.

Audio Interface: Commonly referred to as a soundcard for the sake of ease, an audio interface (AI) is actually a pre-amp and a converter combined into one hardware unit. Its purpose is to raise the signal from standard microphone level to line recording level and then convert the analog signal into a digital one.

Most audio interfaces also include forty-eight volts of phantom power for use with condenser microphones and can be connected to your computer through a USB or FireWire connection.

Soundcards: Soundcards are just audio converters; the name itself is used most often when speaking about normal everyday computers, because in that setting the card converts the digital signal in the computer into the analog one you hear from the speakers.

Not many people use the soundcard to input sound, although it can serve that function.

External soundcards sit outside the computer's case and are another option for people who are interested in taking their music to the next level but don't want to open up their computers.

What's the Difference?

Primary: Soundcards do not have pre-amps; they are single-unit converters that only translate signals between analog and digital.

Secondary: Audio interfaces contain both a pre-amp and a converter, so they can raise a microphone signal to line level before converting it.

Audio Effects

Audio processors and audio effects are two very different things: an audio processor takes a signal and changes it entirely, creating a brand-new signal, whereas an audio effect takes the original signal and combines it with a new one.

Dynamic processors and frequency processors are two examples you can look at; these change the frequency and dynamics of the audio signal entirely, an approach sometimes referred to as the "chaotic method".

Other traditional effects are things like chorus, delay and reverb; even though they are different, most audio enthusiasts never truly learn the differences between them and lump them together under the blanket term "effects".

Effects can be used in digital music production for technical or aesthetic outcomes, and they often give us more control over the final "shape" of the sound we want.

An essential part of digital music, effects are encountered in both the digital and analog sides of music production. Even though there are multitudes of digital effects available today, most of them fall into the following three types.

Dynamic Effects: These change the dynamics and tone of a signal; traditional examples are the compressors and limiters that are frequently used to shape the tone of a performance or studio recording.

Whether it is making a loud part softer or a soft part louder, these kinds of effects are very common in digital music production.
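
To make the "loud part softer" idea concrete, here is a minimal sketch of the crudest dynamic effect, a brick-wall limiter, written in Python with NumPy. The -1 dBFS ceiling is an arbitrary assumption, and a real limiter smooths its gain changes rather than clipping instantly; a fuller compressor sketch appears later in this article.

```python
import numpy as np

def limit(signal, ceiling_db=-1.0):
    """Brick-wall limiter sketch: no sample is allowed above the ceiling."""
    ceiling = 10 ** (ceiling_db / 20)          # dB to linear amplitude
    return np.clip(signal, -ceiling, ceiling)

# Illustrative input: a sine wave hot enough to clip at 0 dBFS.
loud = 1.5 * np.sin(np.linspace(0, 20 * np.pi, 44100))
safe = limit(loud)                             # peaks now sit at -1 dBFS
```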

Frequency Effects: These are used to change the frequency content of a specific signal; by altering that content you can achieve a brighter or darker sound, depending on how you choose to implement the effect.

This becomes very useful in situations where the recorded music or beats are severely lacking in dynamics or tonal consistency, and also in instances where audio needs to be enhanced with the help of things like distortion or equalizers.
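
As one quick illustration, here is a sketch of a classic frequency effect, soft-clipping distortion, which brightens a sound by adding new harmonics. The sine-wave input and the drive value are illustrative assumptions, not settings from any particular unit.

```python
import numpy as np

def soft_clip(signal, drive=4.0):
    """Soft-clipping distortion: tanh rounds off the peaks and adds harmonics."""
    return np.tanh(drive * signal) / np.tanh(drive)

fs = 44100
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2 * np.pi * 220 * t)       # a plain 220 Hz sine

# The distorted tone now carries energy at 660 Hz, 1100 Hz, ... - a brighter sound.
distorted = soft_clip(tone, drive=4.0)
```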

Time Effects: These are very versatile effects from the delay family, and they come in a multitude of varieties such as flangers, phasers, echo and, of course, reverb; the primary difference between these categories is the delay time between the original and the effected signal.

For example, a flanger has a very quick time between modulations, whereas an echo will traditionally have a much longer, more linear delay time.
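
To show the family resemblance, here is a rough sketch of a simple feedback echo; the 0.3-second delay and 0.5 feedback values are arbitrary assumptions chosen to make the repeats obvious. A flanger would use the same basic structure with a delay of only a few milliseconds, modulated over time.

```python
import numpy as np

def echo(signal, fs, delay_s=0.3, feedback=0.5, mix=0.5):
    """Feedback echo: each repeat is a delayed, quieter copy of the last."""
    d = int(delay_s * fs)                      # delay time in samples
    out = np.zeros(len(signal) + 5 * d)        # extra room for the decaying tail
    out[:len(signal)] = signal
    for i in range(d, len(out)):
        out[i] += feedback * out[i - d]        # feed the delayed output back in
    dry = np.pad(signal, (0, 5 * d))           # dry signal, padded to the same length
    return (1 - mix) * dry + mix * out

fs = 44100
clap = np.zeros(fs)
clap[0] = 1.0                                  # an impulse standing in for a hand clap
wet = echo(clap, fs)                           # repeats at 0.3 s, 0.6 s, ... each quieter
```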

Compressors

During the final stage of music production, a variety of hardware is available for us to use, and it usually falls into either the effects group or the processor group. If you didn't know, a processor is hardware that takes the complete signal and alters it entirely, thus changing the initial signal.

Generally speaking, a compressor is a processor that regulates the tonality and overall dynamics of a musical performance.

We use it in a multitude of ways to create a more solid and sharp sound; however, as with any hardware, there are certain settings we can change to steer the sound and tone the way we want. The main parameters are listed below, with a short code sketch after the list that ties them together.

1: Attack: This setting decides the speed of the compressor and how quickly it reacts to a signal that moves over the threshold.

Attack is usually measured in milliseconds; for example, a fast attack setting would be around 4 ms, meaning that 4 ms after a signal crosses the threshold, it is compressed to the desired setting.

2: Input Gain: This increases the level of the inbound signal before it reaches the compressor and can assist compression in a number of ways…

Used to regulate dynamics, raising the input gain lifts the quieter sections of the signal so the compressor can act on them as well, letting you hear a crisper result without boosting noise later in the chain.

3: Ratio: This decides the amount of compression that will actually take place, expressed as ratios like 4:1 and 2:1.

For example, a ratio of 2:1 means that for every 2 dB the signal rises above the threshold, the output is only allowed to rise 1 dB above it; a signal 8 dB over the threshold would come out just 4 dB over.

4: Threshold: This is the value at which the compressor starts activating. In digital audio production the highest possible signal is 0 dBFS, so if we set the threshold at -24 dBFS, any signal that is louder than that will automatically be turned down.

5: Release: This is essentially the polar opposite of attack, because it decides how long the compressor holds its gain reduction once the signal returns to a suitable level below the threshold.

A short release lets go of the gain reduction almost straight away, even if the signal only dips briefly below the threshold, while a lengthier release keeps the reduction in place further into the signal.
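
To tie the five parameters together, here is a minimal sketch of a feed-forward digital compressor in Python/NumPy, under the simplifying assumption that we measure the level in dB per sample and smooth the gain with one-pole attack and release coefficients; real units add side-chain filtering, a knee and metering on top of this core.

```python
import numpy as np

def compress(signal, fs, threshold_db=-24.0, ratio=4.0,
             attack_ms=4.0, release_ms=120.0, input_gain_db=0.0):
    """Feed-forward compressor sketch: level detection -> gain computer -> smoothing."""
    x = signal * 10 ** (input_gain_db / 20)            # 2: input gain
    level_db = 20 * np.log10(np.abs(x) + 1e-9)         # level of each sample in dBFS

    over = np.maximum(level_db - threshold_db, 0.0)    # 4: dB above the threshold
    target_db = -over * (1 - 1 / ratio)                # 3: e.g. 4:1 keeps 1 dB of every 4

    # 1 and 5: smooth the gain so it moves at the attack/release speeds.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000))
    gain_db = np.zeros_like(target_db)
    g = 0.0
    for i, tg in enumerate(target_db):
        coeff = a_att if tg < g else a_rel             # attack while clamping down
        g = coeff * g + (1 - coeff) * tg
        gain_db[i] = g
    return x * 10 ** (gain_db / 20)

fs = 44100
t = np.arange(fs) / fs
loud_then_soft = np.where(t < 0.5, 0.9, 0.1) * np.sin(2 * np.pi * 440 * t)
out = compress(loud_then_soft, fs)                     # the loud half is pulled down
```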

It all really depends on the hardware that you choose to plug in and use; different compressor units may have fewer or more settings than the ones listed here, such as a low/high frequency cut or a gain-reduction meter.

Ultimately though, because every piece of software and hardware has its own creative flair and features, it's often best to make sure you are using high-quality sounds, like any audio engineer or DJ would. With all of this available to us today, get out there, use intuitive software and tap into your own inner creative spirit!



Mixing Basics

When recording live performances and studio session musicians, one of the first steps in digital music production is combining the different signals and tracks into one final master mix…

This is called "mixing", and it is the process of blending two or more sound signals into one single musical composition.

The fundamentals of mixing can be divided into three areas:

1: Volume (Or Level):

When mixing levels, we tie the sections of a composition together by changing the level of each track or instrument layer; if something doesn't sound quite right, we can alter it using the fader on the mixer.

On a Digital Audio Workstation (DAW) you can do this virtually via your software-based volume faders…

If an element sounds too distant or too quiet, you can increase its level to make it brighter and louder, pushing it slightly toward the front of the mix, as the short sketch below illustrates.
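
As a minimal illustration of level mixing, the snippet below sums two tracks after applying a fader gain in dB to each; the stand-in tracks and the -6 dB and -2 dB settings are assumptions made up for the example, not recommended values.

```python
import numpy as np

def fader(track, gain_db):
    """Apply a fader setting: convert dB to linear gain, then scale the track."""
    return track * 10 ** (gain_db / 20)

fs = 44100
t = np.arange(fs) / fs
drums = 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))     # stand-in rhythmic part
bass = 0.5 * np.sin(2 * np.pi * 55 * t)              # stand-in bass line

# The mix: pull the drums back a little, keep the bass almost at its recorded level.
mix = fader(drums, -6.0) + fader(bass, -2.0)
mix = np.clip(mix, -1.0, 1.0)                        # guard against digital clipping
```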

2: Equalizing (Or Frequency):

Equalizing is often one of the more advanced techniques in digital audio mixing. To equalize correctly, a sound technician needs to know the harmonic structure of the individual parts and then understand how changing their frequency content alters the timbre of the instrument or sample…
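
As a hedged example of one common equalizer building block, here is a peaking filter based on the widely used Audio EQ Cookbook formulas; the 3 kHz centre frequency and +6 dB boost are illustrative assumptions chosen to brighten a dull-sounding part.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, fs, f0=3000.0, gain_db=6.0, q=1.0):
    """Peaking EQ (Audio EQ Cookbook): boost or cut a band centred on f0."""
    a_lin = 10 ** (gain_db / 40)                     # square root of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)

    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], signal)

fs = 44100
t = np.arange(fs) / fs
dull = np.sin(2 * np.pi * 200 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
brighter = peaking_eq(dull, fs)                      # the 3 kHz part comes up by ~6 dB
```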

3: Panning (Or Stereo Image):

When you are mixing a final project, especially a stereo mix, which seems to be the most common format these days, you can balance the sections of the composition by altering their stereo image location; this is also known as panning.

Sometimes referred to as "pan", this is short for panoramic potentiometer; "panoramic" suggests that you can place the sound signals and sections in an audio field that stretches from left to center to right and beyond…

At the very start of the recording era there was only mono technology, so there was really no need to decide where a track was positioned; but now, with stereophonic technology, we can pan and shift our music's shape endlessly.

Panning must be done with serious consideration, and you should always refer to other successfully mastered tracks that feature panning to get a better understanding of how to use the feature yourself with sensible methods and settings; the sketch below shows the standard math behind the control…
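
Here is a small sketch of the pan control itself, using a constant-power pan law, which keeps the perceived loudness roughly steady as a sound moves between the speakers; the sine/cosine law is one common convention, and the pan position used is an arbitrary example.

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan: position -1.0 = hard left, 0.0 = center, 1.0 = hard right."""
    angle = (position + 1) * np.pi / 4           # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=-1)      # a (samples, 2) stereo array

fs = 44100
t = np.arange(fs) / fs
guitar = 0.5 * np.sin(2 * np.pi * 330 * t)

stereo = pan(guitar, -0.5)                       # place the guitar halfway to the left
# At center (0.0) both channels get cos(pi/4) ~ 0.707, so cos^2 + sin^2 = 1 keeps
# the total power constant wherever the sound is placed.
```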
