Power Field Studio

Friday, August 11, 2017

Making Money With Your Music - 6 Different Income Sources

A different take on making money from your music: The 6 different kinds of income sources

First of all thanks to Cheryl B. Engelhardt for this article.

Balancing sanity and satisfaction (and your checkbook) in your music career.

There are a bajillion articles, courses, and books on making money in the music industry. Heck, even my mastermind course covers this broad topic simply because we have to talk about it.
All of the different ways to make money — from house concerts to monetization to Spotify playlists to merch — can be overwhelming. I sometimes feel like I’m not doing enough, not tackling enough areas, not following the current trends. It’s information overload in its scariest form: fear of missing out on an opportunity that could change the course of your music career.
The truth is, there may be a different perspective to take. The most successful independent musicians I know are the ones wearing multiple hats, with a different source of income under each. Each hat is something that inspires and motivates them, gets them excited to get out of bed in the morning, and fulfills them. When what you are doing is fulfilling, the money becomes a secondary conversation.
It just often feels so hard to get there.
It’s taken me over a decade to be able to say I have a fulfilling and diverse music career. I took a look at my own income-source balance, and I’ve discovered three pairs of income characteristics that require a delicate balance to hit that sweet spot of fulfillment and the ability to pay the bills!

Passive vs. Active Income

This is about time.
Passive income is money you make while sleeping:
  • royalties from radio plays or TV placements
  • digital sales and streams
  • sales of online products like teaching videos or merch
Active income is what you make when you put your time into something:
  • playing a live gig
  • composing a film score
  • teaching guitar lessons
A lot of passive income comes after putting in significant time upfront to create a product (a CD, a course, a merchandise item), but it gives you longer-term freedom than an upfront sale of your time.

Predictable vs. Inconsistent Income

This is about consistency.
Early on, I had the realization that my FAVORITE sources of income were ALL inconsistent:
  • writing music for commercials (super competitive with low chances of airing/being paid)
  • playing a sold-out show (not something I could do every day)
  • scoring a film (which takes months to do for a one-time fee)
This led to some really great months, followed by barely-paying-rent months. Saving was hard and I was always stressed. Teaching piano lessons, on the other hand, is fairly predictable. I only sell packages of 5 or 10 lessons upfront, and I have a strict cancellation policy, so even if someone doesn’t show, I still make money.
Balancing predictable and inconsistent income sources has been the source of finding peace of mind, and peace of bank account (not a thing, but you know what I mean).

High vs. Low Paying Opportunities

This is about volume.
When I’m selling CDs for $10 at a show, I make sure I announce that I have them for sale, where they are, and how you can buy them. I do this because I know that when I’m selling something for only $10, the impact will be in the volume. The more I sell, the more it makes sense to sell them.
When I’m writing demos for commercials, ONE sold track can make the difference. So it’s a great way to plan how I will spend my time.
When you are crowdfunding, you can focus on getting 1,000 people to contribute $10 or 10 people to contribute $1,000 for the same outcome. But the strategy for each will be VERY different. If you know what you’re going for, you can really optimize your time.
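To make the arithmetic concrete: 1,000 backers × $10 = $10,000, and 10 backers × $1,000 = $10,000. The goal is identical, but the first strategy depends on broad outreach and volume, while the second depends on cultivating a handful of committed patrons.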

Wednesday, August 9, 2017

Splitting A Song’s Publishing - The Co-Writers

A publishing split sheet for co-songwriters

First of all thanks to CD Baby for this article.

Get it in writing and protect your publishing rights. 

You’ve just co-written a killer song. You’re psyched. But before you leave, before you celebrate, before you tweet about it, before you do anything else, sit back down with your collaborators and fill out a publishing split sheet.

Who contributed, and in which ways? What percentage of the song does each writer own?

There’s no better time to get on the same page with your co-writers than that moment when a song is new, memories are fresh, and you’re all feeling like part of the same team.
CD Baby has created a simple three-step PDF template to help you and all your collaborators make clear who owns what when it comes to your new original song, how the royalties should be distributed, and more.
Download our publishing split sheet so you’re ready to get it in writing the minute your new collaborative work is finished.

'Game of Thrones' Season 7 Soundtrack Will Be Released Soon

'Game of Thrones' Season 7 Soundtrack Has a Release Date


Selections of composer Ramin Djawadi's music will be released Sept. 29.

The soundtrack for Game of Thrones' current seventh season will be released Sept. 29, according to a new pre-sale listing on Amazon.
The album will be available on CD and digitally and will feature selected works from the show's composer, Ramin Djawadi.
Djawadi recently spoke to Billboard about the new season's music, saying everything is bigger now that Daenerys Targaryen's dragons are the size of 747s. "That's epic in its own right, so the music has to rise with that as well," he said. 

"There's some really beautiful, emotional scenes in this season and really amazing actions scenes," he continued. "[The dragons have] grown so much that musically I was able to push in both directions -- emotional and action-packed. A lot of times it's very dialogue-heavy and the music stays in the background so I use solo instruments like cello and violin. But now as we're coming to the end with big scenes I can use bigger string sections and brass and choirs and get much bigger orchestra sounds." 
Djawadi has scored other TV shows including Westworld and Prison Break, as well as films such as Warcraft, Pacific Rim and Iron Man, for which he received a Grammy nomination. The first Game of Thrones soundtrack in 2011 peaked at No. 17 on the Billboard Soundtrack chart. 

Monday, August 7, 2017

3 Tips For You To Find Your "Groove"

3 Tips for Finding Your Song’s Groove

First of all thanks to Cliff Goldmacher for this article.

One of the unsung elements that helps bring a song to life - and make it memorable - is the groove. By way of definition, groove is a mystical combination of tempo and feel that adds depth and texture to your song. That being said, it’s not uncommon for songwriters to relegate groove to an afterthought. Here are a few tips for focusing on and finding your song’s groove.
1. Try accenting different words in your melody
Sometimes it’s as simple as changing which words in your lyric you choose to emphasize that can make all the difference in finding a groove that works. Experimenting with how your melody falls on each beat can also lead to a more expressive delivery of your lyric while giving the music a better overall feel.
2. Try changing the groove to its opposite
This may sound counterintuitive but the history of hit songs is full of sad/contemplative lyrics set to a killer beat, so if you’re not quite sure why your song isn’t working, try changing the groove completely. Sometimes ballads take on an interesting twist when they’re changed to uptempo/groove-based songs. The same is true when up-tempo songs lack the gravity that you’re looking for - changing the groove to a ballad can shine a whole new light on the lyric.
3. Reference the tempo/feel of popular songs in the genre you’re going for
Given that certain lyrics work better in certain genres, referencing a popular song in the genre where your lyrics fit can be a great starting point when you’re looking for a groove. To keep it sounding original though, and not like a copy of the song you’re referencing, don’t hesitate to experiment with rhythmic variations from other genres to make it unique and engaging for your listeners.
Bonus Tip 
Remember that loop software isn’t just for certain genres. It can - and should - be used as a reference point for any genre of music to help you find new rhythmic feels that you might not otherwise come up with on your own.
While your song’s melody and lyric will always be front and center, the decisions you make around the groove of your song can play a big part in how your music is ultimately received.
Good luck!




AI And Music: Will We Be Slaves To The Algorithms?

AI and music: will we be slaves to the algorithm?

Pioneers of sound (left to right): George Philip Wright, Jon Eades and Siavash Mahdavi at Abbey Road Studios, London. Photograph: Sonja Horsman for the Observer


First of all thanks to the Observer for this article.



Tech firms have developed AI that can learn how to write music. So will machines soon be composing symphonies, hit singles and bespoke soundtracks?

From Elgar to Adele, and from the Beatles and Pink Floyd to Kanye West, London’s Abbey Road Studios has hosted a storied list of musical stars since opening in 1931. But the man playing a melody on the piano in the complex’s Gatehouse studio when the Observer visits isn’t one of them.
The man sitting at the keyboard where John Lennon may have finessed A Day in the Life is Siavash Mahdavi, CEO of AI Music, a British tech startup exploring the intersection of artificial intelligence and music. 
His company is one of two AI firms currently taking part in Abbey Road Red, a startup incubator run by the studios that aims to forge links between new tech companies and the music industry. It’s not alone: Los Angeles-based startup accelerator Techstars Music, part-funded by major labels Sony Music and Warner Music Group, included two AI startups in its programme earlier this year: Amper Music and Popgun.
This is definitely a burgeoning sector. Other companies in the field include Jukedeck in London, Melodrive in Berlin, Humtap in San Francisco and Groov.AI in Google’s home town, Mountain View. Meanwhile, Google has its own AI music research project called Magenta, while Sony’s Computer Science Laboratories (CSL) in Paris has a similar project called Flow Machines.
Whether businesses or researchers, these teams are trying to answer the same question: can machines create music, using AI technologies like neural networks that are trained on a catalogue of human-made music before producing their own? But these companies’ work poses another question too: if machines can create music, what does that mean for professional human musicians?
“I’ve always been fascinated by the concept that we could automate, or intelligently do, what humans think is only theirs to do. We always look at creativity as the last bastion of humanity,” says Mahdavi. However, he quickly decided not to pursue his first idea: “Could you press a button and write a symphony?”
Why not? “It’s very difficult to do, and I don’t know how useful it is. Musicians are queuing up to have their music listened to: to get signed and to get on stage. The last thing they need is for this button to exist,” he says.
The button already exists, in fact. Visit Jukedeck’s website, and you can have a song created for you simply by telling it what genre, mood, tempo, instruments and track length you want. Amper Music offers a similar service. This isn’t about trying to make a chart hit, it’s about providing “production music” to be used as the soundtrack for anything from YouTube videos to games and corporate presentations.
Once you’ve created your (for example) two-minute uplifting folk track using a ukulele at a tempo of 80 beats-per-minute, Jukedeck’s system gives it a name (“Furtive Road” in this case), then will sell you a royalty-free licence to use it for $0.99 if you’re an individual or small business, or $21.99 if you’re a larger company. You can buy the copyright to own the track outright for $199.
“A couple of years ago, AI wasn’t at the stage where it could write a piece of music good enough for anyone. Now it’s good enough for some use cases,” says Ed Newton-Rex, Jukedeck’s CEO.
“It doesn’t need to be better than Adele or Ed Sheeran. There’s no desire for that, and what would that even mean? Music is so subjective. It’s a bit of a false competition: there is no agreed-upon measure of how ‘good’ a piece of music is. The aim [for AI music] is not ‘will this get better than X?’ but ‘will it be useful for people?’. Will it help them?”
The phrase “good enough” crops up regularly during interviews with people in this world: AI music doesn’t have to be better than the best tracks made by humans to suit a particular purpose, especially for people on a tight budget.
“Christopher Nolan isn’t going to stop working with Hans Zimmer any time soon,” says Cliff Fluet, partner at London law firm Lewis Silkin, who works with several AI music startups. “But for people who are making short films or YouTubers who don’t want their video taken down for copyright reasons, you can see how a purely composed bit of AI music could be very useful.”
Striking a more downbeat note, music industry consultant Mark Mulligan suggests that this strand of AI music is about “sonic quality” rather than music quality. “As long as the piece has got the right sort of balance of desired instrumentation, has enough pleasing chord progressions and has an appropriate quantity of builds and breaks then it is good enough,” he says.
“AI music is nowhere near being good enough to be a ‘hit’, but that’s not the point. It is creating 21st-century muzak. In the same way that 95% of people will not complain about the quality of the music in a lift, so most people will find AI music perfectly palatable in the background of a video.”
Not every AI-music startup is targeting production music. AI Music (the company) is working on a tool that will “shape-change” existing songs to match the context they are being listened to in. This can range from a subtle adjustment of its tempo to match someone’s walking pace through to what are essentially automated remixes created on the fly.
“Maybe you listen to a song and in the morning it might be a little bit more of an acoustic version. Maybe that same song, when you play it as you’re about to go to the gym, it’s a deep house or drum’n’bass version. And in the evening it’s a bit more jazzy. The song can actually shift itself,” says Mahdavi.
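As a rough illustration of the simplest end of that idea, tempo adjustment, here is a minimal Python sketch using the open-source librosa library. The file name and the 120 BPM walking-pace target are hypothetical, and this shows the general technique only, not AI Music's actual system:

```python
# Minimal sketch: time-stretch a track so its tempo matches a target,
# e.g. a listener's walking cadence. Assumes librosa and soundfile are
# installed; "song.wav" and the target tempo are hypothetical.
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav")

# Estimate the track's current tempo in beats per minute.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

target_bpm = 120.0  # e.g. a walking pace measured by a phone
rate = target_bpm / float(tempo)

# rate > 1 speeds the song up, rate < 1 slows it down (pitch preserved).
y_matched = librosa.effects.time_stretch(y, rate=rate)
sf.write("song_matched.wav", y_matched, sr)
```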
Watch the Alice demo on YouTube.
Australian startup Popgun has a different approach again. Its AI – called “Alice” – is learning to play the piano like a child would, by listening to thousands of songs and watching how more experienced pianists play them. In its current form, you play a few notes to Alice, and it will guess what might come next and play it, resulting in a back-and-forth human/AI duet. The next step will be to get her to accompany a human in real-time.
“It’s a new, fun way to interact with music. My 10-year-old daughter is playing the piano, and it’s the bane of our existence to get her to practise! But with Alice she plays for hours: it’s a game, and you’re playing with somebody else,” says CEO Stephen Phillips.
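To make the "guess what might come next" idea concrete, here is a toy next-note predictor. It is emphatically not Popgun's system (Alice learns with deep networks trained on thousands of songs); it is just a minimal first-order Markov chain, with made-up training melodies, showing what learning note-to-note transitions looks like:

```python
# Toy illustration only: a first-order Markov chain that "guesses what
# might come next" after hearing a few notes.
import random
from collections import defaultdict

# Hypothetical training melodies, written as note names.
melodies = [
    ["C", "D", "E", "C", "D", "E", "F", "G"],
    ["C", "E", "G", "E", "C", "D", "E", "D", "C"],
]

# Count which notes tend to follow each note.
transitions = defaultdict(list)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def continue_melody(seed, length=4):
    """Extend a seed phrase by sampling the learned transitions."""
    notes = list(seed)
    for _ in range(length):
        choices = transitions.get(notes[-1])
        if not choices:
            break  # no data for this note, so the duet stops here
        notes.append(random.choice(choices))
    return notes

print(continue_melody(["C", "D"]))  # e.g. ['C', 'D', 'E', 'C', 'D', 'E']
```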
Vochlea, which is the other AI startup in the Abbey Road Red incubator, is in a similar space to Popgun. Beatbox into its VM Apollo microphone, and its software will turn your vocals into drum samples. Approximate the sound of a guitar or trumpet with your mouth, and it will whip up a riff or brass section using that melody.
“It’s a little bit like speech recognition, but it’s non-verbal,” says CEO George Philip Wright. “I’m focusing on using machine-learning and AI to reward the creative input rather than taking away from it. It came from thinking, if you’ve got all these ideas for music in your head, what if you had a device to help you express and capture those ideas?”
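Below is a heavily simplified sketch of that "non-verbal speech recognition" idea: classifying a short vocal sound by its audio features. The file names, the kick/snare setup, and the choice of MFCC features with a nearest-neighbour classifier are all assumptions for illustration, not Vochlea's actual pipeline:

```python
# Rough sketch: map a short vocal sound (e.g. a beatboxed kick vs.
# snare) to a drum label via audio features. Assumes librosa and
# scikit-learn; all file names are hypothetical.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path):
    """Summarise a clip as the mean of its MFCC frames."""
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical labelled examples of beatboxed sounds.
train_files = ["kick1.wav", "kick2.wav", "snare1.wav", "snare2.wav"]
labels = ["kick", "kick", "snare", "snare"]

X = np.array([mfcc_features(f) for f in train_files])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

# A new vocal sound gets mapped to the closest drum label.
print(clf.predict([mfcc_features("new_sound.wav")]))
```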
Many of the current debates about AI are framed around its threat to humans, from driverless trucks and taxis putting millions of people out of work, to Tesla boss Elon Musk warning that if not properly regulated, AI could be “a fundamental risk to the existence of civilisation”.
AI music companies are keen to tell a more positive story. AI Music hopes its technology will help fans fall in love with songs because those songs adapt to their context, while Popgun and Vochlea think AI could become a creative foil for musicians.
Jon Eades, who runs the Abbey Road Red incubator, suggests that AI will be a double-edged sword, much like the last technology to shake up the music industry and its creative community.
“I think there will be collateral damage, just like the internet. It created huge opportunity, and completely adjusted the landscape. But depending on where you sat in the pre-internet ecosystem, you either called it an opportunity or a threat,” he says.
“It was the same change, but depending on how much you had to gain or lose, your commentary was different. I think the same thing is occurring here. AI is going to be as much of a fundamental factor in how the businesses around music are going to evolve as the internet was.”
That may include the businesses having the biggest impact on how we listen to music, and how the industry and creators make money from it: streaming services. They already use one subset of AI – machine learning – to provide their music recommendations: for example in personalised playlists like Spotify’s Discover Weekly and Apple’s My New Music Mix.
The songs on those playlists are made by humans, though. Could Spotify find a use for AI-composed music? Recently, the company poached François Pachet from Sony CSL, where he’d been in charge of the Flow Machines project.
It was under Pachet that in September 2016 Sony released two songs created by AI, although with lyrics and production polish from humans. Daddy’s Car was composed in the style of the Beatles, while The Ballad of Mr Shadow took its cues from American songwriters like Irving Berlin, Duke Ellington, George Gershwin and Cole Porter. You wouldn’t mistake either for their influences, but nor would you likely realise they weren’t 100% the work of humans.
Now Pachet is working for Spotify, amid speculation within the industry that he could build a team there to continue his previous line of work. For example, exploring whether AI can create music for Spotify’s mood-based playlists for relaxing, focusing and falling asleep.
For now, Spotify is declining to say what Pachet will be doing. “I have no idea,” admits Jukedeck’s Newton-Rex. “But to the question: ‘One day, will a piece of software that knows you be able to compose music that puts you to sleep?’ Absolutely. That’s exactly the kind of field in which AI can be useful.”
What’s also unclear is the question of authorship. Can an AI legally be the creator of a track? Can it be sued for copyright infringement? Might artists one day have “intelligence rights” written into their contracts to prepare for a time when AIs can be trained on their songwriting and then let loose to compose original material?
AI Music’s plans for automated, personalised remixes may bring their own complications. “If an app allows you to shape-change a song to the extent that you can’t even hear the original, does it break away and become its own instance?” says Mahdavi.
“If you stretch something to a point where you can’t recognise it, does that become yours, because you’ve added enough original content to it? And how do you then measure the point at which it no longer belongs to the original?”
The answers to these questions? Mahdavi pauses to choose his words carefully. “What we’re learning is that a lot of this is really quite grey.”
It’s also really quite philosophical, with all these startups and research teams grappling with fundamental issues of creativity and humanity.
“The most interesting thing about all this is that it might give us an insight into how the human composition process works. We don’t really know how composition works: it’s hard to define it,” says Newton-Rex. “But building these systems starts to ask questions about how [the same] system works in the human brain.”
Will more of those human brains be in danger of being replaced by machines? Even as he boldly predicts that “at some point soon, AI Music will be indistinguishable from human-created music”, Amper Music’s CEO, Drew Silverstein, claims that it’s the process rather than the results that will favour the humans.
“Even when the artistic output of AI and human-created music is indistinguishable, we as humans will always value sitting in a room with another person and making art. It’s part of what we are as humans. That will never go away,” he says.
Mark Mulligan agrees. “AI may never be able to make music good enough to move us in the way human music does. Why not? Because making music that moves people – to jump up and dance, to cry, to smile – requires triggering emotions and it takes an understanding of emotions to trigger them,” he says.
“If AI can learn to at least mimic human emotions then that final frontier may be breached. But that is a long, long way off.”
These startups all hope AI music will inspire human musicians rather than threaten them. “Maybe this won’t make human music. Maybe it’ll make some music we’ve never heard before,” says Phillips. “That doesn’t threaten human music. If anything, it shows there’s new human music yet to be developed.”
Cliff Fluet brings the topic back to the current home for two of these startups, Abbey Road, and the level of musician it has traditionally attracted.
“Every artist I’ve told about this technology sees it as a whole new box of tricks to play with. Would a young Brian Wilson or Paul McCartney be using this technology? Absolutely,” he says.
“I’ll say it now: Bowie would be working with an AI collaborator if he was still alive. I’m 100% sure of that. It’d sound better than Tin Machine, that’s for sure…”

Try it out

You can experiment with AI music and its close cousin generative music already. Here are some examples. 
Jukedeck
As mentioned in this feature, you can visit Jukedeck’s website and get its AI to create tracks based on your inputs.
AI Duet 
Launched by Google this year, this gets you to play some piano notes, then the AI responds to you with its own melody.
Scape 
Brian Eno was involved in this app, where you combine shapes to start music that then generates itself as your soundtrack.
Humtap Music
A little like Vochlea in this feature, Humtap’s AI analyses your vocals to create an instrumental to accompany you.
Weav Run
This is part running app and part music app, using “adaptive” technology to modify the tempo of the song to match your pace.

Friday, August 4, 2017

Beats For Cash - Artists Are Making Millions With This New Model

Beats For Cash: 300K Artists Making $20M In The Music Industry's New Gig Economy

First of all thanks to John Koetsier for this article.


Today anyone with an iPad and GarageBand can make the ingredients of modern music: beats. And 300,000 artists in their garages, bedrooms and condos are doing exactly that ... with some of them making hundreds of thousands of dollars selling short slices of music on a new Amazon.com for musical mixes, Airbit.

“After using Airbit for a while I started making six figures,” says US music producer Tone Jonez.





Technology has enabled the disaggregation of work, as any of today's gig economy workers at TaskRabbit, Uber, Fiverr, Airbnb or Postmates can tell you. Even knowledge work can be sliced up, divided into tiny segments and sent out to Amazon's Mechanical Turk, where tens of thousands of semi-employed people work on pieces of your jobs.
Music -- and art in general -- seem different. Platforms like Airbit, however, are changing that.

The site allows producers to create, sell, and distribute their beats. According to CEO Wasim Khamlichi, there are now 300,000 producers on the platform, and they have collectively made over $20 million.

That means, of course, that the average producer makes only $66 or so ($20 million spread across 300,000 producers).
But others do far better: "Purps" says he's made $12,000 from a single beat. The L.A.-based producer and songwriter contributed to hit singles on a recent album by Migos, a hip-hop group from Atlanta, Georgia. Their latest album, Culture, went platinum, and Purps contributed to two of the songs, which have over 10 million views and listens combined. It helps that he has over 150,000 Twitter followers and self-promotes on social media.

"Let's just say I have never had a regular job," he told me via email. "Selling beats has been my career for over 10 years."

Others are making money as well. Tommaso and Alessandro Pinto, producers in Germany who create music under the DopeBoyzMusic label, sold a track called Purple Clouds that has 1,000 leases, netting them $15,000. Some of the best-selling artists on Airbit's top charts, Khamlichi says, are making a very good living indeed: $100,000/year or more.
While GarageBand might have popularized making music for the masses, serious artists use serious tools, like FL Studio.

And it's not all fun and games, top producers say.
Beats sales platform Airbit. Photograph: John Koetsier

"As cliche as it is, it took time, patience and hard work," said Ric and Thadeus, a production team from a small town in Kentucky. "There is a process to it. You have to establish yourself and slowly but surely build a following and customer base. You have to be willing to try just about everything and figure out what works. If you do that, it’s possible to make a living doing this."

The interesting thing for me is this:
How many millennials listening to hip hop and rap know that their music isn't necessarily created by the stars with their pictures on the albums, but is often assembled, piece by piece, into a final product, in a musical production line?
My guess is: precious few.

And, if you've ever wondered why much of club music, hip hop and rap tends to sound the same, it might just be because of over-reliance on beats that hundreds of other artists have used too.

Thursday, August 3, 2017

How To Create Sound For Commercials - Some Tips From Award Winners

HOW TO CREATE SOUND FOR COMMERCIALS – INSIGHTS FROM AN AWARD-WINNING SOUND TEAM:

How do you create sound for commercials? Here, the sound team from Factory Studios shares insights and approaches behind the sound for some of their critically-acclaimed work:

Written by Anthony Moore, Dan Beckwith, Mark Hills & Phil Bolland. Videos and photos courtesy of Factory Studios




Established in 1997, Factory is an award-winning sound design and audio facility based in London. With a highly skilled team of sound designers headed up by Founding Partner and Creative Director, Anthony Moore, Factory has created some of the most revered and awarded commercial work of recent years.

With a passion for promoting the craft of creative sound design, Factory’s work has been globally recognised by the likes of the Cannes Lions, BAFTA, D&AD, Clios, Music & Sound Awards and the British Arrows to name but a few.

Projects of note include an impressive body of work created for Honda with films such as ‘Hands’, ‘Ignition’, ‘Paper’ and ‘The Other Side’. The company has also been involved with every John Lewis Christmas campaign since the launch of the iconic ‘Long Wait’ back in 2011. Last year saw Factory working as part of the team that helped to create the stunning ‘We’re The Superhumans’ campaign for Channel 4.

As Factory celebrate their 20th anniversary, the sound design specialists have a whole host of interesting projects lined up for the remainder of 2017. Factory’s new Dolby Atmos suite is ever popular as clients begin to realise the true potential of immersive sound design. The company have also pushed forward with their long-form work and have two major feature films in production which are due for release in 2018.

A Sound Effect caught up with Factory Sound Designers Dan Beckwith, Mark Hills and Phil Bolland to find out more about their work, their processes and what it takes to make award-winning sound for advertising…


Dan Beckwith – Sound Designer, Factory




‘We open on an underwater shot in a large indoor swimming pool. We wait anxiously. Through the still, reflective surface of the water, we can make out a person ready to dive in. Suddenly, as the diver crashes into the water, we are transported through a series of currents via flashing imagery until the diver emerges, floating peacefully in an otherworldly landscape.’


This is the kind of imagery that can be conveyed in a script or a director’s treatment for a TV commercial.

At Factory, we always encourage clients to involve us as early as possible on a project. Being able to see the script allows us to start forming our own ideas on sound design, which can then be fed into the creative team as they prepare to shoot. More often than not, the projects that think about sound from their inception sound pretty amazing by the end. From the moment we are briefed by our client, we immediately start to think about creating our soundscape…

What sound design techniques will we use?
Will the sound design lead the story?
Will the piece involve music?
Will I need to record locations, Foley, bespoke effects, or perhaps our Head of Transfer talking with his head in a bucket of water?

With every job at Factory, we always look to meet with the creative team and director to talk through their thoughts on sound and their overall vision for a project. This gives us a chance to explore the sound design possibilities together and create a plan to make their project sound as amazing as it possibly can. Because that’s what it all comes down to, making your client excited about their sound and the process ahead.


Exploring sounds and ideas is often what makes a project great

From here, we begin to schedule the job with our bookings producers. It’s vital that we allocate the correct amount of time required to create our sound design. We often encourage clients to allow us the time to experiment, especially on more abstract sound design briefs. Exploring sounds and ideas is often what makes a project great. Never be afraid to get it wrong before you get it right. Experimentation can often lead to better, more exciting concepts.

At Factory, we always encourage collaboration across our work. This is why you will often see two sound designers allocated to a project. This methodology allows for greater creativity and more flexibility on a job. As the old adage says, four ears are better than two!

We have recently completed a new spot for Volkswagen entitled ‘The Button‘ with sound design and mix from myself and Anthony Moore.




VW – The Button, Audio by Factory Studios




We hit upon the idea of grading the sound design to match the era and feel of each scene

This project saw us working closely with the creative team as we set about creating six very different feeling movie scenes within one commercial. With a brief centered upon bringing to life the genres of Sci-Fi, espionage, action-thriller, adventure, blaxploitation and horror, we hit upon the idea of grading the sound design to match the era and feel of each scene. Hence, the 1930s horror section was mixed in mono and then degraded using an old, worn-out tape effect. The Sci-Fi sequence was built to sound grand and futuristic, whilst the blaxploitation scene was warmed up with some tape fuzz and a cheeky Wilhelm scream thrown in for good measure. Having our creative team on board and excited about this concept from the outset was invaluable to the success of the project.
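As a toy illustration of that kind of era "grading" (the actual chain Factory used is not detailed here), the Python sketch below mixes a clip to mono, rolls off the high end and adds a little hiss. The libraries, file names and parameter values are all assumptions:

```python
# Illustrative sketch of grading audio toward a worn 1930s feel:
# mono mixdown, limited bandwidth, and a touch of hiss.
# Assumes numpy, scipy and soundfile; parameters are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

audio, sr = sf.read("scene.wav")  # hypothetical input file

# 1. Mono mixdown, as early film soundtracks were.
if audio.ndim == 2:
    audio = audio.mean(axis=1)

# 2. Low-pass filter to mimic the narrow bandwidth of old optical sound.
b, a = butter(4, 3500 / (sr / 2), btype="low")
audio = lfilter(b, a, audio)

# 3. A little noise for tape/print hiss.
audio += 0.003 * np.random.randn(len(audio))

sf.write("scene_1930s.wav", audio.astype(np.float32), sr)
```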







Mark Hills – Sound Designer, Factory




As a Sound Designer at Factory, I am very aware of how music can be one of the most powerful cinematic devices available to us. It has the power to dictate the story, emotion and pace of a film. If used correctly, music has the potential to create a lasting impression with an audience for years to come.


In November 2011, John Lewis released the now iconic Christmas TV commercial ‘The Long Wait’, with the sound lovingly brought together here at Factory. John Lewis had been releasing popular Christmas adverts for many years, but this time around something was different.




John Lewis – The Long Wait, Audio by Factory Studios



The film was a phenomenal success for a great number of reasons, but one integral component that people couldn’t stop talking about was the music. A hauntingly beautiful cover of The Smiths’ ‘Please, Please, Please, Let Me Get What I Want’ by Slow Moving Millie was absolutely fundamental to the success of the campaign.

Not only did it push the John Lewis Christmas ads to the forefront of popular culture, but it also inspired a new genre of advertising which has been influencing the visual and musical styles for brands ever since. As such, the choice of song for the John Lewis Christmas campaign seems to create as much hype and anticipation as the film itself.

So, knowing its potential, how do we ensure that music is given the full care and attention it deserves in a TV commercial?

On certain jobs, we are fortunate enough to work with composed tracks specifically written for the film, where the music follows the action and fits perfectly with the content on screen. However, more often than not, we are working with existing pieces of music which require intricate editing.


Music editing is one of the most valuable and important aspects of our job

Music editing is one of the most valuable and important aspects of our job. Taking an existing piece of music and cutting it to work with the story and feel of the film is where things get really fun. In ideal scenarios, we’re able to mix the music from supplied stems which provides us with an even greater level of creative freedom.

Stem mixing affords us the opportunity to pick and choose what elements of the music we want to hear, or even remove the parts we do not. It allows us to create more complicated edits seamlessly and it opens up a world of possibilities for surround sound mixing.
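At its simplest, stem mixing is just summing individually weighted parts, as in this minimal sketch. The stem names and gain values are hypothetical, and real stem mixes happen in a DAW rather than a script:

```python
# Minimal sketch of stem mixing: sum individually weighted stems into
# one mix. Assumes soundfile, and that all stems share the same length
# and sample rate; names and gains are hypothetical.
import soundfile as sf

stems = {"drums.wav": 1.0, "bass.wav": 0.9, "vocals.wav": 1.2, "pads.wav": 0.0}

mix, sr = None, None
for path, gain in stems.items():
    audio, sr = sf.read(path)
    part = audio * gain          # a gain of 0.0 removes a part entirely
    mix = part if mix is None else mix + part

sf.write("remix.wav", mix, sr)
```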

With the introduction of Dolby Atmos we can literally surround and immerse the audience completely in music, which is exactly what we did with our recent Vue ‘This Is Not A Cinema‘ project. By working with stems of ‘The Rift’ by Solomon Grey, we were able to creatively pan pads and musical motifs through the cinema to totally immerse the viewer in the experience. Our aim was for the sound to provoke a physical reaction from our audience through this use of movement and some clever frequency manipulation. On watching the first few playbacks, our client told us that the ‘hairs on the back of the neck’ were indeed tingling! Job done.




Vue – This Is Not A Cinema, Audio by Factory Studios



At Factory, we always strive to go further than just placing a piece of music over a film and making it match the timings. It’s incredibly satisfying to sound design to the rhythm of a track; or to take a sound effect and transpose it to match the key of the music. In a busy mix, tricks like these can often help the mix come together, as opposed to a multitude of layers merely sitting on top of each other.
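Here is a minimal sketch of that last trick, transposing a sound effect into the music's key with librosa's pitch shifter. The file names and the two-semitone shift are hypothetical examples:

```python
# Sketch: shift a sound effect into the music's key. Assumes librosa
# and soundfile; "whoosh.wav" and the shift amount are hypothetical.
import librosa
import soundfile as sf

fx, sr = librosa.load("whoosh.wav")

# Shift the effect up two semitones, e.g. from C into D to match the track.
fx_in_key = librosa.effects.pitch_shift(fx, sr=sr, n_steps=2)

sf.write("whoosh_in_key.wav", fx_in_key, sr)
```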


It’s this kind of attention to detail which, when all combined, creates a truly crafted piece of work

One of my favourite examples of how perfectly music and sound design can work together is Virgin Media’s ‘9.58’ commercial. The basis of the advert is to provide the audience with ten examples of just how fast 9.58 seconds really is. Starting with a gunshot from a starter pistol, the music, visuals and sound design all begin to synchronise to the rhythmic tempo of the digital stopwatch. The soundtrack continuously builds with the advert, allowing each scenario to give a slightly different twist on the music. At the end of every 9.58 seconds, the starter pistol fires again and we begin a new scenario. Throughout the film, individual sound effects are repeatedly timed to the rhythm of the beat which becomes a key motif throughout. It’s this kind of attention to detail which, when all combined, creates a truly crafted piece of work.
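As a quick worked example of that timing grid: only the 9.58-second scenario length comes from the advert; the 10 Hz stopwatch pulse below is an assumption for illustration.

```python
# Worked example: a starter-pistol hit every 9.58 s, with effects
# locked to an assumed 10 Hz stopwatch pulse inside each scenario.
SCENARIO_LENGTH = 9.58  # seconds, from Usain Bolt's 100 m world record
SCENARIOS = 10
TICK = 0.1              # assumed stopwatch pulse in seconds

pistol_times = [round(i * SCENARIO_LENGTH, 2) for i in range(SCENARIOS)]
print(pistol_times[:4])  # [0.0, 9.58, 19.16, 28.74]

# Sound effects inside the second scenario land on tick multiples.
effect_times = [round(pistol_times[1] + n * TICK, 2) for n in range(5)]
print(effect_times)      # [9.58, 9.68, 9.78, 9.88, 9.98]
```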




Virgin Media – 9.58, Audio by Factory Studios



The commercial world of sound design also sees us working regularly with voice artists. I’ve always been of the opinion that a good voiceover talks to the audience, not at them. O2’s ‘Follow The Rabbit’ commercial is a great example of this. Softly spoken, it invites you into its world and instead of forcing you to listen, it actually makes you want to listen.




O2 – Follow The Rabbit, Audio by Factory Studios




A great voiceover starts with the performance, and the best way to ensure that an artist achieves their best is to ensure they are comfortable and happy

A great voiceover starts with the performance, and the best way to ensure that an artist achieves their best is to ensure they are comfortable and happy. Talking through a script with them, addressing any potential issues or confusion, and being open to suggestions of how they want to work is vital. Once you’ve set your level and everybody is ready, you can pretty much press record and let them do the hard work.

Asking a voiceover artist to read with a smile or playing music lightly in the background can entirely change a performance. From a technical side, the sound engineer’s job is to ensure that the recording is clean, and to keep an eye out for issues such as noise being picked up on the mic, plosives and, very importantly, the timings. Wherever the mix is played, the goal is to ensure the voiceover sounds clean, clear and effective. After all, a voiceover is there to be heard.