Power Field Studio

Saturday, April 2, 2016

Quiz: What does your favourite music say about you?

First of all, thanks to CNN's Elizabeth Cohen for this article.
You're at a heavy metal concert. An electric guitarist grinds out the final chords of a loud, aggressive solo and smashes the guitar. Are you thinking, "That was epic!"? Or are you just glad the music finally stopped?
Or picture yourself at a coffeehouse as an acoustic guitarist strums note after relaxing note. Are you feeling warm and happy? Or do you just want to rip the strings right out of that folksy guitar?
Whichever one is you, psychologists have found that your taste in music says a lot about your personality. 
"People who are high on empathy may be preferring a certain type of music compared to people who are more systematic," said David Greenberg, a University of Cambridge psychologist. 
Greenberg has quizzed thousands of people, first giving them a written test to analyse their personalities and then finding out what types of music they prefer.
He found a correlation: Those who have a well-developed ability to understand thoughts and feelings in themselves and others -- so-called "empathizers" -- tend to prefer mellow music that evokes deep emotion. 
But the world is full of underlying patterns and systems, and those who can more easily identify these connections are "systemizers." Greenberg's research shows they prefer intense music that forms complex sounds. 
The theory, he said, is that empathizers are interested in music's emotional qualities and how it makes them feel, whereas systemizers are more intrigued by its structural qualities.
"They are focusing more on the instrumental elements, seeing how the music is mixing together," Greenberg said. "It's almost like a musical puzzle that they're putting together."

Music taste reflects personality

Systemizers lean toward jobs in math and science: the meteorologist who rapidly deciphers emerging weather patterns, the geologist who untangles eons of mystery about how a mountain formed.
Empathizers tend to be good listeners. They can put themselves in someone else's shoes. Systemizers can have an average or even high ability to do this, too -- they don't lack empathy per se -- but their systemizing abilities are even greater, Greenberg said. 
He found people who like both mellow music and intense music score about the same in empathizing and systemizing tests, indicating a "balanced" thinking style. 
"We are seeking music that reflects who we are, so that includes personality, that includes the way we think, and it may even be the way our brain is wired," Greenberg says.
One hypothesis: Listening to mellow music can make us feel sad, so our brain may release a pleasing hormone to soothe us -- and empathizers may get a bigger dose, since the region of their brain responsible for regulating the chemical's release is larger. Systemizers' brains are bigger in regions responsible for recognizing patterns, so when they hear intense or highly structured music they may prefer it for its complexity. 

Test your taste in music

Are you an empathizer or a systemizer? Or you could be balanced. Take our music quiz to find out.

About the songs

Greenberg and his team of psychologists found empathizers prefer music that inspires strong feelings, often sadness, such as Joni Mitchell's "Blue." These songs often feature themes of love, loss, relationships, heartbreak and nostalgia. Not only are the lyrics to Bill Withers' "Ain't No Sunshine" about melancholia and loss, but the song is also in a minor key that's typically associated with sad feelings. Neil Young's "Philadelphia" is about brotherly love and loss. The chorus of Adele's "Hello" is about the desire to communicate regret and sorrow. Ray Charles' "Georgia on My Mind" is also in a minor key and is about remembering a lost loved one through a sweet but sorrowful song.
Greenberg's team found systemizers prefer music that tends to be more energetic, often eliciting joy or even anger through charged lyrics and intricate patterns of notes. Led Zeppelin's "Black Dog" is about finding happiness with a woman. Hendrix's "Voodoo Child" and Rage Against the Machine's "Bulls on Parade" are aggressive songs about rebellion. "Welcome to the Jungle" by Guns N' Roses is an exhilarating rock anthem about the temptation of drugs. "Blitzkrieg Bop" by the Ramones is an animated, joy-inducing song written in a major key that's often associated with more positive emotions. 
Balanced music lovers tend to prefer songs across the spectrum.
Check the link and take the quiz.

Friday, April 1, 2016

The Three Major Players That Could Disrupt The Streaming Music Industry

Amazon
Aside from selling everything under the sun, Amazon actually has its own streaming music player: Amazon Prime Music, which is included with an Amazon Prime subscription, something tens of millions of people have signed up for.
The problem with the music service is that it’s not very impressive, and it certainly isn’t worthy of the largest store on the planet. Amazon Prime Music is lacking in almost everything, including its offerings. It’s safe to bet that as Prime membership grows, the company is looking into building up its in-house streaming platform. Right now, it’s not the sort of thing that anybody would actually pay for (if it were offered outside of the Prime program), but there is no reason why a company like Amazon couldn’t create such a product.

Samsung
Like Amazon, Samsung actually already has a streaming platform, but that doesn’t mean that the company is satisfied. Milk Music has been around for years now, but it is really losing the popularity battle. Samsung might be a huge company, but its internet radio service has never been able to gain any traction or pick up a substantial number of listeners, especially ones that are willing to pay to use it.
For some time, many in the industry were wondering if the tech giant was going to purchase Jay Z’s beleaguered Tidal platform and use that as a starting point, but it has been made very clear that such a deal is not in the works. So, if not Tidal, what will Samsung do to better its position in the streaming music world? It could snap up another competitor, revamp its existing product, or launch something entirely new. What will happen isn’t clear, but it needs to happen soon; otherwise, Samsung might not have a chance to catch up to the most established services. Samsung has the money to make something work here, and because of past failures, the determination is present as well.

SoundCloud
Millions of people already go to SoundCloud for their music listening needs, but the company is working on something big that will change it fundamentally. Right now, the music site doesn’t charge people anything to listen, and that’s problematic. The company is often in financial trouble, and record labels and artists around the world are often upset about people uploading their music and royalties not being delivered.
It has been known for some time that SoundCloud is working on turning itself into a more legitimate streaming service, though it isn’t clear what that will look like or when it will arrive. Since it already has millions of loyal users, there is reason to believe that many of them will at least be willing to try out whatever proper service SoundCloud comes up with, which is typically half the battle.



This glove lets you create music out of thin air

A new glove will turn even those most deficient in musical talent into rock stars.
Austin, Texas-based company Remidi made a glove, dubbed the T8, that allows you to create music simply by making gestures with your hand and fingers.
The glove works in tandem with a motion sensor bracelet, allowing you to create sounds by moving your fingertips or your entire hand. You can tap any surface and it will sound like you're playing on a keyboard.
Remidi recently surpassed their Kickstarter goal of $50,000 by raising $137,326. You can currently pre-order it on their website.
There are eight sensors embedded in the glove — three in the palm and one in each fingertip — that work with the bracelet to detect what direction your hand is moving and at what speed. The sensors can also detect how hard your fingertips press down onto a surface.
Here's how it works: you simply put on the glove and bracelet and open the Remidi app. Through the app, you can program what sound you want to play when you make a specific movement. So, you could tell the glove to play a particular musical note when you press your pinkie on a surface.
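To make the idea concrete, here is a minimal sketch of a gesture-to-note mapping like the one the app is described as providing. Everything in it is a hypothetical illustration: the sensor names, the MIDI note numbers and the pressure-to-velocity scaling are assumptions, not Remidi's actual API.

```python
# Hypothetical mapping of the glove's eight sensors (three palm, five
# fingertips) to MIDI notes. Note choices are arbitrary illustrations.
SENSOR_TO_NOTE = {
    "palm_1": 48, "palm_2": 50, "palm_3": 52,   # C3, D3, E3
    "thumb": 60, "index": 62, "middle": 64,     # C4, D4, E4
    "ring": 65, "pinkie": 67,                   # F4, G4
}

def sensor_event_to_midi(sensor, pressure):
    """Turn a (sensor, pressure) reading into a (note, velocity) pair.

    pressure is assumed normalized to 0.0-1.0; MIDI velocity is 1-127.
    """
    note = SENSOR_TO_NOTE[sensor]
    velocity = max(1, min(127, round(pressure * 127)))
    return note, velocity

# Pressing the pinkie firmly plays G4 at a high velocity.
print(sensor_event_to_midi("pinkie", 0.9))  # → (67, 114)
```

So "tell the glove to play a particular musical note when you press your pinkie on a surface" amounts to one entry in a lookup table like this, with pressure shaping how loud the note sounds.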
You can then record songs in real-time, on-the-go, and trust that they'll all be saved onto the app. The glove can send recordings over WiFi or Bluetooth.
You can also play music using Remidi over a background track or mix to make songs more engaging.
Watch it in action:

Thursday, March 31, 2016

YouTube Says It Pays Plenty Of Royalties


First of all, thanks to my friend Bobby Owsinski for this article.

Many fingers are being pointed at YouTube for not contributing much to label and artist bank accounts despite the enormous number of streams it generates.

For instance, YouTube claims that it had 50% of the 317 billion streams last year, yet paid only a fraction of what the paid tier from Spotify paid.

How much? We don't have the exact breakdown, but a combination of YouTube, SoundCloud, and all the ad-supported tiers from all streaming services accounted for $385 million in the U.S. in 2015.

Premium tiers of Spotify, Apple Music, Google Play and others amounted to $1.22 billion last year.

While everyone is disgruntled with YouTube for paying such low rates, its response has been that it has paid out over $3 billion to the music industry -- a figure that is deceiving in that it covers the service's lifetime, not just last year.

The fact of the matter is that YouTube is still the go-to service for most people who want to listen to music, yet it pays the least to artists, songwriters, labels and publishers.

Yet the company has the music industry over a barrel as it holds all the leverage. Whether an artist wants their music there or not, chances are some fan is going to upload it, so it's always going to be available, and the price is still right at free.

Unfortunately, don't expect this dynamic to change soon.

Tuesday, March 29, 2016


Why Classic Rock Isn’t What It Used To Be

First of all thanks to  for this article.

Led Zeppelin is classic rock. So are Mötley Crüe and Ozzy Osbourne. But what about U2 or Nirvana? As a child of the 1990s, I never doubted that any of these bands were classic rock, even though it may be shocking for many to hear. And then I heard Green Day’s “American Idiot” on a classic rock station a few weeks ago, and I was shocked.
It was my first time hearing a band I grew up with referred to as “classic rock.” Almost anyone who listens to music over a long enough period of time probably experiences this moment — my colleagues related some of their own, like hearing R.E.M. or Guns N’ Roses on a classic rock station — but it made me wonder, what precisely is classic rock? As it turns out, a massive amount of data collection and analysis, and some algorithms, go into figuring out the answer to that very question.
No one starts a band with the intention of becoming classic rock. It’s just sort of something that happens. Figuring out which genre a band fits into — is it techno or house? — has always been a tricky part of the music business. Identifying what’s classic rock is particularly challenging because it’s a constantly moving target, with very different kinds of music lumped together under the same banner. How the people who choose what music you hear — whether on the radio or an Internet streaming service — go about solving this problem reveals a deep connection between data and music.
To see what the current state of classic rock in the United States looks like, I monitored 25 classic rock radio stations operating in 30 of the country’s largest metropolitan areas for a week in June. The result, after some substantial data cleaning, was a list of 2,230 unique songs by 475 unique artists, with a total record of 37,665 coded song plays across the stations.
I found that classic rock is more than just music from a certain era, and that it changes depending on where you live. What plays in New York — a disproportionate amount of Billy Joel, for example — won’t necessarily fly in San Antonio, which prefers Mötley Crüe. Classic rock is heavily influenced by region, and in ways that are unexpected. For example, Los Angeles is playing Pearl Jam, a band most popular in the 1990s, five times more frequently than the rest of the country. Boston is playing the ’70s-era Allman Brothers six times more frequently.
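A "played five times more frequently than the rest of the country" figure is just the ratio of an artist's local play share to that artist's share everywhere else. Here is a rough sketch of the computation; the play counts below are invented for illustration, not taken from the actual data set.

```python
def disproportionality(local_plays, local_total, national_plays, national_total):
    """How many times more often an artist airs on one station
    compared with all the other stations combined."""
    local_share = local_plays / local_total
    rest_share = (national_plays - local_plays) / (national_total - local_total)
    return local_share / rest_share

# Hypothetical: an artist gets 100 of a station's 2,000 plays,
# but only 350 of the 37,665 plays recorded everywhere.
ratio = disproportionality(100, 2000, 350, 37665)
print(round(ratio, 1))  # → 7.1
```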
To put today’s classic rock on a timeline, I pulled the listed release years for songs in the set from the music database SongFacts.com. While I wasn’t able to get complete coverage, I was able to get an accurate release year for 74 percent of the 2,230 songs and 89 percent of the 37,665 song plays. The earliest songs in our set date back to the early 1960s; the vast majority of those are Beatles songs, with a few exceptions from The Kinks and one from Booker T. and the MGs. A large number of songs appeared from the mid-’60s through the early ’70s. Classic rock peaked — by song plays — in 1973. In fairness, that was a huge year — with the release of Pink Floyd’s “Dark Side of the Moon” (an album of classic rock staples), Led Zeppelin’s “Houses of the Holy” and Elton John’s “Goodbye Yellow Brick Road” — but the trend steadily held for the rest of the ’70s and through the mid-’80s.
The 10-year period from 1973 to 1982 accounts for a whopping 57 percent of all song plays in the set. Besides a small trickle of music from 1995 onward — a trickle to which the Green Day song that inspired this article belongs — the last year to make an actual dent in the listings is 1991. That’s largely due to releases by Nirvana, Metallica and U2, the groups that make up the last wave of what is currently considered classic rock.
But clearly it’s not just when a song was released that makes it classic rock. Popularity matters, as does a band’s longevity, its sound and a bunch of other factors. To find out why some artists are considered classic rock, I spoke to Eric Wellman, the classic rock brand manager for Clear Channel, which owns nine of the 25 radio stations in our data set. He’s also the programming director at New York’s classic rock station, WAXQ. Wellman said release years have nothing to do with what makes a song “classic rock”; the ability of the genre to grow based on consumers’ tastes is one of the things that’s given it such longevity.
In fact, radio stations are using data to make their selection decisions. Wellman said any radio company with the resources conducts regular studies in its major markets to find out what its listeners consider classic rock. And so it’s you, the consumer, who’s helping to define the genre.
“The standard in the industry these days is an online music test or an auditorium music test where you just gather a sample and have them rate songs based on the hooks — the most familiar parts of the song — and you just get back a whole slew of data,” Wellman said. The stations find a cluster of people who like the music that makes up the core of classic rock, and then find out what else they like. They like R.E.M.? Well, R.E.M. is now classic rock. “It’s really that simple,” Wellman said.
So here’s what consumers consider classic rock. These were the most-played songs across the 25 radio stations I studied during my week-long observation period:
The top 25 most frequently played artists — the likes of Led Zeppelin, Van Halen and the Rolling Stones — together account for almost half of the spins on classic rock stations in the U.S. Another way of saying that is 5 percent of all the bands played on these stations made up nearly 50 percent of the song plays — which shows that there is at least a classic rock core.
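The concentration claim — a small fraction of artists accounting for half the plays — is a simple cumulative-share calculation. A quick sketch, using invented play counts:

```python
def top_share(play_counts, k):
    """Fraction of all plays accounted for by the k most-played artists."""
    ranked = sorted(play_counts, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# 20 hypothetical artists; the top 5 (25 percent of the bands)
# dominate the airplay.
plays = [900, 800, 700, 600, 500] + [100] * 15
print(round(top_share(plays, 5), 2))  # → 0.7
```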
While it’s cool to quantify the most essential classic rock musicians out there, what’s even cooler is comparing each station’s mix to what’s playing in the rest of the country. Doing so demonstrates that the very definition of classic rock can change just by going over to the next city.
This list looks at the 25 most-played artists in our set and then finds the city where they’re played most disproportionately. So if you can never get enough of Led Zeppelin, pack it up and move to New York, where 7 percent of all songs played on WAXQ are by Zep. If you’re like me and can’t stand the Eagles or Tom Petty, stay the heck out of Tampa, where WXGL appears to specialize in them.
Every station has its own character, informed by its audience’s preferences. Classic rock stations do a massive amount of market research to understand who their listeners are and to figure out what songs to play, Wellman said. For example, in the South, listeners like the rock as hard as it comes. According to Wellman, immigration plays a role. “The Hispanic influx across the southern United States vastly changes the rock landscape,” he said. “The common conventional wisdom is that Hispanics who listen to English-language rock like a significantly harder brand of English-language rock. In markets where that is an influence, you’ll see that.”
Migration within the 50 states also plays a role. Billy Joel is bigger in New York than anywhere else in the country, but he’s also the most disproportionately played artist in Miami. Why? Think about who might be listening to classic rock stations in Miami: retired New Yorkers! I looked at each city in our set to find the artist its classic rock station played most disproportionately. Essentially, this map shows the local and regional preferences in classic rock. (Click on the image to see it larger.)
Cities where rock stations have longer legacies, like Detroit and Philadelphia, tend to prefer an older style of classic rock — think J. Geils Band and the Beatles — while cities without that history tend to favor the more contemporary set.
But do radio stations rely at all on the institutional knowledge of their DJs to decide what to play?
Nope. The role of the song-picking DJ is dead. “I know there are some stations and some companies where if you change a song it’s a fireable offense,” Wellman said, cavalierly ruining the magic.
While all of my data collection and analysis focused on radio stations, which have been around for decades, I decided it was also worth reaching out to digital-native startups and streaming services to see how they’re defining classic rock. The Echo Nest, which is owned by Spotify, uses data to generate song recommendations. A huge part of this work entails placing artists into genres. At its very core, the methodology used by The Echo Nest isn’t all that different from what Clear Channel does: Pick a center of the universe — if we’re talking classic rock, that would be Zeppelin, Floyd, etc. — and then find out what orbits the center. Radio stations figure this out based on interviews and music tests. Glenn McDonald, who’s in charge of developing the genre algorithm at The Echo Nest, goes about it using math.
“We start from a set of artists and use a lot of complicated math to extrapolate the rest of the universe if those artists are the center of it,” McDonald said. In order to define a genre, first McDonald has to find out the relationships among different artists. To do that, he relies on a mountain of data that The Echo Nest collects from users and elsewhere. “We have listening histories and machines that go read the web,” he said. “We read charts, we read reviews, we read blog posts, we read news articles, Wikipedia entries, pretty much anything we’re legally allowed to read.”
These inputs are then interpreted into a relational map among artists showing how similar each is to another — essentially a map of the musical universe. For example, if each artist was a point in space, McDonald would know how close that point is to every other point. And “genre” is what happens when McDonald sees a cluster of points form.
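A toy version of that map of the musical universe: represent each artist as a small feature vector and measure closeness with cosine similarity. The vectors below are made up for illustration; The Echo Nest's real inputs are far richer.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented three-dimensional feature vectors for three artists.
artists = {
    "Led Zeppelin": [0.9, 0.8, 0.1],
    "Pink Floyd":   [0.8, 0.9, 0.2],
    "Daft Punk":    [0.1, 0.2, 0.9],
}

# Zeppelin and Floyd sit close together; Daft Punk lands far away,
# so a clustering pass would put the first two in the same "genre".
print(cosine(artists["Led Zeppelin"], artists["Pink Floyd"]) >
      cosine(artists["Led Zeppelin"], artists["Daft Punk"]))  # → True
```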
In addition to the web-crawlers and listening histories, The Echo Nest uses sophisticated music-analysis software to figure out the qualities of different songs. McDonald looks at 13 dimensions when evaluating genre: tempo, energy, loudness, danceability, whether a song is more acoustic or electric, dense or spare, atmospheric or bouncy, and so on. Some genres are defined by one of these dimensions in particular — electronic music with a very finite range of beats per minute, say — and some are painted in broader strokes, like classic rock.
Classic rock, McDonald said, has a much wider range of tempo and rarely is powered by a drum machine. The Echo Nest can detect whether an actual person is behind a drum set based on minor imperfections in tempo, or beats that a drum machine can’t mimic. “The timing will be very human and unmechanical,” a dead giveaway, he said.
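That "human timing" test can be approximated by measuring how much the intervals between beats vary: a machine keeps nearly perfect time, a drummer doesn't. This is only a sketch of the idea; the jitter threshold is an arbitrary assumption, not The Echo Nest's actual detector.

```python
import statistics

def looks_programmed(beat_times, jitter_threshold=0.005):
    """True if the spread of inter-beat intervals (in seconds)
    is below the threshold, i.e. suspiciously mechanical."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return statistics.stdev(intervals) < jitter_threshold

machine = [0.0, 0.500, 1.000, 1.500, 2.000]   # metronomic
human = [0.0, 0.512, 0.989, 1.521, 1.998]     # slightly loose

print(looks_programmed(machine), looks_programmed(human))  # → True False
```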
But McDonald agrees with Clear Channel’s Wellman: The classic rock genre is always changing, and that’s one of the things that makes it so hard to define.
So what’s next for classic rock? Should I steel myself for One Direction’s eventual airtime on the WAXQ of tomorrow?
It’s going to come down to economics, Wellman said. As baby boomers and Gen X-ers age out of the key advertising demographic over the next five to 10 years, one of two things will happen. Either advertisers will chase them, or classic rock will start to skew younger.
CORRECTION (July 8, 9:35 a.m.): A footnote in an earlier version of this story listed the incorrect call numbers for the classic rock station we monitored in Houston. It was KGLK, not KKRW.
Listen to the most-played classic rock hits and other songs on FiveThirtyEight’s Classic Rock Playlist on Spotify:


Monday, March 28, 2016

7 Music Theory Lessons from the Main Theme of Final Fantasy VII - Part III

Lesson V: Borrowed Chords

Why this lesson is important:

As stated earlier, a standard piece of music will be composed almost entirely of the same seven chords built from the same seven pitches.  While you can build an entire career within those constraints, a little extra sophistication lets you bring more color to your music by using borrowed chords.  Basically, this gives you more chords to choose from when harmonizing a melody.

The Lesson:

A borrowed chord is a chord borrowed from the key parallel to the one you’re writing in.  Parallel keys are major and minor keys that share the same root note.  E major and E minor are two different keys that use two different scales, but they both use E as their root note.  Because they use different scales, they use different pitches and – since chords are built from the pitches of the scale – they contain different chords as a result.  The parallel key to E major is E minor, which contains the chords E minor, F# diminished, G major, A minor, B minor, C major, and D major.  Why not ask your good neighbor, E minor, if you can borrow a cup of sugar and a C major chord for a little while?  That’s what neighbors are for.
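The chord lists in this lesson can be derived mechanically: stack thirds on each scale degree and classify the result. Here is a small sketch that builds the seven diatonic triads of a key from its scale (sharp-only spellings, which happen to work for the keys discussed here):

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # major-scale intervals in semitones
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural-minor intervals in semitones

def triads(root, mode):
    """The seven diatonic triads of a key, e.g. triads('E', MINOR)."""
    tonic = NAMES.index(root)
    scale = [(tonic + step) % 12 for step in mode]
    chords = []
    for i in range(7):
        r, third, fifth = scale[i], scale[(i + 2) % 7], scale[(i + 4) % 7]
        quality = ((third - r) % 12, (fifth - r) % 12)
        suffix = {(4, 7): "", (3, 7): "m", (3, 6): "dim"}[quality]
        chords.append(NAMES[r] + suffix)
    return chords

print(triads("E", MAJOR))  # → ['E', 'F#m', 'G#m', 'A', 'B', 'C#m', 'D#dim']
print(triads("E", MINOR))  # → ['Em', 'F#dim', 'G', 'Am', 'Bm', 'C', 'D']
```

The second line of output is exactly the set of chords E major can borrow from its parallel minor, including the C major chord mentioned above.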
Remember in the last lesson when I said that Uematsu uses chords in E Major for the main sections of the piece with the exception of 1 bar?  In that single bar, he adds a little magic by harmonizing the melody with chords borrowed from E minor.  Boom.  Magic.  Give it a listen again and pay close attention to the chords underneath the melody:

So, why is this so special?  There are a couple of reasons that this particular usage of borrowed chords is a fantastic example.  Assuming that the melody was composed before the chord progression, Uematsu – whether he noticed or not – could’ve easily found himself painted into a corner if he didn’t know that borrowing chords was possible.  If you follow through a textbook lesson for learning to harmonize a melody, you’ll first be taught to harmonize with chords that contain the melody’s pitch at any given time.  If the melody is playing a C, the triad chord you choose has to contain a C.  In the example below, if Uematsu’s knowledge of harmony didn’t reach any further than that lesson, FFVII’s theme may have sounded like this…

Or this…

Bleh.  Thank goodness for those borrowed chords, right?  If Uematsu chose to harmonize his existing melody with chords containing the melody’s pitches, he would’ve been limited to a small handful of options – none of which produce a particularly strong or remarkable chord progression.  Happily, a lot of the rock music that likely influenced him used this technique and other similar tricks to keep things interesting.

How you can use this lesson:

Take a look at a piece – either an old one, a new one, or the next one you haven’t even started yet.  Figure out which key it’s in, and then look up what chords are available from the parallel key.  Remember – if you’re writing in C major, the parallel key is C minor.  A quick Google search will help you find a list of the chords available in that parallel key.  Next, figure out which chords you’ve been using behind your melodies and experiment with substituting chords from the parallel key – especially where you feel the chord progression could stand to be a little stronger.

Lesson VI: Common-tone Modulation

Why this lesson is important:

You could write music for years without ever using modulation, but adding a modulation – or key change – to a piece of music creates a very dramatic effect.  You can use a modulation to create an epic, rising effect (see Lesson VII).  Alternatively, you can use a modulation to go from a major (happy-sounding) key to a minor (sad/ominous-sounding) key, as Uematsu did in Lesson III, where he uses a deceptive cadence to pivot us into a minor key.  Regardless, the use of modulation in music is not only common amongst great composers and songwriters – it’s fun and interesting!  A very easy-to-use technique for modulation is called common-tone modulation, so we’ll start there.

The Lesson:

Imagine that your favorite TV show has just aired its series finale, and the network has decided to produce a spin-off show that – while belonging to the same genre as the original show – is very different from what you’re used to.  How do they pull off these new shows without losing the entire audience from the original series, thus avoiding the need to start over from scratch?  By leveraging a character who existed in the first series and will continue on in the second series.  This character provides an anchor of familiarity and a point of reference for the new series, and a common-tone modulation works in a similar fashion.
Modulation, as I stated earlier, occurs when the tonal center of a piece of music changes.  This results in the use of a new root note, scale, and set of chords as per Lesson II above.  While this effect can be totally awesome to use in your music, you shouldn’t just dump your listeners into a new key without an anchor or some sense of familiarity.  That would be very jarring and unpleasant to listen to, even if the average listener can’t articulate why it’s unpleasant.  If you don’t want your modulation to sound like you accidentally played a wrong chord and decided to run with it, you need to use an anchor to pivot your piece into the new key.
In a common-tone modulation, you leverage a repeated or sustained note from the original key as a bridge to carry the music into a new key that also contains that note.  For example, if you’re in the key of C major and ending a section with a C major chord, you may modulate into G major by way of the G note, which is found in both the C major chord and the G major chord.  In the OST version of Final Fantasy VII’s theme, a common-tone modulation is used to raise the piece from E major into G major, using B as the common tone.  You can listen to a simplified version of this modulation in the following video, in which I emphasise the common tone by playing quarter notes right before the modulation occurs:

Did you hear how natural this transition sounded?  It’s subtle, but it’s enough of an anchor to make the modulation enjoyable to your ear.
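The anchor note is easy to find by spelling out the two chords on either side of the modulation and intersecting them. A minimal sketch (a major triad is the root plus 4 and 7 semitones, sharp-only spellings):

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_triad(root):
    """The three note names of a major triad on the given root."""
    tonic = NAMES.index(root)
    return {NAMES[(tonic + step) % 12] for step in (0, 4, 7)}

# The E major and G major chords share exactly one note: B,
# the common tone Uematsu uses for this modulation.
print(major_triad("E") & major_triad("G"))  # → {'B'}
```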

How you can use this lesson:

This technique isn’t rocket surgery, but it’s very effective so long as you’re using it very deliberately.  To begin using this technique, I would recommend choosing (or writing) a piece of music with a strong melody or a very catchy ostinato (think Jenova).  This technique works well with both looping- and scored/cued music that may accompany a scene, trailer, or event in the game.  Because video game music is short-form music by nature, the modulation point should be chosen very carefully and would best be used to transition to a new section or to repeat an existing section of music as Uematsu did with the above excerpt.  By modulating and repeating the exact same musical material in a new key, an emotionally lifting effect is achieved while content is recycled in an interesting way.
Finally, keep in mind that – because most video game music loops – if you modulate to a new key you will ultimately have to modulate back to the original key at some point.  Make sure to plan/write accordingly!

Lesson VII: Common-chord Modulation

Why this lesson is important:

As discussed in Lesson VI, modulation creates variety in your music – which is especially important in music that will be heard repeatedly throughout gameplay.  The more tools you have at your disposal to keep it interesting for the player, the better.  Common-chord modulation is another method for changing keys in your music, and if you’ve become comfortable with the other lessons in this post you have all of the knowledge you need to execute this technique effectively.

The Lesson:

A common-chord modulation is achieved by transitioning from the original key to the new key through a chord that occurs in both keys.  Just as a common-tone modulation uses a shared tone to anchor the listener through the modulation, a common-chord modulation uses a shared chord – called the pivot chord – to make the transition between keys.
For example, you’ll remember from Lesson II that the key of E major contains the following chords: E major, F# minor, G# minor, A major, B major, C# minor, and D# diminished.  If I wanted to modulate to the key of D major, I could use any chord that occurs in both keys as my pivot chord.  The key of D major contains the following chords: D major, E minor, F# minor, G major, A major, B minor, and C# diminished.  This gives us two possible options for our pivot chord – F# minor and A major – because these chords exist in both keys.
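Pivot-chord hunting is just an intersection of the two keys' chord lists. A small sketch, limited to major keys and sharp-only spellings:

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # major-scale intervals in semitones

def major_key_triads(root):
    """The seven diatonic triads of a major key, as a set of names."""
    tonic = NAMES.index(root)
    scale = [(tonic + step) % 12 for step in MAJOR]
    qualities = ["", "m", "m", "", "", "m", "dim"]   # I ii iii IV V vi vii°
    return {NAMES[scale[i]] + qualities[i] for i in range(7)}

# E major and D major share exactly two triads: the pivot-chord candidates.
print(sorted(major_key_triads("E") & major_key_triads("D")))  # → ['A', 'F#m']
```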
For this example, we’ll actually be looking at the same exact place in the music as we did in Lesson VI – but not the version found on the original soundtrack.  This time, we’re going to look at how that modulation occurs in the orchestral version.  Recall that in Lesson VI above, Uematsu uses a common-tone modulation to make the jump from E Major to G major on the OST version of the track.  On the Final Fantasy VII: Reunion Tracks album released in 1997, Uematsu collaborated with Shiro Hamaguchi to arrange this theme for a full orchestral performance.  It’s a gorgeous arrangement with some additional ear-candy built into it, including the new common-chord modulation from E Major to G major.
BUT, that’s not all.  The real magic is which chords they used as the pivot chords.  Remember the borrowed-chord example from Lesson V, where Uematsu borrows a bVI and a bVII chord from the parallel minor key to spice things up a bit?  I’m not sure whether this was by design or a happy coincidence made possible by the keys Uematsu chose for the original soundtrack, but they were able to use those borrowed bVI and bVII chords as the pivot chords!  It’s a little easier to digest if you see the Roman numeral analysis and hear the modulation in the video below:

See what they did there?  In E Major, that same bVI – bVII chord trick we’ve been hearing uses C major and D major chords.  The destination key of G major contains both of those chords (as its IV and V chords, respectively), so the same progression functions as bVI – bVII in E Major AND as IV – V – I in the new key of G major (an authentic cadence, as per Lesson III).  Mind.  Blown.  Effectively, they combine Lessons II, III, and V to pull off the common-chord modulation.  See accompanying illustration:
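That double meaning – the same chord wearing a different Roman numeral in each key – can also be checked programmatically.  This hypothetical Python sketch (the function name and degree table are my own, not from the post) names a major chord’s scale degree in a given major key, treating out-of-scale roots as borrowed, lowered degrees:

```python
# Sketch: name a major chord's Roman numeral relative to a major key,
# labeling roots outside the scale as lowered (borrowed) degrees.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_DEGREES = {0: 'I', 2: 'II', 4: 'III', 5: 'IV', 7: 'V', 9: 'VI', 11: 'VII'}

def roman_numeral(chord_root, key):
    """Scale degree of a major chord in a major key, e.g. 'bVI' for C in E."""
    interval = (NOTES.index(chord_root) - NOTES.index(key)) % 12
    if interval in MAJOR_DEGREES:
        return MAJOR_DEGREES[interval]
    return 'b' + MAJOR_DEGREES[interval + 1]   # lowered degree: bII, bIII, bVI, bVII

for chord in ['C', 'D']:
    print(chord, 'major =', roman_numeral(chord, 'E'), 'in E Major /',
          roman_numeral(chord, 'G'), 'in G major')
```

It confirms the trick: C major and D major are bVI and bVII in E Major, but IV and V in G major.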


How to use this lesson:

Choose a piece you’re working on, or one that you’ve already finished.  Decide where you’d like to place a modulation (perhaps repeat a section that already exists?), and use this Wikipedia page to identify the relative minor and closely related keys.  These closely related keys will be easier to modulate to, as they already share several common tones and chords.  While using borrowed chords to modulate to a new key is a neat trick, don’t attempt it until you’re comfortable with a basic common-chord modulation.
Next, all you have to do is pick a key you’d like to end up in.  Experiment by playing your melody/ostinatos in the original key followed immediately by the destination key.  Remember that each modulation will have to return to the original key if your music is looping, so you’ll have to modulate twice.
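If you’d rather not look up the closely related keys each time, they can be derived directly from the tonic.  Here’s a small Python sketch (my own helper, not from the post) that lists the standard five for any major key:

```python
# Sketch: list the closely related keys of a major key -- the keys whose
# signatures differ by at most one accidental, plus the relative minor.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def closely_related(tonic):
    """Closely related keys of a major key, by interval from the tonic."""
    i = NOTES.index(tonic)
    return {
        'relative minor': NOTES[(i + 9) % 12] + ' minor',   # vi
        'subdominant':    NOTES[(i + 5) % 12] + ' major',   # IV
        'dominant':       NOTES[(i + 7) % 12] + ' major',   # V
        'supertonic':     NOTES[(i + 2) % 12] + ' minor',   # ii (relative minor of IV)
        'mediant':        NOTES[(i + 4) % 12] + ' minor',   # iii (relative minor of V)
    }

print(closely_related('E'))
```

For E Major this gives C# minor, A major, B major, F# minor, and G# minor – all one easy hop away.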
Conclusion and Next Steps:
Phew!  Still with me?  That was a lot of information, and you should not try to implement all of these at once.  Get comfortable with one new technique until you’ve internalized it before moving on to the next one.  Just like in an RPG, it’s all about gradual progress and accumulating new skills, abilities, and Materia along the way.  Take your time, and have some fun with it.
Special thanks to: DJ Cutman, who noticed that the volume on the videos was pretty low and took it upon himself to boost the audio and send me the new versions to update the post.  You can listen to his epic remixes and be jealous of his website by clicking here.  Thanks, DJ Cutman! 

RIAA - Estatísticas de Vendas em 2015 Oficial


The Official RIAA 2015 Statistics Are Out

The RIAA has released its statistics for 2015 and, as always, there are some surprises. The thing to remember about the RIAA is that it works for the record labels (especially the majors), so you have to take some stats with a grain of salt. Here are some of the more noteworthy data points.
  • There was a very slight increase in the recorded music part of the business, with revenues of just over $7 billion, for an increase of 0.09%
  • Streaming accounted for more revenue than any other income stream for the first time, at 34.4% of income, while download sales made up 34%, physical sales 28.8%, and synch licensing 2.9% of total revenue.
  • Paid subscription revenue increased 52.3% to $1.22 billion, compared to $800.1 million in 2014, while ad-supported streaming revenue increased 30.6 percent to $385.1 million. All very good news!
  • Revenue from CDs, vinyl and DVDs of albums and singles fell another 10.1 percent to $1.9 billion (although that was less than predicted). CDs fell to $1.521 billion from $1.83 billion the year before based on 123 million CDs that were sold last year, which was down from around 143 million in 2014. 
  • Vinyl sales continued to soar, generating $423 million from 16.9 million album sales and roughly 500,000 singles, an increase of 31.8 percent.
Here's the catch - the RIAA's numbers reflect retail sales, which means they don't show how much the labels actually received for their music, since wholesale prices run from 65 to 70% of retail.
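To put that wholesale caveat in perspective, here's a quick back-of-the-envelope Python sketch (my own arithmetic, not the RIAA's, using the retail total above and the stated 65-70% wholesale range):

```python
# Sketch: estimate what the labels actually received, assuming wholesale
# runs 65-70% of the roughly $7 billion retail figure reported by the RIAA.
retail_revenue = 7.0e9  # just over $7 billion at retail, per the report

for wholesale_share in (0.65, 0.70):
    label_take = retail_revenue * wholesale_share
    print(f'at {wholesale_share:.0%} wholesale: ${label_take / 1e9:.2f} billion')
```

In other words, the labels likely took in somewhere around $4.5-4.9 billion of that headline $7 billion.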