Log

May 19, 2017

To Do

  • Perhaps this should be weighted max clique rather than max clique with weight as a secondary measure
  • Favor big cliques as opposed to long matches
  • Figure out how to include local maxima representing even short tagline choruses in the max clique calculations
  • Ignore non-word vocalizations via dictionary? (this could be one of the “flavors” of the algorithm)
  • Use phrase breaks to determine chorus end (and maybe beginning) (this could be another “flavor”)
  • Label more data
  • Introduce more parameters (use a multi-Venn diagram to show accuracy as a combination of features)
  • Do it all over for verses
  • Axon Website
    • link to pierre
    • link to CV
    • link to LinkedIn
    • link to Google Scholar profile
    • photos from grad poster thing
    • grad poster thing
    • video from Asher
    • video from grad student society about me
    • videos about bioinformatics discovery
    • all pubs with links
    • blog

Done

  • Decompress all of the mxl files, make changes directly to mxl files, starting with Michael Bublé, maybe even incorporate annotations that way (input them the same way I'm outputting them)?
  • Don't normalize aln matrices—pick static value as min threshold (actually let the GA decide it)
  • Make the minDistanceFromDiagonal a parameter to be learned
  • Implemented generic self-similarity alignment (sketched after this list)
  • Thoroughly annotated Twinkle and Rainbow for any repetition in any viewpoint
  • Use F-Score instead of accuracy for fitness function
  • Implement it for pitch, harmony, and lyrics
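
For the record, the fitness change means optimizing F1 = 2PR/(P + R) (precision P, recall R) rather than raw accuracy. And since "generic self-similarity alignment" is terse, here is a minimal Java sketch of the idea; the Viewpoint interface and the match scores are hypothetical names, not the actual module:

  import java.util.Arrays;
  import java.util.List;

  // Hypothetical sketch, not the real module: a viewpoint maps any pair of
  // events to a match score, and the self-similarity matrix is just that
  // score over all pairs. Repetitions show up as off-diagonal stripes.
  public class SelfSimilarity {

      public interface Viewpoint<T> {
          double match(T a, T b); // e.g., 1.0 for equal pitches, else 0.0
      }

      public static <T> double[][] matrix(List<T> events, Viewpoint<T> vp) {
          int n = events.size();
          double[][] s = new double[n][n];
          for (int i = 0; i < n; i++)
              for (int j = 0; j < n; j++)
                  s[i][j] = vp.match(events.get(i), events.get(j));
          return s;
      }

      public static void main(String[] args) {
          // The same viewpoint interface works for pitch, harmony, or lyrics.
          List<String> pitches = Arrays.asList("C", "D", "E", "C", "D", "E");
          double[][] s = matrix(pitches, (a, b) -> a.equals(b) ? 1.0 : 0.0);
          System.out.println(s[0][3] + " " + s[1][4]); // 1.0 1.0: repeated C-D-E
      }
  }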

May 5, 2017

Done

  • In the case of ties, use the one with the higher score
  • Find faster implementation of max clique solution
  • Why not “Honesty”? (I was writing over it with another Billy Joel song; the solution was to not substring the file name)

May 4, 2017

Done

  • Make website
  • Revise and resubmit ICCC paper
  • Implement alignment module for xmls
  • Implement genetic algorithm for finding params
  • Figure out how to label tagline choruses: solution was to actually label them as tagline choruses. Might have repercussions down the road when running Pop*.
  • Implement mechanism for identifying multiple choruses: solution was to find local maxima and do max clique to find mutually agreeing choruses
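
A hedged sketch of the local-maxima-plus-max-clique idea in the last item, with all names hypothetical: candidates are local maxima in the alignment scores, an edge means two candidates mutually agree, and the largest clique is the accepted set of choruses. The brute-force recursion below is exponential in the worst case, which is presumably why the May 5 entry lists finding a faster max clique implementation:

  import java.util.ArrayList;
  import java.util.List;

  // Hypothetical sketch: candidate choruses are local maxima in the alignment
  // score; adj[i][j] == true iff candidates i and j mutually agree. The
  // largest clique is the set of choruses the candidates jointly support.
  public class ChorusClique {
      static List<Integer> maxClique(boolean[][] adj) {
          List<Integer> best = new ArrayList<>();
          grow(new ArrayList<>(), 0, adj, best);
          return best;
      }

      static void grow(List<Integer> clique, int next, boolean[][] adj, List<Integer> best) {
          if (clique.size() > best.size()) { best.clear(); best.addAll(clique); }
          for (int v = next; v < adj.length; v++) {
              boolean compatible = true;
              for (int u : clique) if (!adj[u][v]) { compatible = false; break; }
              if (compatible) {
                  clique.add(v);
                  grow(clique, v + 1, adj, best);
                  clique.remove(clique.size() - 1);
              }
          }
      }

      public static void main(String[] args) {
          // Four candidates: 0, 1, 2 mutually agree; 3 conflicts with all.
          boolean[][] adj = {
              {false, true,  true,  false},
              {true,  false, true,  false},
              {true,  true,  false, false},
              {false, false, false, false}
          };
          System.out.println(maxClique(adj)); // [0, 1, 2]
      }
  }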

April 10, 2017

Done

Work on alignment paper

  • Compute all the events to be aligned up front instead of recomputing them on the fly—duh (see the sketch after this list)
  • Also make the computation more efficient by linearly traversing the data structure instead of repeatedly computing the access point
  • Keep testing that correct elements are being aligned
    • NOTE: aligning lyrics requires having the right lyrics (e.g., multiple verses represented using repeat signs). Having the right lyrics (as of now) requires knowing the segment type (i.e., for choruses you take whatever lyrics are there for each note; verses have to use the lyrics that match the repeat count). We need to implement something that, without knowing the segment type, infers whether any lyric will work or whether it has to be the lyric matching the repeat count of surrounding lyrics. OR just use the manually labeled segment type to extract lyrics from the XML; this isn't cheating because it's really a different problem.
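
The first two items amount to a standard refactor; a trivial, hypothetical sketch of its shape:

  import java.util.ArrayList;
  import java.util.List;

  // Hypothetical sketch of the refactor in the first two items: flatten the
  // score into one event list up front, so the alignment loops index into it
  // directly instead of recomputing a measure/beat access point per lookup.
  class EventPrecompute {
      static <T> List<T> flatten(List<List<T>> measures) {
          List<T> events = new ArrayList<>();
          for (List<T> measure : measures)
              events.addAll(measure);
          return events; // built once in O(n); then events.get(i) in the loops
      }
  }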

April 6-7, 2017

Done

Work on alignment paper

  • Finish alignment module
  • Test that correct elements are being aligned

Process Qualtrics data

  • Download data
  • Find script to process data
  • Process accuracy and confusion matrix

Make Pop* run deterministically

  • Create a global random number generator seed in the program args or configuration settings (sketched after this list)
  • Find all random number generators and set the seed
  • If that doesn't work, fix each one individually
  • Check that it works to rerun multiple times with same output
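
A minimal sketch of the first item, assuming a single process-wide generator is acceptable (class and method names hypothetical):

  import java.util.Random;

  // Hypothetical sketch: one process-wide seeded generator, so every
  // stochastic component draws from the same reproducible stream.
  public final class GlobalRandom {
      private static Random rng = new Random(); // unseeded by default

      // Call once at startup, e.g., from a --seed program arg or config entry.
      public static void setSeed(long seed) {
          rng = new Random(seed);
      }

      public static Random get() {
          return rng;
      }
  }

Every module then draws from GlobalRandom.get() instead of constructing its own new Random(), so two runs with the same seed should produce identical output, which is exactly the check in the last item.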

Finish Doctoral Consortium Paper

  • Add 4 references
    • Pachet
    • Toivanen
    • Magenta
    • ??
  • Quick read-through for grammatical errors

Finish Grad Expo Poster

  • Finish layout
  • Add words
  • Polish

January 5, 2017

I have successfully established a pipeline that generates MusicXML files which are then sung by HarmonyAssistant. I opted to switch the dependency order so that harmony and melody are generated before lyrics, based on the following rationale:

Originally I'd thought that lyrics should come first because they determine rhythm. But while the exact rhythms depend on the lyrics, the general melodic line does not: it is independent of the lyrics, as evidenced by the fact that you can sing multiple verses to the same melody. Thus we generate the melody first and then fine-tune it to the rhythms dictated by the lyrics.

What needs to happen next?

  • Finish the data processing module to correctly handle D.S. al Fine/Coda.
  • Allow the generator to train models on the xmls (this needs to be done without loading all xmls into memory—train all models on one song, then go to next)
    • For each model, filter on appropriate criteria (e.g., english lyrics only for lyrics model)
  • Get something to show the 673 class
  • See about doing mixed-order markov
  • See about conditioning on more variables than just previous tokens (e.g., the next note should depend on its position in the bar as well as on previous notes; see the sketch after this list)
  • Get data for melogen group
  • Alignment paper
  • Compare Human results on key recognition?
  • Human annotate rhymes in songs
  • Revamp key recognition paper?
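
On the two modeling items, training without holding every xml in memory and conditioning on more than the previous token, here is a hedged sketch: the Markov state is (previous pitch, beat in bar), counts accumulate one song at a time, and all names are hypothetical:

  import java.util.HashMap;
  import java.util.Map;
  import java.util.Random;

  // Hypothetical sketch: the Markov state is (previous pitch, beat in bar)
  // rather than just the previous pitch. observe() is called per note while
  // streaming one song at a time, so no song stays in memory after training.
  public class ConditionedMarkov {
      // state key -> (next pitch -> count)
      private final Map<String, Map<Integer, Integer>> counts = new HashMap<>();

      private static String key(int prevPitch, int beatInBar) {
          return prevPitch + "@" + beatInBar;
      }

      public void observe(int prevPitch, int beatInBar, int nextPitch) {
          counts.computeIfAbsent(key(prevPitch, beatInBar), k -> new HashMap<>())
                .merge(nextPitch, 1, Integer::sum);
      }

      // Assumes the state was seen in training; a real model needs smoothing.
      public int sample(int prevPitch, int beatInBar, Random rng) {
          Map<Integer, Integer> dist = counts.get(key(prevPitch, beatInBar));
          int r = rng.nextInt(dist.values().stream().mapToInt(Integer::intValue).sum());
          for (Map.Entry<Integer, Integer> e : dist.entrySet()) {
              r -= e.getValue();
              if (r < 0) return e.getKey();
          }
          throw new IllegalStateException("unreachable");
      }
  }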

November 4, 2016

My goals here are to:

  1. Fix human_readable syllabification of words in training MIDI files
  2. Render human-readable syllabification of new lyrics for new MIDI files
  3. Determine stress of syllables in both training and new lyrics

Note that the specific human_readable syllabification depends on which phonemes are selected for the word (see the sketch below).

I need something that can

A) turn words into human-readable syllables B)

Check out this book: https://books.google.com/books?id=BrcQAwAAQBAJ&pg=PA184&lpg=PA184&dq=java+syllable+lookup&source=bl&ots=BQz-ikPO0d&sig=svLD6NiPnHm8CAmgxgJd4ZPdz6w&hl=en&sa=X&ved=0ahUKEwjE6sb24JDQAhUIi1QKHTkeC-AQ6AEIQjAG#v=onepage&q=java%20syllable%20lookup&f=false
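
One way to get goal 3 that fits the note above about phoneme choice: in the CMU Pronouncing Dictionary format, every vowel phoneme carries a stress digit (0, 1, or 2), so the stress pattern, and with it the syllable count for goals 1 and 2, falls out of whichever pronunciation is selected. A sketch assuming that format:

  import java.util.ArrayList;
  import java.util.List;

  // Hypothetical sketch: read stress digits off the vowel phonemes in a CMU
  // Pronouncing Dictionary entry. Using the first listed pronunciation for a
  // word (rather than picking among variants) is an assumption.
  public class SyllableStress {
      // "HELLO  HH AH0 L OW1" -> [0, 1] (unstressed, primary stress)
      static List<Integer> stresses(String cmudictLine) {
          List<Integer> result = new ArrayList<>();
          String[] tokens = cmudictLine.trim().split("\\s+");
          for (int i = 1; i < tokens.length; i++) { // token 0 is the word itself
              char last = tokens[i].charAt(tokens[i].length() - 1);
              if (Character.isDigit(last))   // only vowel phonemes carry a digit
                  result.add(last - '0');    // 0 = none, 1 = primary, 2 = secondary
          }
          return result;
      }

      public static void main(String[] args) {
          System.out.println(stresses("HELLO  HH AH0 L OW1")); // [0, 1]
      }
  }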

Tuesday, October 25, 2016

After meeting with Dan today, the idea came up that we should take advantage of the 673 class more. Instead of working my tail off to get them what they need, I just formulate the work that I need them to do: a trade-off of sorts. That will ease my burden a bit, not having to do all of that work. I just need to get the logistics of Pop* in place.

Some ideas:

  • I could have them go through the midi files and mark them with which tracks are the bass, drums, and other accompaniment tracks.
  • I could have them annotate the lyric sheets with rhyme scheme, segmentation, etc.
  • I could have them go through the midi files and mark them with genres for each song.

September 13, 2016

Finished annotating melodies in MIDI. Now, to do key inference:

  • Use only tracks with a single key signature, a melody track, a -1 comb track, a non-neg lyr track, and no chords in the melody
  • Extract track with melody line
  • Use only notes that have a volume greater than 0
  • Run it through the key inference pipeline to get accuracies (one approach sketched below)
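
For that last step, one standard approach (whether it matches the actual pipeline is an assumption on my part) is Krumhansl-Schmuckler profile correlation: build a pitch-class histogram from the filtered melody notes, correlate it against all 24 rotations of the major and minor key profiles, and take the best match. The profile values below are the published Krumhansl-Kessler numbers:

  // Hypothetical sketch: Krumhansl-Schmuckler key finding over a pitch-class
  // histogram (histogram[pc] = total count or duration of pitch class pc,
  // volume > 0 notes only, per the filtering above).
  public class KeyInference {
      static final double[] MAJOR = {6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88};
      static final double[] MINOR = {6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                                     2.54, 4.75, 3.98, 2.69, 3.34, 3.17};

      // Returns 0-11 for C..B major, 12-23 for C..B minor.
      static int bestKey(double[] histogram) {
          double best = Double.NEGATIVE_INFINITY;
          int bestKey = -1;
          for (int tonic = 0; tonic < 12; tonic++) {
              for (int mode = 0; mode < 2; mode++) {
                  double r = correlation(histogram, mode == 0 ? MAJOR : MINOR, tonic);
                  if (r > best) { best = r; bestKey = tonic + 12 * mode; }
              }
          }
          return bestKey;
      }

      // Pearson correlation between the histogram, rotated so the candidate
      // tonic comes first, and the key profile.
      static double correlation(double[] h, double[] p, int tonic) {
          double mh = 0, mp = 0;
          for (int i = 0; i < 12; i++) { mh += h[i] / 12; mp += p[i] / 12; }
          double num = 0, dh = 0, dp = 0;
          for (int i = 0; i < 12; i++) {
              double a = h[(i + tonic) % 12] - mh, b = p[i] - mp;
              num += a * b; dh += a * a; dp += b * b;
          }
          return num / Math.sqrt(dh * dp);
      }
  }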

August 9, 2016

I discovered that there are some lyrics in the midi files that aren't lyrics and also don't just show up at the beginning. They're metadata that show up all throughout the (un-noted) intro. So what does this mean? Well, right now it means that I'm likely not pulling out the right melody track for the lyrics on at least a few of the songs. So that suggests I ought to go back and try doing some sort of alignment between lyrics and between melody events. That could be a cool application for my alignment paper.

For the moment I'm going to press forward like nothing happened, though. Finish off the pipeline to infer key signature and compare between methods.

I also need to handle key signature detection better, inasmuch as some songs have multiple key signature events in the MIDI file but don't actually appear to change key. So I need to be smarter about which key signature events I consider.

It appears also that many of the songs have the wrong key signature indicated. This is suggested by the fact that simply guessing the key of C always does decently, better even than more intelligent guessing. I could either go manually fix them, or I could ignore any of them that have the key of C, which is about 60% of the 292 songs, leaving me with just over 100.

July 21, 2016

Found some weird MIDI stuff today. Nifter's “Are You Lonesome Tonight” starts at some completely random time, such that when opened in Finale, the notes don't quite line up right (remember, MIDI doesn't say anything about the sheet music rendering, just the audio rendering). But when opened with MuseScore, the song just starts from the beginning of the first note (it gets it right). I have a feeling that it's making a lot of assumptions to get that far. Not sure what to think about it.

July 18, 2016

Today Chris Tensmeyer taught me a few things that could be handy. I can ssh into any machine in the lab that has ssh enabled and run jobs on it. screen is a program that lets me essentially preserve state and log back into that state from wherever I may be. The IP address for the lab is 192.168.29.*, with the * being the number specific to the machine. The machine across from me that I've been using has an IP address of 192.168.29.69. ifconfig (or ipconfig on Windows) tells the IP address. On a Mac, ssh is enabled via System Preferences > Sharing > Remote Login.
