[WIP] FMStudio

I just finished uploading the initial beta version of my latest project, FMStudio, to GitHub.

What is FMStudio you ask?

FMStudio was inspired by @jpfli’s awesome FMSynth library. It lets you customize instruments with a nice GUI and create entire songs by breaking them up into reusable patterns. Currently it can export FMSynth patches as C Header code usable with FMSynth on HW, and can export songs as RAW audio. Later I will be working on exporting songs in a C Header format that is similar to SimpleTune but slightly more complex, which will allow including songs directly in your code using minimal space. The beauty is that if you want to create more complex pieces (like having more than one note playing at a time within a single pattern, or if your game is too resource-intensive to generate the music on the fly), you can always just export the songs as RAW audio (8000 Hz, 8-bit unsigned PCM) to be streamed from SD.

The interface should be fairly easy to learn: you simply move the cursor around to place notes/patterns (a semi-transparent copy shows where you’ll be placing if the placement is valid). You can adjust the note duration, duration snapping, grid snapping, and all sorts of other settings. FMStudio also supports adjusting each note’s velocity (between 0 and 127), which affects the note’s overall volume (you can create some neat effects using this). There’s also a virtual piano keyboard that you can click on to hear individual notes played with the selected instrument.

Patterns are not tied to specific instruments either, allowing you to play the same melody across different instruments for added effect.

There is also an “Overlapping Notes” section where you can tell it to prevent overlapping notes (it won’t allow you to place them), highlight any overlapping sections red, or allow overlapping notes (you can place them and it won’t highlight them).

Projects are stored in FMX files (plain-text JSON), so that later, when exporting to C Header, all songs will share the same set of instruments instead of each song carrying its own copy (which would duplicate instrument data).

Curious to see what others are able to make with this (both instruments and songs). I’ve included a mario.fmx project which has the first few measures from the overworld theme to demonstrate some features. Also there’s an experiments.fmx which contains my current experimental songs/instruments/patterns (I particularly like my Heavy Bass instrument I made).

Github
Binaries: v0.2.1

  • Windows (Simply extract the zip file anywhere you’d like).
  • Linux (Simply extract the tarball and run FMStudio).
  • MacOS (Simply open the DMG and drag the AppBundle into the Applications or run directly from the DMG).

Trying to create a Mac binary, but currently clang is having issues with the FixedPoint library, so hopefully I can sort that out at some point. Linux binaries would also be nice, but Linux is notorious for not being very binary-distribution friendly.

7 Likes

Nice! I will try it soon.

1 Like

A marvelous creation! I’ve been wishing there was a tool like this to make songs with FM-based instruments.

After lots of trouble with the FixedPoint library not compiling on Mac, I tracked it down: clang didn’t like a static constexpr Fixed being declared inside the Fixed class, because the Fixed type was still incomplete at the point of declaration (GCC and MSVC were both fine with it since it was a static object).

Now there is an official Mac binary on the release package on github.

1 Like

Tested it. Works well already. Seems pretty intuitive also. Here is my quick test melody. Might be suitable for some horror game :wink:

Some comments:

  • I liked the UI overall.
  • I would like to be able to input a melody using the PC keyboard instead of the mouse.
  • The bass line in my song is a little too quiet. How do I change the channel volume?
  • How do I load instruments other than “BASS”?
  • It is a bit confusing that the snap value and the note duration value are expressed in different units.

2 Likes

That’s some good feedback.

I planned on adding keyboard input (and possibly MIDI input) at some point. Theoretically it shouldn’t be too hard to implement, and I already have some ideas on how I might do it.

Unfortunately, even if I added a channel volume control it would only be able to decrease the volume. However, if you edit the instrument you can increase its volume by adjusting its volume knob (most instruments are set to 80%). Then, for notes that are a little too loud, you can drag their velocity down a bit (the vertical bars beneath the notes in the pattern editor).

Not sure which part is confusing you here: being able to edit instruments, adding new instruments, or selecting the instrument to add. Here are some screenshots.

From the main screen click the “…” button next to the instrument to pull up the instrument editor:
image
Now you can click on “New Instrument”


Then you type a name for the instrument and select the desired starting instrument from the drop down:

I hadn’t really thought about that, but I should translate the note duration from its integer representation (which is in steps of 1/128th notes, with 0 = 1/128) to the fractional value (easy enough: take duration+1 over 128, then reduce both by their GCF).
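That conversion can be sketched like this (a hypothetical helper, assuming C++17 for `std::gcd`; `durationLabel` is an illustrative name, not actual FMStudio code):

```cpp
#include <numeric>  // std::gcd (C++17)
#include <string>

// Convert the internal duration value (steps of 1/128th notes, 0 = 1/128)
// to a reduced fraction for display, e.g. 31 -> "1/4".
std::string durationLabel(int duration) {
    int num = duration + 1;          // 0-based steps -> count of 1/128th notes
    int den = 128;
    int g = std::gcd(num, den);      // reduce by greatest common factor
    return std::to_string(num / g) + "/" + std::to_string(den / g);
}
```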

I’ll be posting a tutorial video here shortly, but one other feature worth mentioning: on the instrument editor, right below the “New Instrument” button, is the “C Header Data” button. It pops up a window showing the instrument as a C-style structure (for use with FMSynth::Patch) that can be used with the current version of FMSynth (although I think the latest version removed the instrument name field, so it might need to be commented out). Also, if you overwrite that text and click “Ok”, it will actually update the instrument with the entered C Header Data.

Finally, if you create a custom instrument that you would like to use in other projects or share with others, you can save that C Header Data to a .h file. If you put that .h file inside the “instruments” folder in FMStudio’s root folder and restart FMStudio, that instrument will then be available in the drop-down list when adding new instruments.

I wanted to solidify the interface before doing a tutorial video, but I knew it was really close to being production-ready, so I appreciate any feedback.

2 Likes

Ah, I did not realize that I need to create “New” instruments for each song. Could there be ready made instruments which you could use without going to the instrument editor window? Now there is only “BASS” in the selection list by default.

1 Like

So it’s not necessarily creating new instruments for each song, but rather instruments for each project (as a project can contain multiple songs, all sharing the same instruments). Originally I set it up to work on only one song at a time, but I realized that a single game would have multiple songs, and they should share instruments instead of duplicating them.

I could probably just add all instruments in the “instruments” folder by default and let users delete the ones they don’t want and create new ones. That would give a better starting point for jumping in and playing around with making music without having to mess with the instruments.

2 Likes

Is note release supported? Instrument’s release time doesn’t seem to have any effect – sound always stops immediately.


You are right. There was a name field in the FMPatch class of the original FMSynth – the one used in the synthesizer application with its own instrument editor and MIDI support.

Later when I turned FMSynth into a libAudio extension, I removed the name field. It was there really just for the synth’s instrument editor and UI. In code you can use variable names to identify instruments, as your C Header export already does.

So you could remove the “.name=…” line from the exported instruments, but you need it for the .h files inside the “instruments” folder. Would it be a good idea if FMStudio instead extracted the instrument’s name from its variable name by removing the “patch_” prefix? Or the instrument’s name could be in a comment line above the patch declaration:

// FM Patch: BASS
FMSynth::Patch patch_BASS =
{
  ...
};
1 Like

I think I’ll go with the comment approach, but comment out the current .name= line, since my CHeaderParser currently doesn’t know about comments. The tricky part with CHeaderParser was not designing a robust parser that can handle all the various ways a structure can be statically defined, but rather using a format that is valid C/C++ and also easy to parse.

So something like this:

#pragma once

FMSynth::Patch patch_PIANO =
{
  //.name="PIANO",
  .algorithm=3, .volume=80, .feedback=50, .glide=0, .attack=0, .decay=75, .sustain=100, .release=60,
  .lfo={.speed=0, .attack=0, .pmd=0},
  .op=
  {
    {.level=71, .pitch={.fixed=false, .coarse=1, .fine=0}, .detune=50, .attack=0, .decay=70, .sustain=0, .loop=false},
    {.level=23, .pitch={.fixed=false, .coarse=1, .fine=0}, .detune=57, .attack=0, .decay=70, .sustain=0, .loop=false},
    {.level=20, .pitch={.fixed=false, .coarse=5, .fine=0}, .detune=51, .attack=0, .decay=70, .sustain=0, .loop=false},
    {.level=40, .pitch={.fixed=false, .coarse=1, .fine=0}, .detune=50, .attack=0, .decay=50, .sustain=0, .loop=false}
  }
};

Checking against FMStudio, commenting out the line doesn’t prevent it from reading the value, but it does remove it from the compiled C/C++ code, which is a great, cheap, and easy solution.
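If the parser does eventually learn the proposed name comment, extracting it could be as simple as this sketch (`patchNameFromComment` is a hypothetical helper, not part of the actual CHeaderParser):

```cpp
#include <string>

// Pull the instrument name out of a "// FM Patch: NAME" comment line,
// returning an empty string when the line is not such a comment.
std::string patchNameFromComment(const std::string& line) {
    const std::string prefix = "// FM Patch: ";
    if (line.compare(0, prefix.size(), prefix) == 0)
        return line.substr(prefix.size());
    return {};
}
```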

I was noticing that as well and couldn’t figure out what I was doing wrong. Basically, I call voice.noteOn(patch, note, velocity) when a note is played, track how many samples the note should be held for based on its duration, and then call voice.noteOff(). Then I continue sampling the voice until either a new note is requested to play over it or voice.released() returns true (which seems to happen immediately after voice.noteOff() is called). However, maybe there’s something else I’m doing wrong.

Found the culprit

In FMSynth/Voice.h line 89

inline bool released() const { return _master_env_gen.stage() >= EnvelopeGenerator::Stage::Release; }

Changed it to

inline bool released() const { return _master_env_gen.stage() > EnvelopeGenerator::Stage::Release; }

Now the notes release properly. Need to replay my songs and see how different they sound.

The released() function is supposed to return true immediately after the note is released – that >= is not an error.

What you could do is add a new function called ‘finished()’ or something. But I’m not sure there is really a need to stop sampling the voice. When the voice is finished, FMSynth changes its algorithm (callback function) to _null_algorithm, which just returns 0. Like this (Voice.h line 324):

if(self._master_env_gen.stage() >= EnvelopeGenerator::Stage::Idle) {
    self._cur_algo = _null_algorithm;
}

Good to know. I couldn’t find anything that actually called that function, and since I’m tracking how many samples a note plays for, (samples == 0) tells me the note is off; but I needed a way to keep sampling it until it has fully released (unless another note jumps in). Currently, if no voices are playing, I fill the buffer with 128, which means a fully finished voice would set it to 0 and mix the rest of the voices with 0. Would it be better to mix with 0 (and thus just keep calling voice.update()), or to add a finished() function to FMSynth::Voice so I can properly tell when all notes have finished playing? (That might actually be needed at the end of the song so everything properly fades out instead of coming to a hard stop.)

EDIT: I think I’ll go with the finished() function, and then later add options to the export function, including a checkbox for whether to fade out at the end or hard stop (hard stop might be better for looping tracks).
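For reference, such a finished() check could mirror the existing released() one but compare against the Idle stage instead. A standalone sketch with stand-in types (the real enum is EnvelopeGenerator::Stage inside FMSynth, and the stage ordering here is assumed from the snippets above):

```cpp
// Stand-in for EnvelopeGenerator::Stage; Release precedes Idle,
// matching the ">= Stage::Release" and ">= Stage::Idle" checks in Voice.h.
enum class Stage { Attack, Decay, Sustain, Release, Idle };

struct VoiceSketch {
    Stage stage = Stage::Attack;
    // True as soon as noteOff() moves the envelope into its release phase.
    bool released() const { return stage >= Stage::Release; }
    // True only once the release phase has fully played out.
    bool finished() const { return stage >= Stage::Idle; }
};
```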

Ok, sounds good.

I’ve been exploring the program for a bit now, and it already feels quite ready for use. One feature that I’d like to see added (or I didn’t find) is glide from one note to another. There is a setting for this in the instrument editor, but do you have plans to add support in the song editor?

Here are some thoughts and suggestions for improvements:

  • In the piano roll editor, the key that the cursor is at could be highlighted when placing notes, or the rows of C notes marked with a slightly different color. That would make it easier to place notes closer to the right edge.
  • Adjusting note velocity is a bit tricky, because the clickable area is so small. Maybe the clickable area could be wider and span full height, and the velocity level could jump right where you click/drag.
  • In the instrument editor, I would prefer separate coarse (integer) and fine (fraction) pitch settings, since they change the sound in quite a different way. Not a big deal though, as it’s already possible to adjust just the coarse value with the small up and down arrows or keyboard up/down keys.
  • Also, entering the pitch ratio using the keyboard is pretty laborious. You have to type the numbers one by one and every time delete the next number before entering a new one.

Thank you for creating this remarkable tool.

2 Likes

That’s a good idea. I think I’ll do both, highlight the C notes AND highlight the current note on the virtual keyboard.

Yes, I’m still struggling with making this one easier. The biggest difficulty is making it capable of handling overlapping notes (for more complex songs). I think I might keep the handles visually the same size, change them from a box to a circle, and if multiple are within the minimum distance to the cursor, just pick the closest one. Either way, I’m still playing around with the input for this one. (I did fix the tooltip showing the current value so it’s not so dark; the coloring for the background was being applied to the tooltip as well, which obviously wasn’t desired.)

I agree. Playing with it more, I’m finding it would be best to separate the two controls. That’s easy enough to do and is slated for v0.2.

This is actually a bug (in my code). To keep the knob and value in sync, when one changes it changes the other. The only problem is I was expecting a valueChanged signal only when the user finished entering a value, but it turns out it is sent with each digit the user enters. This triggers a setValue on the knob, which in turn triggers another valueChanged (the loop closes when setValue is called with the control’s current value, so the value isn’t changed). It’s within that call to setValue that the trailing zeros get added, which inhibits the keyboard input. There is also an “editingFinished” signal, so I’ll probably need to hook into that as well.

So far the only major issue I’m having is trying to recreate instrument sounds that I don’t have a waveform sample of because they’re only in my head. They’re simple ones (like a whistle sound, or something like that). Otherwise, I really need to figure out the best way to record notes from the keyboard (and a MIDI keyboard, now that I’ve got my MIDI-USB cable). Beyond that, I’m continuing to tune the interface while experimenting with instruments and songs.

EDIT:

Actually I did, but I didn’t realize it wasn’t working, since the didgeridoo is the only instrument that currently uses it. The issue was that glide is only triggered if the note has not been released. This works in your original SimpleTuneFM because it only calls voice.noteOff() if it encounters a rest note. Mine, however, encodes the notes with their offset, so there is no rest note, just gaps between notes. To remedy this, I’m now checking if there is another note immediately following this one. On HW, though, I think I’ll have it encode rest notes, since I believe that would be more optimized, as it reduces the number of checks to make for each byte in the audio buffer.
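The “note immediately following” check could be sketched like this, assuming notes are kept sorted by offset (`Note` and `hasImmediateSuccessor` are illustrative names, not the actual FMStudio code):

```cpp
#include <cstddef>
#include <vector>

struct Note { int offset; int duration; };  // offsets in 1/128th-note steps

// With notes sorted by offset, only call noteOff() when no note starts
// exactly where this one ends; that keeps glide engaged across adjacent notes.
bool hasImmediateSuccessor(const std::vector<Note>& notes, std::size_t i) {
    int end = notes[i].offset + notes[i].duration;
    for (std::size_t j = i + 1; j < notes.size(); ++j) {
        if (notes[j].offset == end) return true;   // back-to-back note: glide
        if (notes[j].offset > end) break;          // sorted, so a gap follows
    }
    return false;
}
```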

You could also highlight the closest one or change the mouse cursor to a hand as with the notes, so you know when the cursor is at the right spot.

The theremin also uses glide! It’s probably a better instrument for testing.

About the glide effect in FMSynth: it is implemented in such a way that a succeeding note-on does not retrigger the sound (no new attack-decay); the pitch just slides to the new value. Some synths do retrigger the sound even with glide, but then smooth theremin-like slides wouldn’t have been possible. With instruments that use glide, you want to be able to start a new sound without glide, and that’s when note-off comes into play. It’s a simple way to control when to use glide. Otherwise there would need to be an additional parameter for the noteOn() method or some other way to toggle glide.

How are you planning to handle overlapping notes with glide? The easiest solution would be to play the pattern monophonic when the instrument has glide enabled. So you would just glide from one note to another, and call noteOff() only when there are no more notes overlapping or immediately following.

Yes I think I will have it change the cursor to the hand here too as that will indeed make it easier to tell when you’ve actually grabbed the handle.

This one is interesting in how I’m currently handling it. To support things like chord progressions WITH glide, all notes are sorted first by offset, then by midikey (since two notes can’t have the same offset and midikey). From there, when a note is played, it checks the first voice to see if it is available (not necessarily released, but its current note has reached its duration); if so, it sends the next note to that voice, and if not, it checks the next voice in the stack for that channel. If no available voice is found, it creates a new one (so a chord would create 3 voices). What ends up happening for something like a chord progression is that the first, second, and third notes of the chord glide over to the corresponding first, second, and third notes of the next chord.
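The allocation rule described above can be sketched roughly like this (`SimpleVoice` and `allocateVoice` are illustrative stand-ins for the actual implementation):

```cpp
#include <cstddef>
#include <vector>

// A voice is "available" once its current note has played out its duration,
// even if the release tail is still sounding.
struct SimpleVoice {
    int remainingSamples = 0;  // samples left of the current note
    bool available() const { return remainingSamples <= 0; }
};

// Reuse the first available voice in the channel's stack; otherwise grow the
// stack, so a three-note chord ends up with three voices.
int allocateVoice(std::vector<SimpleVoice>& voices) {
    for (std::size_t i = 0; i < voices.size(); ++i)
        if (voices[i].available()) return static_cast<int>(i);
    voices.push_back(SimpleVoice{});
    return static_cast<int>(voices.size()) - 1;
}
```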

So far it has been interesting trying to do this in a way that can support note-duration-velocity values played directly on HW while also allowing more complex pieces that just get exported as RAW audio. (Later I’ll also set up a dialog for adjusting the output, so you can change the sample rate and maybe other settings as well.) If I can port FMSynth to use real floating-point values, then I might be able to add support for exporting better-quality versions of the songs, but we’ll see.

Just uploaded the latest release v0.2.0 (Sounds Great)

Improvements:

  • Improvements to the interface and the instrument editor.
  • Added better waveform analysis by adding a spectrum graph.
  • Waveform and spectrum previews now update based on note played on the virtual keyboard.
  • Ability to load RAW audio sample to compare with note’s waveform and spectrum.
  • Added a likeness rating when comparing current instrument with loaded sample.
  • Separated the pitch’s coarse and fine controls for each operator.
  • When starting a new project it automatically imports all global instruments.
  • Fixed note duration display to now show in increments of 1/128 matching the snap value displays.
  • Changed piano roll to give each semitone its own grid space instead of flat/sharp notes displaying halfway between spaces.
  • When placing notes the virtual keyboard now highlights the key corresponding to where the cursor is at.
  • When adjusting the velocity of individual notes the cursor now changes to a hand icon when it is close enough to the handle.

Bug Fixes:

  • Notes now properly glide from one to the next on instruments with glide.
  • Fixed note release time so they properly release based on instrument settings.

Links in the first post.

3 Likes

Thank you!