Got this when I tried to add an audio file to a project:

NotSupportedError: Failed to construct 'OfflineAudioContext': The number of frames provided (0) is less than the minimum bound (1).

Tested with a PCM 16-bit WAV file.

Also noticed that the wav file is added to the project only after restarting the IDE.


Both were fixed after restarting the IDE :wink: SFX works now in the game!


I must say that I really love the integrated graphics editor (I do not use Aseprite at all). It is so quick and immediate to test different graphics and do color experiments :slight_smile:

The only downside is that building is a bit slow on my rather low-spec laptop, but that is what I have to live with.
Edit: maybe this will improve the situation once implemented: https://github.com/felipemanga/FemtoIDE/issues/13

Error: C:\bin\FemtoIDE\IDE-windows\projects\ZombieFlock\Main.java: Event: Parse error on line: 77, column: 9

This could be clearer, as the bug is actually in “Event.java”. For example:
Main.java: Event.java:
Main.java => Event.java:


What is the sort order in the project file list?


Did you manage to identify a pattern that causes this bug? Is it always when you add a new audio file?

That issue won’t help on its own, as it’s for C++ projects. After that’s done, Java will need to be modified to produce multiple C++ files instead of just one. The Java compiler itself is pretty fast; the linker takes up most of the time. I feel your pain, though: my big test project takes >2 minutes to compile on my i7.

Indeed, that is confusing. Ideally it wouldn’t mention Main.java at all.

Heh, it’s supposed to be alphabetic.



At the moment sounds and images end up generating a class with a name that matches the filename.

Is anything in particular preventing having Sound and Image classes?

(I’m vaguely aware that images might have animations, but that’s the only thing I’m aware of and I can think of a solution to that problem.)

If it were possible to have general Sound and Image classes then it would be possible to put sounds and images in an array, which would make it easier to do effects like selecting a random sound.


They have common base classes (femto.sound.Procedural for sounds, femto.Sprite for animated sprites, femto.Image for static images), so you’d use that to put them into arrays.

Edit: clarifying further:

import femto.sound.Mixer;
import femto.sound.Procedural; 
import sounds.Zombie1;
import sounds.Zombie2;
import sounds.Zombie3;

    static final Procedural[] samples = new Procedural[]{
        new Zombie1(),
        new Zombie2(),
        new Zombie3()
    };

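Since the array holds everything as the common base type, picking a random sound becomes trivial. A standalone sketch in plain Java (with stub classes standing in for femto.sound.Procedural and the generated sounds.* classes, which only exist inside a project):

```java
import java.util.Random;

// Stand-ins so this compiles on its own; in the project these would be
// femto.sound.Procedural and the generated Zombie1/2/3 sound classes.
class ProceduralStub { void play(){} }
class Zombie1Stub extends ProceduralStub {}
class Zombie2Stub extends ProceduralStub {}
class Zombie3Stub extends ProceduralStub {}

class RandomSfx {
    static final ProceduralStub[] samples = {
        new Zombie1Stub(), new Zombie2Stub(), new Zombie3Stub()
    };
    static final Random rng = new Random();

    // Pick one of the samples at random and play it.
    static ProceduralStub playRandom(){
        ProceduralStub s = samples[ rng.nextInt( samples.length ) ];
        s.play();
        return s;
    }
}
```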

I have a few proposals for the API which I originally was going to turn into GitHub issues but I wasn’t sure whether it was worth making GitHub issues or not.

Firstly, looking at the way Procedural is set up, it doesn’t really make sense to be able to instantiate it, so should the constructors be protected instead of public?

Secondly, it also doesn’t make much sense to instantiate Mixer, and all of Mixer's methods and fields are static, so should Mixer be marked final and its constructor made private?
(I was originally going to say make the class static, then I discovered that Java doesn’t actually allow top-level static classes, which is pretty daft.)

Thirdly, technically Mixer only actually uses Procedural's update and reset methods, so perhaps it would make more sense to introduce an interface (or class) that’s the parent of Procedural, which only has update and reset.

That way the new interface could be used for implementing more varied kinds of sound (e.g. bytebeat music) without the excess baggage of the public channel and t fields and the play function.


It does make sense to instantiate it:

        var bytebeat = new Procedural(){
            ubyte update(){
                return (t>>5) | (t>>4) | t++;
            }
        };

That is a possibility. I don’t really like the fact that Mixer is a static class, as it prevents people from extending it. Need a mixer with per-channel volume control? Can’t be done.
The other option would be to make it a singleton where you can replace the instance with your own and have the static methods call the instance ones.
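A sketch of that replaceable-singleton idea (all names are hypothetical, not femto's actual API): the static methods keep existing call sites working while forwarding to an instance that users can swap out, e.g. for a mixer with per-channel volume.

```java
// Hypothetical sketch: static entry points delegate to a replaceable instance.
class MixerSketch {
    private static MixerSketch instance = new MixerSketch();

    // Swap in a custom mixer implementation.
    static void setInstance( MixerSketch m ){ instance = m; }

    // Static method keeps existing call sites unchanged.
    static void setChannel( int channel, Object sound ){
        instance.doSetChannel( channel, sound );
    }

    // Subclasses override this to change the mixing behaviour.
    protected void doSetChannel( int channel, Object sound ){
        // default mixing behaviour would go here
    }
}
```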

While Mixer only really needs update, all sounds (bytebeat, sampled, etc) would need everything else that’s in Procedural (no point in having a sound without a channel to play it).
Do we gain anything in using an interface, other than the obligation of having to duplicate channel, t and play in all the derivatives?

One thing that does bother me: I should’ve named Procedural Sound, instead. Then again, I don’t like how femto.sound.Sound sounds.


Ah, I had forgotten anonymous classes are a thing.
Although, you don’t actually need the constructor to be public to do that.
Not with standard Java at least.
I.e. the code is still valid if the constructor is protected.

Also I wasn’t aware Java finally added a var feature.
It’s slowly catching up with C#. Slowly. :P

That’s probably the better option.
Either that or make people manage their own instances.

Procedural doesn’t technically need to even know about Mixer

I somewhat disagree with this.

It doesn’t seem to me that the sound’s channel is actually part of the sound,
the channel only makes sense in the context of playing the sound via Mixer.

If it weren’t for the fact that Procedural is stateful I’d say it ought to be possible to do:

Procedural sound = new SomeSound();
Mixer.setChannel(0, sound);
Mixer.setChannel(1, sound);

And it is actually possible to do:

Mixer.setChannel(0, new SomeSound());
Mixer.setChannel(1, new SomeSound());

At which point Procedural's channel is just wasting memory.

channel only really exists to make play possible,
and the play function is purely shorthand for Mixer.setChannel(channel, sound),
which means it’s effectively just a convenience function.

I can’t imagine any other implementation that would make sense.
Also play pretty much forces Procedural to depend on Mixer,
which means they end up being strongly coupled.

I’m not even sure the Procedural part is right.
When I hear ‘procedural’ I think of procedural generation (e.g. bytebeat music rather than pre-generated sound data).
I think maybe SequentialSound would have been more on the nose.

The semantics are different.
femto.sound is sound as a concept, grouping together everything relating to sound generation,
femto.sound.Sound is a particular instance of a sound.

A better alternative would probably be femto.audio.Sound, but that means changing more things.


(deleted, didn’t read the very last line :stuck_out_tongue:)


You’re right, and initially using setChannel was the way to make a sound play.
It has two problems, though, stemming from the fact that it’s a lower level API:
1- If you want to play a sound, calling sound.play() is a lot clearer/simpler and conducive to the intent.
2- Defining the channel to be used becomes a burden placed upon whichever code is playing a sound. It is easier to define the channel once on instantiation.

I see setChannel as a low-level API and play as a high-level one. The latter isn’t strictly necessary, but it fits better into the rest of the game code.
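The two levels as described above can be sketched like this (with hypothetical stand-in classes, not femto's real ones): setChannel names the channel explicitly, while play() bakes the channel in at construction time and forwards to it.

```java
// Low-level API: caller names the channel explicitly.
class SketchMixer {
    static final SketchSound[] channels = new SketchSound[4];
    static void setChannel( int channel, SketchSound sound ){
        channels[channel] = sound;
    }
}

// High-level API: the channel is decided once, at instantiation.
class SketchSound {
    final int channel;
    SketchSound( int channel ){ this.channel = channel; }

    // Shorthand: playing no longer burdens the caller with channel choice.
    void play(){ SketchMixer.setChannel( channel, this ); }
}
```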

Since the grouping is only two classes, I’m tempted to just move them into femto.Mixer and femto.Sound.
It’d be convenient when doing import femto.*;

Yesterday I generated the docs and put it here. Had to compile doxygen from source to get it to work since ubuntu serves an ancient version, as usual.


If setChannel had been called playSound then it would be just as clear.
Even more so if Mixer had a parent called SoundPlayer and SoundPlayer was the interface the user interacted with rather than the concrete Mixer class.

The idea that an object has to operate on itself is somewhat of a fallacy.
renderer.draw(entity) is just as meaningful and often more flexible and more desirable than entity.draw(),
so it follows that soundPlayer.play(sound) or soundPlayer.play(sound, channel) ought to be equally fine.

Such an arrangement would allow Mixer to select an unused channel if no channel was specified if such functionality were desirable.
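Such a SoundPlayer might look like this (a hypothetical sketch, not an existing femto class): the caller can name a channel, or omit it and let the player grab the first free one.

```java
// Hypothetical SoundPlayer: explicit-channel and pick-a-free-channel overloads.
class SoundPlayer {
    final Object[] channels = new Object[4];

    // Caller picks the channel.
    void play( Object sound, int channel ){ channels[channel] = sound; }

    // Player picks the first unused channel; returns it, or -1 if all are busy.
    int play( Object sound ){
        for( int i = 0; i < channels.length; ++i ){
            if( channels[i] == null ){ channels[i] = sound; return i; }
        }
        return -1;
    }
}
```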

Also sound.play somewhat violates the single responsibility principle.
Representing a stream of bytes constituting a sound is one responsibility,
being able to play said stream of bytes as sound is a different responsibility.

I think it provides more flexibility to the calling code.

Also it potentially reduces memory usage because the channel number doesn’t have to be stored in the sound.

What is it about setChannel that makes it low level?

If it’s the existence of a ‘channel’ in the first place, then Procedural still has to know what a channel is, so that detail leaks into Procedural anyway (and leaks out of Procedural’s interface by having channel be public).

Strange, the image is missing from the Sine and cosine example and the two examples aren’t nested under the Examples heading.

I’m going to rerun my doxygen-update script; if I get the same problem then it’s probably something that’s changed in the code, but if I don’t then it’s something that’s changed between 1.8.15 and 1.8.16.


I checked (though it took me a while because I didn’t realise the output directory had been changed in the last commit) and locally the image is where it should be and the nesting is working properly.


So either something broke in 1.8.16 or something else has gone wrong somewhere.


I have a request!

Is it possible to draw text right-aligned, via something like a setTextAlignment() method?
Or at least to have a method that calculates the width in pixels of a full string without actually rendering it?


At a guess…

public int measureString( String text ){
	int characterWidth = LDRB(font);
	int stringLength = text.length();
	return (stringLength * characterWidth) + ((stringLength - 1) * charSpacing);
}

Which would have to have access to the screen mode’s font.

setTextAlignment would be somewhat more efforty,
and would involve writing a right-alignment version of putchar.
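The cheaper route to right alignment, once a width measure exists, is just to shift the draw position left by the string’s width before printing. A sketch with placeholder names (measure() here is a fixed-width stub; the real ScreenMode API may differ):

```java
// Right alignment via measurement: no alignment-aware putchar needed.
class AlignSketch {
    // Fixed-width stub: 4 pixels per character (made-up value).
    static int measure( String text ){ return text.length() * 4; }

    // X coordinate so the text ends exactly at rightEdge.
    static int rightAlignedX( int rightEdge, String text ){
        return rightEdge - measure( text );
    }
}
```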


Yeah, I’m using an equivalent of that for now as a rough estimate, with hard-coded values for the maximum character width. But the TIC80 font isn’t fixed-width (the 1, for example, is only 3 pixels wide), so that won’t work great :frowning:

Also, what’s that LDRB() method?


The fact that it “provides more flexibility to the calling code”.
sound.play() has only one dependency (the sound itself, from the point of view of the game’s code) while soundPlayer.play(sound) has two.

I think it’s a Windows vs Linux thing. The path in the md file starts with a “/”, which means the root directory on Linux. When Doxygen can’t find the file, it outputs the path unchanged (which breaks the online version, as it removes the repo folder).
In your case, I guess it manages to find the file and the resulting HTML has a relative path.

There is a textHeight() method, it would make sense to add a textWidth().


LDRB loads a byte from memory, given a raw pointer. It is inside System.memory.
It’s meant for internals and for direct hardware access. Non-standard, here-be-dragons.

I’ll add this to the next release.
In the meantime, put this in IDE/javacompiler/femto/mode/ScreenMode.java:

    public int textWidth( String str ){
        if( font == null ) return 0;
        int total = 0;
        uint w = LDRB( font );
        uint h = LDRB( font+1 );
        char index;
        for( int i=0; (index=str[i]) != 0; ++i ){
            index -= (char) LDRB( font+2 );
            uint extra = h != 8 && h != 16;
            uint hbytes = (h >> 3) + extra;
            pointer bitmap = font + 4 + index * (w * hbytes + 1);
            int numBytes = LDRB(bitmap);
            total += numBytes + charSpacing;
        }
        return total;
    }


Variable-width fonts complicate things somewhat.
You’d need some kind of lookup table that maps characters to character widths.
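Such a table could be as simple as per-glyph widths indexed from the first printable character. A sketch in plain Java (the widths below are made up for illustration, not the actual TIC80 font metrics):

```java
// Sketch: measure a variable-width font via a per-glyph width table.
class VarWidthFont {
    static final char FIRST = '!';          // first glyph in the table
    static final int SPACING = 1;           // pixels between glyphs
    static final int[] WIDTHS = { 1, 3, 5, 4, 5 }; // widths for '!' '"' '#' '$' '%'

    static int glyphWidth( char c ){ return WIDTHS[c - FIRST]; }

    static int textWidth( String text ){
        int total = 0;
        for( int i = 0; i < text.length(); ++i )
            total += glyphWidth( text.charAt(i) ) + SPACING;
        // no spacing needed after the final glyph
        return total == 0 ? 0 : total - SPACING;
    }
}
```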

From the outside, but not from the inside.

Besides which, if Mixer ended up being a singleton so the implementation could be swapped then it wouldn’t be much of a hassle.
Though having a soundPlayer would make that less important anyway, since the end user would then have full control over whether they used the singleton Mixer or some other implementation of SoundPlayer.

With the sound.play() version the play() function is bound to the singleton,
forcing the end user to either use the singleton or ignore the play() function entirely.

I was expecting it would resolve as a relative path.

The fact it doesn’t implies that Linux differentiates between absolute and relative paths by checking if the first character is a /, which is simultaneously interesting and weird.

That only explains the image problem though,
that doesn’t explain why the nested examples would break.

I get the L, D and B, but I’m still not sure why the R is in there.

Pointer.readByte(pointer) would be a bit kinder,
but then people might try to use it in their code. :P

With generics it would be possible to have Pointer<T>,
but generics don’t apply to the primitive types,
so Pointer<T> would be inefficient.

Just to be awkward, I think this would be a tad cleaner:

char index = str[0];
for( int i = 0; index != '\0'; ++i, index = str[i] ){

But mainly because I’m horribly averse to putting side effects in expressions.
(Learning Haskell has ruined me for life. The functional purity has stained my soul. :P)