Dynamic memory with Pokitto

I wonder about dynamic memory with Pokitto. Where is the dynamic memory saved? I read in the Pokitto stats [quote=“jonne, post:1, topic:514”]
256kB program memory – 36kB RAM
Is it stored in RAM, program memory or something else?
Btw: are there any disadvantages to using such memory? I heard it can be faster because it doesn't have to be loaded into RAM every time. Is that right?


Welcome back @Zuzu36, long time no see!

Dynamic memory works the same way on all devices, not just Pokitto.

There are 3 types of RAM memory allocation:

  1. Static RAM allocation (before program starts, remains reserved the entire duration of the program)
  2. Automatic RAM allocation (runtime memory allocation for variables stored in the stack part of RAM, for example local variables inside a function); these are removed from memory when the function ends
  3. Dynamic memory allocation, where memory is reserved from the heap

Now, the thing is the heap grows upward from the bottom of RAM, whereas the stack grows downward:

The problem with using dynamic allocation of memory in embedded systems is that there is no automatic system to “clean up” the dynamically allocated memory. Plus, you might run into situations where heap and stack both grow to the point where they collide and you get no warning.

The mbed library used in Pokitto does support dynamic memory allocation, and many people already use, for example, vectors, which are in fact dynamically allocated arrays. So you can use dynamic memory.

Just be aware of the possibility that you might cause stack and heap collisions and make sure you delete the dynamically allocated objects when you no longer need them.


Thanks for the quick answer! Yeah, I was away for a long time, I improved my C++ skills a bit :wink:
Well, I'm not getting all of this, but I basically understand how it works. I can see how I could use this for my PokittoDash game (still working on it :slight_smile:)


@pharap will be here shortly to explain heap/stack in incredible detail :wink:


4 hours later… :P

In my defence, I was spending some of that time writing a comment in the defence of templates as a force of good.

Ah, dynamic allocation. You’ve just opened a massive can of worms.

@jonne’s explanation is a good concise explanation, so now it’s time to get bogged down in the details.

To answer your original question:

  • Code and constants live in program memory
  • Objects, variables, the stack and the heap all live in RAM

(ARM doesn’t appear to have a progmem directive like AVR so I have no idea how to influence the compiler’s choice.)

Personally I think that to really understand the heap, you must also understand the alternative - the stack.

What is a stack?

A stack is a kind of data structure.
Specifically it’s like a list of objects or values, but you can only add or remove things by adding or removing from one particular end of the stack (that end is typically called the ‘top’).
For this reason it’s often called a LIFO data structure; last in, first out - the last object added to the stack is the first object removed from it.

The reason it’s called a stack (and the reason the end you add/remove to/from is called the ‘top’) is because it’s often described in analogy to a stack of plates/dishes:
Imagine you’ve got 5 china plates.
You put one down, you put another on top of that and then you put another on top of that.
To get the one at the bottom, you then have to remove the two that you’ve just placed on top of it, otherwise you risk breaking them all.

What is the stack?

Now, the stack (more correctly, ‘the call stack’) is a special kind of stack. It’s a data structure in RAM that practically all modern processors use to keep track of function calls and variables.

When the processor calls a function:

  • It saves all the registers it was using to the stack
  • It looks at the instruction after the call instruction and copies the address of that instruction (henceforth the ‘return address’) to the top of the stack
  • It pushes any arguments the function needs onto the stack
  • Then it jumps to the code for the function

(The exact behaviour differs between calling convention and processor, but those are the 4 most common operations in the general order in which they happen.)

When the processor returns from a function

  • It dumps all the arguments
  • It takes the ‘return address’ and jumps to that address
  • It takes the remaining values off the stack and copies them back into the registers
  • Execution continues as normal

(Again, may vary depending on calling convention and processor)

In addition to this, variables may be stored on the stack if there aren’t enough registers to go around or if the variables are particularly large, e.g. an array.

The important thing to take from this is that when the function returns, dumping the arguments/variables etc is often as simple as changing the value of a register called the ‘stack pointer’.

Certain kinds of objects have to be manually destroyed (e.g. objects representing file handles, objects that hold a pointer to dynamically allocated memory), but most objects are just built from plain old numbers, so they can just be ignored.

Also don’t worry about the idea that the stack grows down.
Whether the stack grows up or down is arbitrary, it’s how it interacts with other memory constructs that’s important.

Problems with the stack

Naturally there is only so much memory available for the stack.
Call too many functions or use too many local variables and you can end up with a stack overflow error.
This is why recursive functions should be avoided.
Generally though, this isn't an issue for most programs.

It’s also possible to cause a stack underflow if you do some really dangerous things, but that’s incredibly uncommon and you’d probably have to be trying to do it on purpose.

What is the heap?

Confusingly ‘the heap’ (or possibly ‘the data heap’) is not actually ‘a heap’.
The data structure referred to as ‘a heap’ is largely irrelevant to the concept of ‘the heap’.

The heap is a block of memory specifically set aside for dynamic memory allocation.
How that allocation is done actually varies depending on the platform.
There are lots of different algorithms available for doing dynamic allocation, each has its advantages and its disadvantages and nearly all of them are complicated.

Think of it like valet parking (this is either going to be a stroke of genius or sound completely insane).
You hand your keys over to the valet (the allocator) and they go and park your car (allocate some memory) somewhere in the parking lot (the heap).
Later on you want your car back (you want to free the memory) so you ask the valet (the allocator) to fetch your car (free the memory), so they find it and bring it to you (they find it and destroy it).
You don’t care where it was parked or how they decided to park it, you just used their services and they did their job.

Note that the valet won’t un-park your car (free the memory) unless specifically asked. This is exactly how memory leaks happen.


It would be great if that was the end of it, but nope.
The heap has its problems.

Problem number 1: heap fragmentation

The best way to explain this is with an image, so I’m stealing one from Bob Nystrom’s Game Programming Patterns.

Essentially the heap becomes ‘fragmented’ as it’s used.
Doing a lot of allocations can lead to there being small ‘holes’ in the heap where objects used to be.
Larger objects cannot fit in these holes and the objects in the heap are not allowed to be moved because they are referred to via pointers to their exact locations.
There are various techniques that can help mitigate this, either by using memory pooling (as in the article the image was taken from) or by using an allocator whose algorithm is designed to mitigate the problem.
When fragmentation does happen there’s usually not much you can do, so there’s usually no point trying to check for it.
It’s a problem you have to pre-empt rather than solve after-the-fact.

To reuse the valet parking analogy, your valet attempts to do this:

Problem number 2: when worlds collide

This is exactly what @jonne was talking about.
Typically the heap is situated at the opposite end of memory to the stack and they grow towards each other.
The more you put on the stack, the less memory there is available for the heap.
The more you put on the heap, the less memory there is available for the stack.
Use too much of either and worlds collide.
Sometimes this can be detected as a stack overflow or out of memory error, but sometimes it can’t be detected and the program will just start misbehaving.
Again, you can’t recover from this, the best course of action is to kill the program before it does any damage.

Here’s a delightful graphic I ~~stole~~ borrowed from adafruit:

So what do I do?

The rules of thumb are:

  • If it has a short lifespan, prefer to make it a local variable (i.e. lives in registers/on the stack)
  • If it needs a long lifespan, make it a global (or the member variable of a global object)
  • Use dynamic memory only when:
    • You understand what you’re doing
    • The object has a long lifespan but doesn’t need to exist for the entire lifetime of the program

If you choose to use dynamic allocation, new and delete should not be your first port of call (and malloc and free should almost never be an option).

Your first port of call should be the C++ standard library.
If you’re going to need lots of objects, opt for something like a std::vector or a std::deque.

If you only need the one object and you have access to a C++11 compiler, you’re probably best off opting for a std::shared_ptr.
A std::shared_ptr will automatically handle dynamic memory allocation and will automatically free the memory when there are no more std::shared_ptrs pointing to that block of memory.
That does come with its own caveats which I will cover another day.

Do not try to write your own classes that handle memory allocation until you understand the rule of 0/3/5, or at least ask for help from someone who does. (I’d gladly volunteer.)

Lastly, if you are using dynamic allocation, C++11 is a better option because it introduced a thing called ‘move semantics’ which reduces the number of allocations done by avoiding unnecessary copying.



But just const though? const globals, const locals?

const on its own only means ‘you cannot write to this object’,
it doesn’t actually mean ‘this object will never change’,
and it’s perfectly legal to cast away the const.

(Bjarne himself wishes that const was called readonly.)

const int supposedlyImmutableGlobalVariable = 5;

int main(void)
{
	int & oops = const_cast<int &>(supposedlyImmutableGlobalVariable);
	oops = 10;
}

It’s possible if the compiler does a reaching analysis to look for all the const_cast cases,
but even then it seems like const on its own is a bit of a bad fit.

Yup, just const… though I only checked global arrays, which would be expensive to copy to RAM. For a local int the compiler would have to allocate space on the stack to get a pointer, so it should work as you expect. I don’t know what the spec says about it, but it makes sense that, as good programming practice, if you declare a value as const then it will never change. To me, casting away const only makes sense for const references to things that aren’t immutable.

Fun hack - on the Pokitto it is easy to verify if a pointer leads to an immutable value:
bool isImmutable( const void *ptr ){ return uint32_t(ptr) < 0x10000000; }


I don’t have the spec around, but according to cppreference modifying a const object through hackery is classed as undefined behaviour,
so it’s plausible that the compiler can use that as a get-out clause.

I’d still like to run some tests to be sure if I get chance.
I’m assuming const and static linkage will be a progmem case, but the behaviour of non-static consts could go either way (e.g. they could be used as an immediate operand in an instruction).

Good to know.


Thanks @Pharap!! We should maybe copy-paste your excellent explanation to a short permanent wiki article.

Everyone struggles with understanding heap & stack at some point.


Are you sure the valet parking analogy isn’t too cryptic though?
I might be able to find a better analogy.
(Then again, it’s probably better than nothing.)

The best way to learn is probably to write an M6502 emulator,
and then write your own malloc implementation for a modern computer. :P

I’m always wary of const_cast haha

Also it’s best to use constexpr whenever possible - I’ve made some basic classes constexpr-compatible so the compiler can optimise out more things.

Talking about the stack, I’ve seen a special pattern related to it: the stack allocator. Basically you reserve a quantity of memory directly in the target scope (which can be the stack, static or dynamic memory), which constrains the fragmentation to a smaller part of memory, and let it all vanish when the scope gets cleaned up. It’s especially good when dealing with a temporary, large number of allocations/deallocations, such as when generating a map, doing physics computation, simulation, etc., but it requires being careful about memory allocation. It can work by overloading the new and delete operators too (in fact I’ve seen it used in Box2D).


I can honestly say I’ve never written a program that needed one.

They’re also one of the reasons I avoid C-style casting.
C-style casting can cast away const without warning.

That sounds very much like the concept of an object pool.

That’s an approach that not many people take since C++ has a concept of an allocator.
Allocators can be used with the standard container classes and only affect a limited area instead of affecting the whole program.


Oh yeah, the allocator concept is much better than overloading new/delete indeed!

Yeah, Object Pool is very close indeed, especially if your object pool includes all of its working memory in its footprint (e.g. via a private field like ObjectType objects[SIZE], where ObjectType and SIZE are template parameters). The main difference is that the stack allocator is more generic and can allocate objects of different sizes and kinds within the reserved memory. It’s basically a scoped malloc/free haha.

@Pharap That’s an amazing explanation. Where do you find the time to do this :sweat_smile:
I’ll try to explain the problem I was going to solve with dynamic allocation. Maybe you’ll find a better way to do it?

class Foo
{
	void * pointers[5];

	template <class Type> void AddPointer()
	{
		for (int i = 0; i < 5; i++)
			if (pointers[i] == nullptr) //if the pointer is not defined
			{
				pointers[i] = new Type(); //allocate new object
				return;
			}
	}

	template <class Type> Type * GetPointer()
	{
		for (int i = 0; i < 5; i++)
			if (typeid(*pointers[i]) == typeid(Type)) //if the object is the same type
				return ((Type *)pointers[i]);
		return nullptr;
	}

	template <class Type> void DeletePointer()
	{
		for (int i = 0; i < 5; i++)
			if (typeid(*pointers[i]) == typeid(Type)) //if the object is the same type
			{
				delete pointers[i];
				pointers[i] = nullptr;
			}
	}
};

Basically I want to store some pointers of unspecified type in one place.
(I’m sure you’ll find some mistakes in this example code, it’s just a concept :innocent:)

Erasing the type of a variable is usually a bad idea.

Have you tested if typeid actually works?
I was under the impression that the Pokitto disabled runtime type information.

I can think of one possible solution for circumventing this, but it’s dangerous, it requires template voodoo and it adds another limitation.

Before I post, do you have access to C++11? (I.e. EmBitz)
I’m assuming yes since you’ve used nullptr, but I like to be sure.

By the way, new Type() will fail if there isn’t a suitable constructor.

I’m not sure how to check which C++ version I use, but I’m programming with Code::Blocks and making my binaries with mbed online, so it should be C++11.

I have absolutely no clue if this works :stuck_out_tongue:, do you have a better idea how to get type info?

Mbed online doesn’t support C++11, it only supports C++03.
EmBitz supports C++11.

Code::Blocks will do C++11 too I think.

I do have a solution, but it’s full of voodoo, which is why I ask “are you sure you really need this?”.
I find that in a lot of cases reaching for typeid is a sign of an ‘X-Y Problem’: you’re trying to make your chosen solution work instead of stepping back and asking “actually, is there a better way to solve the underlying problem?”.
XY Problems are common in programming.

Here’s my voodoo solution. I haven’t tested it so it may not work out of the box, but if it’s got any issues then I can almost certainly solve them:

class VoodooCell
{
public:
	VoodooCell(std::size_t index)
		: index(index)
	{
	}

	std::size_t getIndex(void) const
	{
		return this->index;
	}

private:
	std::size_t index;
};

template< typename Type >
class MagicVoodooCell : public VoodooCell
{
public:
	using VoodooCell::VoodooCell;
};

template< std::size_t CapacityValue >
class VoodooPrison
{
public:
	static const std::size_t Capacity = CapacityValue;

	template< typename T >
	MagicVoodooCell<T> allocateCell(void)
	{
		for(std::size_t i = 0; i < Capacity; ++i)
			if(array[i] == nullptr)
			{
				array[i] = new T();
				return MagicVoodooCell<T>(i);
			}
		return MagicVoodooCell<T>(Capacity); // out-of-range index signals failure
	}

	template< typename T >
	T * retrievePointer(MagicVoodooCell<T> cell)
	{
		if(cell.getIndex() >= Capacity)
			return nullptr;
		return reinterpret_cast<T *>(array[cell.getIndex()]);
	}

	template< typename T >
	void deallocateCell(MagicVoodooCell<T> cell)
	{
		if(cell.getIndex() >= Capacity)
			return;
		delete reinterpret_cast<T *>(array[cell.getIndex()]);
		array[cell.getIndex()] = nullptr;
	}

private:
	void * array[CapacityValue] = {};
};

And you use it like:

VoodooPrison<5> prison;

MagicVoodooCell<int> cell = prison.allocateCell<int>();

int * ptr = prison.retrievePointer(cell);

prison.deallocateCell(cell);

// Note: ptr is now invalid

So essentially MagicVoodooCell<T> is a token that you keep around so you can access the data.

I could probably come up with something safer and more robust if given time, but like I say, I’ve got a strong feeling that you don’t actually need this mechanism. I suspect you’re trying to solve a particular problem and you don’t know which tool is right for the job (possibly because you haven’t yet encountered the right tool).
