Pharap & Drummyfish's Ramblings


I try to forget the ‘everything is an object’ approach even exists.
It’s a great marketing slogan but in practice it’s rarely ideal.

Also a lot of different languages have different ideas about what an object is.
The only thing they seem to be able to agree on is that objects have data (fields, member variables, properties) and operations (methods, member functions).
A lot of the time the language-level terminology blows away the theory terminology.

Actually from the standard’s point of view, all instances of all types are objects.
The standard’s definition of ‘object’ isn’t “must have methods and member variables”, the standard’s definition of object is roughly “anything that exists as a block of data and can fit in RAM or a register”.
(Strictly speaking it’s a bit more precise than that.)

From an OOP purist point of view, both struct and class declare classes and those are used to create objects.

Python’s ‘structures’ are illusions.
They’re basically syntactic sugar for what is effectively a class.
You could reimplement them using Python’s class functionality assuming python had some means of dynamic memory allocation.
If python can’t do dynamic memory allocation

Originally C++'s stance was not to bother with excessive syntactic sugar and just give the programmer the tools to build any data structure (a philosophy it inherited from C),
but in C++11 they gave in and introduced list initialisation and user-defined literals,
and then in C++17 they gave in again and added ‘tuple syntax’ (a.k.a. structured binding.)

Internally, C++ and Python are drastically different beasts.

STL != Standard Library.

The STL (Standard Template Library) grew out of Alexander Stepanov’s work on generic programming, which began around 1979, and was developed into a popular C++ library in the early 1990s.
When the standards committee came to standardise C++, they looked at the STL and took a lot of inspiration from it, to the point that many of the early stdlib classes were almost identical. Hence people sometimes informally label the stdlib as ‘the STL’, but doing so is incorrect because they’re two different things.

The only time I can think that you wouldn’t have it to hand is if you’re compiling for Arduino, and that’s partly because a lot of the data structures aren’t useful for AVR chips because they have such tiny amounts of RAM.

Ugliness is in the eye of the beholder.
Besides which, results and capabilities are far more important than not being ugly.

If you really dislike it that much then feel free to try to get D or Rust compiling for Pokitto, or write a C implementation of the PokittoLib, or stick to using MicroPython.

More’s the pity.

I find them useful:

Sometimes a picture says a thousand words.

Sometimes I use it, but I probably don’t do the relationships properly.
Often I just draw bubbles with class names and connect boxes with function names to the bubbles.
Takes me less time than typing class declarations usually.

Sometimes, but not always.
The code can document what it’s doing, but not why it’s doing it.
If there’s a large system in play then it usually needs some explanation.

I admit to not documenting my code nearly enough because I don’t often work in teams or expect my code to be read, but if I’m making a library rather than a program then I’ll try to put more effort into documenting.

(Small note, ‘code’ is uncountable - it’s ‘say good code is’, the ‘a’ is ungrammatical.)

Another way of looking at it, a good compiler always removes unreachable code.
And documentation isn’t always in the form of comments.

On that one I partly agree with you. I don’t like tablets either.
I also hate the word ‘apps’. If I tell someone I’m a programmer and they say “oh, so you write apps” I stare at them intensely and say “no, I write programs”.

If Skyrim is wrong, I don’t want to be right. :P

Different people have different conventions.
Pick a convention and stick to it.
The same as tabs vs spaces, the same as brace style, the same as ++x vs x++ etc.

This one’s easy on the Pokitto: exceptions are turned off :P.

The best approach is to try your damnedest to avoid error conditions in the first place (hence my advocacy of references over pointers - you don’t have to worry about null or dead objects or invalid pointers).

Otherwise, if you’re writing for desktop, it depends on how often you expect the condition to occur and whether you can handle it.

If it’s a rare condition or you can’t do anything about it then usually you want to use an exception.
For example, new throws an exception if there’s not enough RAM/swap left to allocate more memory.
The program cannot and should not attempt to do anything about this, the program should crash.
Fail fast - a crashed program cannot cause damage, a program running in a state of uncertainty can.

If it’s something frequent (e.g. ‘file not found’) then an error code is often better.
Newer C++ code will prefer to use std::optional which is like how Haskell uses Maybe.
If you do use error codes, prefer to implement them as enum classes (formally scoped enumerations), don’t use ints and macros or unscoped enums because those aren’t type safe.

That’s generally bad practice. (Not everybody writes good code.)
It’s ok if they’re logging the exception and then rethrowing, but otherwise cases where that’s warranted are few and far between.

It’s not about C++, it’s about C#, but one of my favourite articles about exceptions is Vexing exceptions by Eric Lippert.
He was on the C# compiler development team for several years.

Pointers can cause damage if abused, should we get rid of them?
Macros can cause damage, even unintentionally, should we get rid of them?

Every feature has a use case, every feature can be abused.
If you rule out a feature because it has the potential to be abused,
both C and C++ would be very bare languages.

The point of both C and C++ is not to hold the programmer’s hand.
They exist to allow the programmer to do dangerous things.
They trust that the programmer knows what they’re doing.
Sometimes the programmer knows exactly what they’re doing and they create something glorious.
Sometimes the programmer is an idiot and they break everything in the most horrible way imaginable.

Yep, that happens. There’s no avoiding it.

Some libraries opt for no exceptions because they want to be available for lower-powered hardware, some do it because they’re used to C style and don’t know any better, and some people object to exceptions because they think they’re slow (they used to be, but not any more, because people got smarter and invented better exception implementations).

Equally, sometimes people pick exceptions for things that shouldn’t be exceptions.
You’ll find bad design in libraries for any language.

Nope. I thought about learning it once, but from what I’ve heard of it (code is data and data is code, among other things) I decided to put it near the bottom of my list.

It looks weird and cryptic, and I say that as someone who sometimes uses Haskell and has previously used Perl.

I don’t deny that Haskell can be sort of elegant when you understand it, but I’d never try to write something substantial with it.

I find that the elegance of a language has very little bearing on how usable it is.

Define ‘everyone’.
Part of the reason I opted to learn it was because it looks absolutely horrifying and I wanted to be able to scare people.
Most of the people I’ve known take one look at it and say “what the hell is that?” (which I’ll admit to doing both before and after learning it).

I think the main reasons it’s not more popular are:

  • It’s full of undescriptive, short variable names
  • There’s a ridiculous amount of over-abstraction
  • Unless you’re mathematically inclined, it can be quite difficult
  • Programming without side effects can be really difficult, even if you actually understand monads
  • All looping is done with recursion, which is only more elegant than corecursion some of the time, other times it’s actually more confusing
  • No function overloading
  • It can be very slow and memory hungry
  • Instead of building new features into the language, it likes to make those features optional so you have to manually enable them using a specialised declaration within the code
  • Too many operators
  • Any string of symbols can be turned into an operator
  • For a supposedly elegant language it can be very verbose at times
instance Monad NonEmpty where
  -- Guess which operator this is actually implementing
  ~(a :| as) >>= f = b :| (bs ++ bs')
    where b :| bs = f a
          bs' = as >>= toList . f
          toList ~(c :| cs) = c : cs

instance (Monoid a, Monoid b) => Monoid (a,b) where
        mempty = (mempty, mempty)

instance (Monoid a, Monoid b, Monoid c) => Monoid (a,b,c) where
        mempty = (mempty, mempty, mempty)

instance (Monoid a, Monoid b, Monoid c, Monoid d) => Monoid (a,b,c,d) where
        mempty = (mempty, mempty, mempty, mempty)

-- Repeat pattern ad nauseum for all amounts of tuples

But I still use it sometimes for data processing and maths stuff :P

If you’re interested in learning it at all, there’s no contest when it comes to tutorials.

I’m not sure I agree with that, but I would concur that Haskell’s basic console IO system is annoyingly esoteric and hard to work with, even with do notation.

Seconded. @FManga’s 3rd opinion was greatly appreciated.

Also despite continual disagreement, everything’s managed to stay civil, which is practically a miracle on the internet :P

Those rules are features.
If you started stripping them away, something would have to suffer for it.

I’ve yet to see someone propose a better alternative that still maintains the existing features.

They aren’t always used. There’s more than one way to do it.

The whole “OOP is this and must be programmed like this” that a lot of people seem to believe is utter hokum.

We’ve moved on since then.
People have realised that there are use cases and there are tools and you use the right tool for the right job, if you just adhere to a dogma then you rarely end up with the best result.

If used incorrectly.

Using inheritance to inherit a circle from an ellipse is an example of naive design.
That’s an old-fashioned ‘purist’ OOP approach, the world has moved on since then and realised that you shouldn’t use inheritance just because two things are related in the human’s mental model of them.

The problem here actually lies in a flawed mental model.
A lot of humans tend to think ‘a circle is a special kind of ellipse’,
but really, being a circle is a property of an ellipse.
Humans just use the ‘circle’ label for convenience.

Also, rather than just relying on a human mental model, what should really be considered instead is the use case and what the inheritance actually achieves.
Making Circle a subclass of Ellipse doesn’t really achieve anything that couldn’t be achieved better through making isCircle a property of the Ellipse (i.e. via a member function).
When deciding whether to give something inheritance, the Liskov substitution principle should be one of the main guiding forces behind the decision,
not a flawed human mental model of ‘dog is a mammal’ or ‘ferrari is a car’ because that’s too simplistic and theoretical - it’s not based on the constraints of the domain.

A lot of programmers only bother to learn how to use templates and never bother to write them, and it doesn’t burden them at all.

I beg to differ.
Templates actually help to solve certain problems faster.

Take std::vector for example.
Using templates, std::vector operates on all types. Without templates, you’d need to reimplement std::vector for every single type that you needed a vector for.
You’d have to have a std::vector_int, std::vector_bool, std::vector_char etc, all written in full.
With templates, you write the code once and it works for every type that meets the required constraints (e.g. the type needs to be copyable).

Here’s another example. In C++, if you want to know if two types are the same, you #include <type_traits> and write static_assert(std::is_same<TypeA, TypeB>::value, "They aren't the same"); and if you get a compiler error, the types aren’t the same.
I can’t name any other language that lets you do that.

That’s just the tip of the iceberg in terms of power.
Templates are one of my favourite features of C++ because of the sheer amount of capability they add to the language.

Admittedly the balance will shift slightly in C++20 when concepts (similar to Haskell type classes) are finally added, but templates are nevertheless incredibly useful.

As much as I immensely dislike both JavaScript and PHP, I can’t think of any particular reason why JavaScript couldn’t/shouldn’t be used for the backend of a website.
Perl used to have that role once upon a time.

Though I admit that I don’t like the idea of JavaScript being used to build desktop programs, but that’s more because I think JavaScript’s a bad language.

It’s too weakly typed and has too many arbitrary rules:

  • [] + [] -> "" Array plus array is empty string
  • [] + {} -> {} Array plus object is object
  • {} + [] -> 0 Object plus array is number?
  • {} + {} -> NaN Object plus object is… not a number…
  • Array(8).join("wat" - 1) + " Batman!" -> "NaNNaNNaNNaNNaNNaNNaN Batman!" String minus number is not a number!
    (All examples taken from Gary Bernhardt’s 2012 talk “Wat”.)


Didn’t know, thanks!

No, it looks like it because here I specifically focus on what I dislike, but I’m going to keep coding in C++.

True, but code is usually comparably good (it’s designed to capture algorithms and data structures, after all) while being much easier to create, search, and process automatically.

If an image really helps, I first try to make an ASCII version :blush: Then if it doesn’t work I create a real image.

good :smiley:

No, they’re necessary - exceptions we can do without.

I’d rather say “how used” than “how usable” it is. The less used tools (Haskell as opposed to C++, GNU/Linux as opposed to Windows) are in my view superior by pure usability, but the actual popularity is a function of many more factors than just pure usability - more raw features win over elegance in the market, the power of the creator to push the technology matters very much, as do the marketing and the support of hardware manufacturers. And then of course, popularity is a function of popularity (people make libraries for popular languages, which in turn makes them more popular).

This is what it all boils down to - I’m an idealist, I’d like to have all my software free, elegant, minimal… even for the price of convenience, productivity. These are simply my personal values. On the other hand, as you say, someone else sees the beauty in the results and pure performance, not the internals. So… yeah :slight_smile:

Well yeah, it’s feature creep. You can’t easily get rid of them once they’re there, but that’s where we are.

Yep, I appreciate this. You have much more experience with C++ and have a lot more to say about it, but I think the opinions and feelings of average programmers like myself are important too (especially when C++ is being used in projects targeted at learners), so I try my best to put them into words and leave them here, and let the expert comment on them.

So thanks for that :slight_smile:


Ok, you’re looking at programming languages with a mathematical definition of elegance. Here I’d agree with you: C++ is inelegant because it is not the simplest possible solution to a problem. That’s alright, it’s not meant to be that. The only programming language with that goal is this.

Instead, you could compare it to spoken languages. C++'s many rules can be a strength in the same way that having a large English vocabulary will help you better express yourself. In the same way, you have to do so carefully, taking the listener (the compiler and other people who will read your code) into consideration. You have no obligation to use your entire vocabulary when talking to someone, nor do you have to use every feature of the language you’re programming in.

Ah, JavaScript. If C++ is inelegant because it has too much stuff, that would make JS elegant because you use one thing to do two things.

First of all, in making desktop programs these unexpected features of the type system (almost?) never show up. In practice it’s pretty good, and is only a problem in two situations: Humorous / Python / Douglas Crockford talks and low-level code like emulators.

I like to tease my co-workers by saying Python has no reason to exist (tongue in cheek, it doesn’t really solve any problems that weren’t solved by other languages already), and Douglas Crockford bashes the language because it earns him money.

As for low-level code, something JS was clearly not meant for, the type system is workable and I actually prefer it to Java’s. No silly boxed and unboxed types that can’t be stored in a container. That is a bad language.

I’d say JS is elegant because the examples of strange additions can be understood knowing just two rules:

  • JS does not have explicit casts, but operators work on defined types so they cast things into types they can work with. When writing high-level code, you’re not too concerned about how, rather what the code does and it ends up working alright in practice even if it looks odd in contrived examples.
    The binary + operator is defined for two numbers or two strings. Anything else gets converted into a string and concatenated. Pretty simple and I prefer it to PHP’s alternative.
    In [] + [], arrays aren’t numbers or strings, so toString gets called on each. Arrays have a toString overload that returns its contents, separated by commas ([1,2,3] becomes "1,2,3"). An empty array becomes an empty string and the result is hardly surprising.
    In [] + {} the result is not {}. Like before, toString gets called and the default result is “[object Object]”. You can overload toString to do something more useful, of course. Adding that to an empty string results in simply “[object Object]”.
  • Just like in C++, the meaning of “{}” depends on context. In C++ it can be an initializer list or a code block. In JS it is either an object or a code block. In {} + [], the parser understands it as an empty code block. That leaves + [] and the unary + operator only makes sense with numbers. Since it can’t work with anything else, like before, it calls toString and gets “”. It still can’t work with that, so it casts the string to Number and that returns 0. A bit surprising, but not the end of the world. It can’t cast “[object Object]” to Number, so no surprise: NaN.

The binary - operator is only defined for numbers, so trying to convert “wat” to a number results in NaN and the result is NaN. This is not surprising, what else could it reasonably do?

Misleading and confusing code can be written in any language. I find that people who dislike JS’s “quirks” do so because they are surprised it doesn’t behave like the languages they are used to. But if all languages behaved the same way, they’d all be the same language with minor syntax variations. Pointless… like Python.


Yes, it’s probably this. Like a basic framework on lambda calculus :slight_smile:

For readers who don’t know esolangs: behold.

I can do that!

  • My native language (Czech) totally sucks, it’s one of the most difficult to learn, but it certainly doesn’t give you more expressive power. It’s simply bad. Still very popular in my country :slight_smile:
  • English sucks a bit less, but still big time. A lot of irregularities, multiple names for the same thing (coincidentally @Pharap just recently told me you can say quadrilateral or tetragon), multiple unrelated meanings of words depending on context (“can”), ambiguous sentences, weird rules with a lot of exceptions, …
  • Esperanto is beautiful - not perfect (probably still can form ambiguous sentences) but much closer, mostly thanks to scratching everything old and going for a new design. It’s completely regular, the same types of words have the same suffix (noun: o, verb: i, …), words are formed from a few basic words, etc. All that while keeping the expressive power (which is proven e.g. by the big amount of both translated and original literature). Still, it’s the least popular so far.
  • Lojban is probably beautiful too, but I don’t know much about it.


I don’t have too much experience, but kind of liked it. Though I never used it for any big program, to me it’s basically a prototyping language.


If a language can prove a concept with a prototype, hasn’t it also proven itself capable of the final product?


But to understand code you often have to read through several source files to understand how it all links together.
If you take a diagram that explains the algorithm, it’s often quicker and easier to understand than the code implementing it.

For example, the common tree:

That diagram is quicker and easier to understand than:

#include <stdlib.h>

struct Node
{
	int value;
	struct Node * leftChild;
	struct Node * rightChild;
};

struct Node * createNodeValue(int value)
{
	struct Node * node = malloc(sizeof(struct Node));
	node->value = value;
	node->leftChild = NULL;
	node->rightChild = NULL;
	return node;
}

void destroyNode(struct Node * node)
{
	free(node);
}

struct Node * buildTree(void)
{
	struct Node * root = createNodeValue(2);
	root->leftChild = createNodeValue(7);
	root->rightChild = createNodeValue(5);
	root->leftChild->leftChild = createNodeValue(2);
	root->leftChild->rightChild = createNodeValue(6);
	root->leftChild->rightChild->leftChild = createNodeValue(5);
	root->leftChild->rightChild->rightChild = createNodeValue(11);
	root->rightChild->rightChild = createNodeValue(9);
	root->rightChild->rightChild->leftChild = createNodeValue(4);
	return root;
}

In my experience, most ASCII diagrams tend to be confusing.

They also tend to be mislabelled because they resort to using characters that are only found in extended variants of ASCII, like those found in code page 437.

It depends on the goal.
If you aren’t using dynamic allocation then you can usually do without pointers.

How much have you actually tried to write in Haskell?

It looks pretty, but it quickly becomes difficult to use for anything more complicated than tiny console programs.
I wouldn’t like to write a game with Haskell, I don’t think my brain could take it.
If Java’s motto is “everything’s an object”, Haskell’s motto is “everything’s immutable”.

Ironically a lot of the popular programming languages are ugly in one way or another,
but that ugliness often makes them easier to use.
A lot of really ‘elegant’ languages don’t get used because they usually trade away usability for elegance.

In the words of Bjarne Stroustrup:

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

That is true, but I’d say usability is a big one.

If anything I’d say the availability of tools is also a critical factor.
One of the reasons people use C and C++ a lot is because there are many available compilers that target a lot of different systems.
The way modern compilers are designed, it’s usually relatively easy for people to add compiler support for a new processor because they don’t have to reimplement the language.

I don’t think marketing is that much of a contributor.
Once upon a time COBOL was strongly marketed, and now it’s almost abandoned.
People abandoned it because of its limitations. Other languages came along that could do the same job and do it better.

Popularity is indeed a feedback loop.
If people use a language and enjoy it, they spread the word about it.
If they don’t enjoy it or don’t like it much, they don’t talk about it.
I don’t talk about Haskell as much as I talk about C# or C++ because I find the latter two to be more usable.

People don’t opt to write a library in a language just because it’s popular,
they often opt to write a library in a language if they like that language.

Once upon a time, I wrote an SDL wrapper for C# because I like SDL and I wanted to be able to use it with C# because I like C#.
It was a lot of work, but my love of the language justified it.
If I was driven by popularity then I would have figured out how to do the same in Java or Python.

I’m a mix of both.
I believe that code should look good because code that looks good is usually more readable,
but there must be a balance between looking good and functionality.

If there’s a conflict though, realism wins out every time.
If I got too hung up on making things pretty then I’d never get anything done.

Elegance is in the eye of the beholder.

I’ve seen some people claim that Java is beautiful and others claim that it’s inelegant and needlessly verbose.

You say C++ isn’t elegant, but I think it can be more elegant than C depending on how the code is written, and it’s becoming more elegant with every new standard.

You could see it that way, but I think feature creep is defined by how often a feature is used, and personally I use a lot of those features, especially templates and const correctness.
A lot of the time not having them would make my life harder.

If you’ve looked around here, you’ll see how often I’ve said that not having access to C++11 features horribly limits the language.
C++11 introduced a lot of features, but those features didn’t creep in, they were designed to be useful and they are useful.

I won’t deny, C++ is not the best language to learn as a first language for the same reason C isn’t - it requires the programmer to understand the hardware to some degree, e.g. what RAM is (for pointers).

But when it comes to embedded systems, it’s the best choice because it has the right balance of structure and producing small code.
Most languages with comparable features to C++ produce larger or slower machine code, or require some kind of GC or overhead for their type system.

Also the goals of a beginner are different to that of someone who is experienced.
A lot of beginners like dynamic, weakly typed languages because they’re forgiving of mistakes.
Experienced programmers usually learn to dislike those things because they come to decide that a language that refuses to compile buggy code is better than having to hunt down bugs in code the compiler happily accepted.

While I think of it, part of the reason exceptions have taken over from error codes in most languages is because error codes tend to clutter the code.

C style code with return codes:

int writeData(void)
{
	int error;

	struct File * file = openFile("filename", "w");
	if(file == NULL)
		return ERROR_NULL;

	error = writeSomeData(&buffer, sizeof(buffer) / sizeof(buffer[0]));
	if(error != SUCCESS_CODE)
		return error;

	error = writeSomeData(&otherBuffer, sizeof(otherBuffer) / sizeof(otherBuffer[0]));
	if(error != SUCCESS_CODE)
		return error;

	return fileClose(file);
}

C++ style with exceptions:

void writeData()
{
	File file = file_system::open("filename", file_mode::write);
	file.write(buffer);
	file.write(otherBuffer);
	// The destructor closes the file; failures propagate as exceptions.
}

In the C++ version, all the error logic is implicit and the file will clean itself up,
reducing the cognitive burden on the programmer and reducing development time.

C++ likes to put the pressure on the library writers to design their code well so that library users don’t have as much to worry about.


I really like this analogy.

Boxing and unboxing is actually quite common in GC’ed languages.
You’ll find it in Java, C# and Haskell.

Although C#'s type system allows containers to hold unboxed types.
The reason Java can’t is because it implements generics using type erasure instead of reified generics, and type erasure is an inferior system.

I have no idea what JavaScript does instead.

I don’t doubt that the rules make sense in context, but I question the wisdom of the rules in the first place.

Is implicit conversion to string common enough to warrant making it so ubiquitous?
I’d prefer a language that forces someone to write [].toString() + {}.toString() (or toString([]) + toString({}) if you’d rather) to achieve the same result and regards [] + {} as a type mismatch error.

I’m not a big lover of Python, but I generally agree with the sentiment that “explicit is better than implicit” when it comes to type conversion.
There are exceptions (e.g. a reference of a child class being implicitly convertible to the parent class), but overall it’s better to be explicit about what’s happening.

For me that’s a big negative. I greatly dislike implicit casts.

That said, I think there needs to be a balance, even if the right balance is hard to find.
Fundamental types (integers, floats, etc) should have implicit widening conversion.
JavaScript goes too far towards looseness.
Haskell goes too far the other way and is too strict about casting.

Fair enough, I misinterpreted what was going on.
I thought the REPL was calling toString in order to print the object and tried to ‘reverse’ that.

Syntax/type error as it would be in most of the other languages I’ve mentioned.

It depends on your background.
Most of the programmers I know well didn’t go into programming from a mathematics background and get on better with procedural languages than functional languages.

Personally I have very mixed feelings about maths.
In terms of notation I like things like set theory and group theory, but most mathematical notation I find to be too cryptic or confusing and I absolutely detest the wordy pseudo-latin that most advanced mathematics is described with.

Websites like ‘maths is fun’ show that maths can be explained in a simple and accessible way.
But mathematical papers are always written in a stupid way, like:

The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures.

I feel like maths could be a lot more widely understood, but there’s too much pomposity and esotericism around it which makes it difficult for people to get to grips with it.

My favourite esolang is probably False.
Though I don’t like some of the decisions, like not having escape codes,
and ø being one of the symbols (I usually substitute £ when I write implementations).

LOLCODE is fun too.

I’ve heard quite a few people say that about their native language (e.g. German).

Most of them are because of foreign influence and evolution.

In the beginning it was mainly just Latin and Greek influence, but over the years there have been tribes and wars and conquests, so it’s been influenced by archaic Germanic languages and Old French and all sorts of other things.

The best way to be good at English is to understand the etymology of words. Despite the number of influences, there are clear patterns and common prefixes and suffixes.

A lot of the exceptions come from either foreign influence (e.g. borrowing terms from other languages) or people going out of their way to buck trends.

I’d go careful with making the comparison between spoken languages and programming languages too literal.
Human language adapts and evolves to express a near unbounded amount of thoughts and feelings.
Programming languages only have to express programs - they have a niche job.

If English were a programming language, every word would have to be defined like a function or class, so theoretically everyone speaking English would be constantly importing words from libraries.

For example, how many common words were invented and defined by Shakespeare?
If English were a programming language I’d have to prefix every comment with import Shakespeare;.

(Because of this, it’s easier to pick up new programming languages than new spoken languages.)

That sounds like a recipe for creating ridiculously long words to express a simple concept.

You’d probably have to say the equivalent of “the smell of dust after rain” instead of being allowed to say “petrichor”,
or “the lump of wax formed in a whale” instead of “ambergris”.

That’s kind of how I treat Haskell for most things.

Not necessarily, prototypes tend to be noticeably flawed.