I try to forget the ‘everything is an object’ approach even exists.
It’s a great marketing slogan but in practice it’s rarely ideal.
Also a lot of different languages have different ideas about what an object is.
The only thing they seem to be able to agree on is that objects have data (fields, member variables, properties) and operations (methods, member functions).
A lot of the time the language-level terminology drowns out the theory terminology.
Actually from the standard’s point of view, all instances of all types are objects.
The standard’s definition of ‘object’ isn’t “must have methods and member variables”; the standard’s definition of object is roughly “anything that exists as a block of data and can fit in RAM or a register”.
(Strictly speaking it’s a bit more precise than that.)
From an OOP purist point of view, both `struct` and `class` declare classes, and those classes are used to create objects.
Python’s ‘structures’ are illusions.
They’re basically syntactic sugar for what is effectively a class.
You could reimplement them using Python’s class functionality, assuming Python has some means of dynamic memory allocation - and if Python couldn’t do dynamic memory allocation, it couldn’t create class instances in the first place.
Originally C++'s stance was not to bother with excessive syntactic sugar and just give the programmer the tools to build any data structure (a philosophy it inherited from C),
but in C++11 they gave in and introduced list initialisation and user-defined literals,
and then in C++17 they gave in again and added ‘tuple syntax’ (a.k.a. structured bindings).
Internally, C++ and Python are drastically different beasts.
STL != Standard Library.
The STL (Standard Template Library) grew out of Alexander Stepanov’s research into generic programming (which began around 1979) and was developed into a popular C++ library in the early 1990s.
When the standards committee came to standardise C++, they looked at the STL and took a lot of inspiration from it, to the point that many of the early stdlib classes were almost identical. Hence people sometimes informally label the stdlib as ‘the STL’, but doing so is incorrect because they’re two different things.
The only time I can think of that you wouldn’t have it to hand is if you’re compiling for Arduino, and that’s partly because a lot of the data structures aren’t useful for AVR chips because they have such tiny amounts of RAM.
Ugliness is in the eye of the beholder.
Besides which, results and capabilities are far more important than not being ugly.
If you really dislike it that much then feel free to try to get D or Rust compiling for Pokitto, or write a C implementation of the PokittoLib, or stick to using MicroPython.
More’s the pity.
I find them useful:
Sometimes a picture says a thousand words.
Sometimes I use it, but I probably don’t do the relationships properly.
Often I just draw bubbles with class names and connect boxes with function names to the bubbles.
Takes me less time than typing class declarations usually.
Sometimes, but not always.
The code can document what it’s doing, but not why it’s doing it.
If there’s a large system in play then it usually needs some explanation.
I admit to not documenting my code nearly enough because I don’t often work in teams or expect my code to be read, but if I’m making a library rather than a program then I’ll try to put more effort into documenting.
(Small note, ‘code’ is uncountable - it’s ‘say good code is’, the ‘a’ is ungrammatical.)
Another way of looking at it: a good compiler removes unreachable code anyway.
And documentation isn’t always in the form of comments.
That one I partly agree with you. I don’t like tablets either.
I also hate the word ‘apps’. If I tell someone I’m a programmer and they say “oh, so you write apps” I stare at them intensely and say “no, I write programs”.
If Skyrim is wrong, I don’t want to be right. :P
Different people have different conventions.
Pick a convention and stick to it.
The same as tabs vs spaces, the same as brace style, the same as `++x` vs `x++`, etc.
This one’s easy on the Pokitto: exceptions are turned off :P
The best approach is to try your damnedest to avoid error conditions in the first place (hence my advocacy of references over pointers - you don’t have to worry about null, dead objects, or invalid pointers).
Otherwise, if you’re writing for desktop, it depends on how often you expect the condition to occur and whether you can handle it.
If it’s a rare condition or you can’t do anything about it then usually you want to use an exception.
For example, `new` throws an exception (`std::bad_alloc`) if there’s not enough RAM/swap left to allocate more memory.
The program cannot and should not attempt to do anything about this; the program should crash.
Fail fast - a crashed program cannot cause damage, a program running in a state of uncertainty can.
If it’s something frequent (e.g. ‘file not found’) then an error code is often better.
Newer C++ code will prefer to use `std::optional`, which is similar to how Haskell uses `Maybe`.
If you do use error codes, prefer to implement them as `enum class`es (formally ‘scoped enumerations’); don’t use `int`s and macros or unscoped enums, because those aren’t type safe.
That’s generally bad practice. (Not everybody writes good code.)
It’s ok if they’re logging the exception and then rethrowing, but otherwise cases where that’s warranted are few and far between.
It’s not about C++, it’s about C#, but one of my favourite articles about exceptions is Vexing exceptions by Eric Lippert.
He was on the C# compiler development team for several years.
Pointers can cause damage if abused, should we get rid of them?
Macros can cause damage, even unintentionally, should we get rid of them?
Every feature has a use case, every feature can be abused.
If you rule out a feature because it has the potential to be abused,
both C and C++ would be very bare languages.
The point of both C and C++ is not to hold the programmer’s hand.
They exist to allow the programmer to do dangerous things.
They trust that the programmer knows what they’re doing.
Sometimes the programmer knows exactly what they’re doing and they create something glorious.
Sometimes the programmer is an idiot and they break everything in the most horrible way imaginable.
Yep, that happens. There’s no avoiding it.
Some libraries opt for no exceptions because they want to be available for lower powered stuff, some do it because they’re used to C style and don’t know any better, and some people object to exceptions because they think they’re slow (they used to be, but not any more - modern ‘zero-cost’ implementations add no overhead until an exception is actually thrown).
Equally, sometimes people pick exceptions for things that shouldn’t be exceptions.
You’ll find bad design in libraries for any language.
Nope. I thought about learning it once, but from what I’ve heard of it (code is data and data is code, among other things) I decided to put it near the bottom of my list.
It looks weird and cryptic, and I say that as someone who sometimes uses Haskell and has previously used Perl.
I don’t deny that Haskell can be sort of elegant when you understand it, but I’d never try to write something substantial with it.
I find that the elegance of a language has very little bearing on how usable it is.
Define ‘everyone’.
Part of the reason I opted to learn it was because it looks absolutely horrifying and I wanted to be able to scare people.
Most of the people I’ve known take one look at it and say “what the hell is that?” (which I’ll admit to doing both before and after learning it).
I think the main reasons it’s not more popular are:
- It’s full of undescriptive, short variable names
- There’s a ridiculous amount of over-abstraction
- Unless you’re mathematically inclined, it can be quite difficult
- Programming without side effects can be really difficult, even if you actually understand monads
- All looping is done with recursion, which is only more elegant than iteration some of the time; other times it’s actually more confusing
- No function overloading
- It can be very slow and memory hungry
- Instead of building new features into the language, it likes to make those features optional, so you have to manually enable them with a `LANGUAGE` pragma in the source file
- Too many operators
- Any string of symbols can be turned into an operator
- For a supposedly elegant language it can be very verbose at times
```haskell
instance Monad NonEmpty where
  -- Guess which operator this is actually implementing
  ~(a :| as) >>= f = b :| (bs ++ bs')
    where b :| bs = f a
          bs' = as >>= toList . f
          toList ~(c :| cs) = c : cs
```

```haskell
instance (Monoid a, Monoid b) => Monoid (a,b) where
  mempty = (mempty, mempty)

instance (Monoid a, Monoid b, Monoid c) => Monoid (a,b,c) where
  mempty = (mempty, mempty, mempty)

instance (Monoid a, Monoid b, Monoid c, Monoid d) => Monoid (a,b,c,d) where
  mempty = (mempty, mempty, mempty, mempty)

-- Repeat pattern ad nauseam for all sizes of tuple
```
But I still use it sometimes for data processing and maths stuff :P
If you’re interested in learning it at all, there’s no contest when it comes to tutorials.
LYAH(FGG): http://learnyouahaskell.com/chapters
I’m not sure I agree with that, but I would concur that Haskell’s basic console IO system is annoyingly esoteric and hard to work with, even with do notation.
Seconded. @FManga’s 3rd opinion was greatly appreciated.
Also despite continual disagreement, everything’s managed to stay civil, which is practically a miracle on the internet :P
Those rules are features.
If you started stripping them away, something would have to suffer for it.
I’ve yet to see someone propose a better alternative that still maintains the existing features.
They aren’t always used. There’s more than one way to do it.
The whole “OOP is this and must be programmed like this” that a lot of people seem to believe is utter hokum.
We’ve moved on since then.
People have realised that there are use cases and there are tools and you use the right tool for the right job, if you just adhere to a dogma then you rarely end up with the best result.
If used incorrectly.
Using inheritance to inherit a circle from an ellipse is an example of naive design.
That’s an old-fashioned ‘purist’ OOP approach, the world has moved on since then and realised that you shouldn’t use inheritance just because two things are related in the human’s mental model of them.
The problem here actually lies in a flawed mental model.
A lot of humans tend to think ‘a circle is a special kind of ellipse’,
but actually being a circle is a property of an ellipse.
Humans just use the ‘circle’ label for convenience.
Also, rather than just relying on a human mental model, what should really be considered instead is the use case and what the inheritance actually achieves.
Making Circle a subclass of Ellipse doesn’t really achieve anything that couldn’t be achieved better by making `isCircle` a property of the Ellipse (i.e. via a member function).
When deciding whether to give something inheritance, the Liskov substitution principle should be one of the main guiding forces behind the decision,
not a flawed human mental model of ‘dog is a mammal’ or ‘Ferrari is a car’, because that’s too simplistic and theoretical - it’s not based on the constraints of the domain.
A lot of programmers only bother to learn how to use templates and never bother to write them, and it doesn’t burden them at all.
I beg to differ.
Templates actually help to solve certain problems faster.
Take `std::vector` for example.
Using templates, `std::vector` operates on all types. Without templates, you’d need to reimplement `std::vector` for every single type that you needed a vector for.
You’d have to have a `std::vector_int`, `std::vector_bool`, `std::vector_char` etc, all written in full.
With templates, you write once and it works for every type that meets the required constraints (e.g. the type needs to be copyable).
Here’s another example. In C++, if you want to know if two types are the same, you `#include <type_traits>` and write `static_assert(std::is_same<TypeA, TypeB>::value, "They aren't the same");`, and if you get a compiler error, the types aren’t the same.
I can’t name any other language that lets you do that.
That’s just the tip of the iceberg in terms of power.
Templates are one of my favourite features of C++ because of the sheer amount of capability they add to the language.
Admittedly the balance will shift slightly in C++20 when concepts (similar to Haskell type classes) are finally added, but templates are nevertheless incredibly useful.
As much as I immensely dislike both JavaScript and PHP, I can’t think of any particular reason why JavaScript couldn’t/shouldn’t be used for the backend of a website.
Perl used to have that role once upon a time.
Though I admit that I don’t like the idea of JavaScript being used to build desktop programs, but that’s more because I think JavaScript’s a bad language.
It’s too weakly typed and has too many arbitrary rules:

- `[] + [] -> ""` - array plus array is the empty string
- `[] + {} -> "[object Object]"` - array plus object is a string?
- `{} + [] -> 0` - object plus array is a number?
- `{} + {} -> NaN` - object plus object is… not a number…
- `Array(8).join("wat" - 1) + " Batman!" -> "NaNNaNNaNNaNNaNNaNNaN Batman!"` - string minus number is not a number!

(All examples taken from Gary Bernhardt’s 2012 talk “Wat”, as they behave in a browser console.)