I enjoy it for this reason as well. Would recommend. The videos are nice.
Hey, you want to learn C first, then move into Rust for embedded systems, which is perfect. As for the Python suggestions: you already understand the basic concepts, and your goal is embedded programming, so I don’t think you need to learn Python first. Here are some steps that will help you learn C step by step:
- Learn the basics.
- Learn memory and pointers.
- Start thinking about embedded systems.
- Move to Rust.
Here are some books or resources for a good learning path:
- Learn C the Hard Way
- Head First C
- Arduino or STM32 tutorials
- The Embedded Rust Book
Start with Learn C the Hard Way; it will teach you the practical syntax and basic concepts. Then move to Head First C for a deeper understanding, and on to Arduino or STM32 tutorials for hands-on hardware practice. Finally, work through The Embedded Rust Book to learn Rust for microcontrollers.
Hope this will help you.
K&R is how I would go.
Given that your objective is C, I would jump right into it. That K&R book has everything you will need. C is a simple and small language, and the library is also very small. Learn how the library functions work and rewrite them. Rewriting sprintf would be your holy grail. If you can do that, you will know most of what there is to know about the language. But start small, with something like atoi, then atof, etc.
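For example, a first cut at atoi might look something like this (a minimal sketch that ignores overflow and locales, using the hypothetical name my_atoi so it doesn’t clash with the standard library):

```c
#include <ctype.h>

/* Minimal atoi-style conversion: skip leading whitespace, handle an
   optional sign, then accumulate decimal digits. Overflow is ignored. */
int my_atoi(const char *s)
{
    while (isspace((unsigned char)*s))
        s++;

    int sign = 1;
    if (*s == '+' || *s == '-') {
        if (*s == '-')
            sign = -1;
        s++;
    }

    int value = 0;
    while (isdigit((unsigned char)*s)) {
        value = value * 10 + (*s - '0');
        s++;
    }
    return sign * value;
}
```

Compare its output against the real atoi on a handful of inputs, then work your way up to atof and eventually sprintf.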
C++ on the other hand is a hydra, but if you got a solid C foundation you will be fine.
Not… Quite, seeing as C99, C11 and C23 exist now. But it is a good start nonetheless.
Still, a lot has happened in C since K&R. Don’t fear the changes.
LOL, I guess I forgot. I was introduced to it in the ’80s and did not keep up with all the changes since then. What I liked about it is that it was minimalist, yet as deep as you needed it to be.
I actually ended up picking up several “Learn The Hard Way” books; they have been great. I have collected quite a few books via Humble Bundle, so I have a great stack of resources. Once I am finished moving I will be able to devote time to that again.
“K&R” is really amazing for starting your journey, but I recommend watching some tutorials on YouTube or Udemy to see the concepts applied in a demo or real project before jumping into books.
I know you won’t fully understand the concepts from projects alone, but you will know which concept solved which problem, so when you later read the books you will understand things better.
Trust me, this method really works. I recommend you try it once.
I learnt Python before C and C before Rust. And a bunch of other languages in there. It was a fine progression.
I live in *nix land so this perspective reflects that (I don’t know if your embedded aspirations involve running under Linux or not).
I love K&R, I found it one of the most effective teaching tools around for learning a programming language. The exercises are really well integrated into the text and it’s just the right level of conciseness. That said it does say this in the preface:
The book is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help.
If your goal is to be a Rust developer, and you are learning C to help along the way, and then your C book says learn Python first, I feel this learn Python to learn C to learn Rust pipeline might be a bit longer than it needs to be.
I’d go all in on C, read a good book and do all the exercises in it, eventually get a copy of a C standard (drafts were free online) - I found C A Reference Manual to be quite good. I wouldn’t overly stress about using the latest C standard, C just doesn’t change much.
This is really important: learn to use Valgrind (or an equivalent tool) when you write C.
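For example, a tiny leak like the one below is exactly the kind of thing Valgrind catches (the program itself is a made-up sketch):

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate a buffer and "forget" to free it. */
    char *buf = malloc(64);
    if (buf == NULL)
        return 1;
    strcpy(buf, "leaked");
    return 0;   /* no free(buf) -- Valgrind reports this block as definitely lost */
}
```

Running it under something like valgrind --leak-check=full ./a.out reports the 64 bytes as definitely lost, with a backtrace pointing at the malloc call.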
Writing C requires more than just learning the language; the platform APIs need to be learned too. There are a few places to learn such things, depending on your OS/hardware. For me in *nix land I liked http://www.apuebook.com/, and the smaller Beej's Guide to Network Programming is good too.
Once you can write passable C real world programs (no small feat! You’ll become very familiar with the term UB), Rust’s choices will make a lot of sense and you’ll be able to write mixed Rust/C projects.
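For a taste of what UB looks like, here is a deliberately broken sketch; each marked line compiles cleanly but has undefined behaviour:

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Deliberately broken: each marked line is undefined behaviour. */
int main(void)
{
    int a[4] = {1, 2, 3, 4};
    printf("%d\n", a[4]);        /* UB: reads one element past the end of the array */

    int x = INT_MAX;
    x = x + 1;                   /* UB: signed integer overflow */
    printf("%d\n", x);

    int *p = malloc(sizeof *p);
    free(p);
    /* *p = 7; */                /* UB: use after free (left commented out) */

    return 0;
}
```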
It’s tempting to suggest just learning Rust straight off, but C is very wide spread, I think all Rust code I’ve written interacts with it in some fashion.
This is also something to consider:
A language that can be scripted (e.g., Perl, Python, PowerShell, Bash, etc.) will have real-world, usable application in your day-to-day computer janitor work.
Learning by doing is frequently helpful, and languages like the above are useful in many day to day situations so you can get practice with small and then gradually larger things as you gain more experience.
Getting that dopamine hit of actually achieving something useful in real life is important starting out and C, for that, is brutal.
The truth is, heaps of code for your typical app could be written in Python (in fact, much of Eve Online is, and it scales massively) - there’s the old 90/10 rule in development:
- a typical program spends 90% of its time in 10% of the code. Most code in a typical app is glue code that deals with higher-level logic and then directs control to some small part of the codebase that is called hundreds of millions of times (e.g., texture lookup in a 3D game vs., say, flipping to a different frame in the buffer).
Even if you make the glue code (the 90% of the code-base) 10x faster, you’ve only gained about 9% in performance, at great cost to your sanity, time, etc. Spend some time profiling and then optimise the 10% that does all the work!
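To make that arithmetic concrete, here is a back-of-the-envelope sketch (just the 90/10 numbers above with an assumed 10x speedup on whichever part you optimise):

```c
#include <stdio.h>

int main(void)
{
    /* Amdahl-style arithmetic for the 90/10 rule.
       hot = fraction of runtime spent in the hot 10% of the code. */
    double hot = 0.90, cold = 0.10, speedup = 10.0;

    double optimise_hot  = cold + hot / speedup;   /* 0.10 + 0.09 = 0.19 */
    double optimise_cold = hot + cold / speedup;   /* 0.90 + 0.01 = 0.91 */

    printf("optimise the hot 10%% of the code:   runtime %.2f (about %.1fx faster)\n",
           optimise_hot, 1.0 / optimise_hot);
    printf("optimise the other 90%% of the code: runtime %.2f (about %.0f%% faster)\n",
           optimise_cold, (1.0 - optimise_cold) * 100.0);
    return 0;
}
```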
Sure, if you’re working in embedded, C or assembly is probably the end goal for size even if speed is not important, but just as important, if not more so, is learning algorithms and data structures.
Algorithms you can do in C relatively easily as it has the flow control, but data structures will be a complete pain in the arse as it barely has any data types and you’ll need to manage pointers everywhere. You’ll spend more time early on fighting compiler errors and runtime bugs than learning how to optimise your code or trying a better algorithm.
Fundamentally, pointers aren’t a complicated concept, but in reality there’s a million different ways to shoot yourself in the foot with them.
The software industry is rife with buggy exploitable software because C/C++ developers typically think they know how their memory management works better than they actually do.
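To see what “managing pointers everywhere” means in practice, even a bare-bones singly linked list makes you handle allocation, ownership and cleanup by hand (a minimal sketch with a made-up node type):

```c
#include <stdlib.h>

/* A bare-bones singly linked list of ints: every structural operation
   is manual pointer surgery, and every node must be freed exactly once. */
struct node {
    int value;
    struct node *next;
};

/* Push a value onto the front of the list. Returns the new head, or NULL
   if allocation fails (the caller must check before overwriting its head). */
struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->value = value;
    n->next = head;
    return n;
}

/* Free the whole list. Forgetting this is a leak;
   freeing a node twice is undefined behaviour. */
void free_list(struct node *head)
{
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}
```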
This is why I personally believe coding LED chasers on a microcontroller is such a dope way of introducing people to C. After the quick basics, hold a coding competition: who can design the best Christmas chaser? Give them a week or two for that, with the winner decided equally by a panel of judges and by audience vote.
Although, to be honest, if I were to hold a C beginners course today, I would introduce them to C + git + Test Driven Development, all in one fell swoop. And one of the last lectures from me would be “And if you thought C was dope, have a look at…” with Zig, Python, Rust and C++ being the top contenders…
I don’t think Python is the right tool to learn data structures, as you have absolutely no control over data layout or ownership. Also, way too much basic stuff is abstracted away. E.g., it’s not even guaranteed that a list will have O(1) random access; it’s just that all practical implementations of the Python language elected to implement it with a resizable array.
Even Java is better at it because it will tell you when a list is an array and when it’s a linked list, when a dictionary is a hashmap, and so on.
Python is actually so hilariously bad at performance that you can see 20x-type improvements just by switching from CPython to PyPy.
This argument is a bit unclear though - how does the 90/10 time split scale when you make that 10% of the code 10x faster? Isn’t it about 50/50 now?
Sure, you probably don’t have to optimize your CLI parser logic and Python is quite handy to write simple parsers, or even generate repetitive code (I sometimes use it as a kind of meta-language to generate lookup tables or manually unroll complex loops).
In C there is no real need to optimize command-line parsing anymore; it has not been a problem for the last 15 years.
See also cxxopts which is pretty much the same thing but for C++ and with more syntactic sugar.
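On the plain C side, POSIX getopt already covers the common cases with very little code and no optimisation worries (a minimal sketch with made-up options):

```c
#include <stdio.h>
#include <unistd.h>   /* POSIX getopt, optarg */

int main(int argc, char *argv[])
{
    int opt, verbose = 0;
    const char *outfile = "out.txt";

    /* "vo:" means: -v takes no argument, -o takes one. */
    while ((opt = getopt(argc, argv, "vo:")) != -1) {
        switch (opt) {
        case 'v': verbose = 1; break;
        case 'o': outfile = optarg; break;
        default:
            fprintf(stderr, "usage: %s [-v] [-o file]\n", argv[0]);
            return 1;
        }
    }
    printf("verbose=%d outfile=%s\n", verbose, outfile);
    return 0;
}
```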
The main problem with C is that it is still being taught like it is 1989. Why do professors still teach int and show the illusion that C has a concept of strings?
typedef char int8_t should give you a good hint why I believe strings do not exist in C.
On the contrary, the fact that int8_t is not defined this way should tell you that char is more special than you may think:
/usr/include/bits/types.h
typedef signed char __int8_t;
typedef unsigned char __uint8_t;
typedef signed short int __int16_t;
typedef unsigned short int __uint16_t;
typedef signed int __int32_t;
typedef unsigned int __uint32_t;
/usr/include/bits/stdint-intn.h
#include <bits/types.h>
typedef __int8_t int8_t;
typedef __int16_t int16_t;
typedef __int32_t int32_t;
typedef __int64_t int64_t;
https://en.cppreference.com/w/c/language/arithmetic_types.html#Character_types
char — type for character representation. Equivalent to either signed char or unsigned char (which one is implementation-defined and may be controlled by a compiler commandline switch), but char is a distinct type, different from both signed char and unsigned char.
Sure, write the 10% in C or other low level language. Even assembly.
Point being optimise the hot bit. Don’t bother with the 90 percent that is not performance sensitive. Writing 90% of your code in something that isn’t type safe (and maybe slower, because it doesn’t matter) is generally a mistake.
The fact that I can write char c = 0x31; completely shatters that illusion. char is an integer type, and strings are integer arrays where each character is one byte long.
Until we get to the lecture on Unicode, anyway.
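To make that concrete, a string literal really is just a char array of small integers with a trailing zero (the values below assume ASCII/UTF-8):

```c
#include <stdio.h>

int main(void)
{
    /* These two declarations describe the same bytes in memory:
       a string literal is just a char array with a trailing '\0'. */
    char a[] = "0AB";
    char b[] = { 0x30, 0x41, 0x42, 0x00 };   /* '0', 'A', 'B', NUL (ASCII) */

    printf("%s %s\n", a, b);   /* prints: 0AB 0AB */
    printf("%d\n", a[0]);      /* prints 48, the integer value of '0' */
    return 0;
}
```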
So WTF are chars if not integers? In any language a char is an integer, whether you are allowed to convert it implicitly or not.
My point is, once you optimize the 10% you no longer have a 90/10 split and your 90% starts becoming the bottleneck.
And Python is definitely not a type safe language.
You tell me, you are the one arguing strings in C are something more than a sequence of 8 bit integers terminated by zero…
In C++ std::string is so much more than that. But in C, you might as well type int8_t and cast the array to char; this is equally valid.
Arguably, “strings” in C++ (the language) and in C are the same thing. std::string is just another vector-like class designed to hold character strings, provided by the STL or, more properly nowadays, the C++ Standard Library. If you want to reason about standard libraries, you can’t do this without also including the C standard library.
On that note, C++ std::string is not even a “basic” type; it’s a specialization of the std::basic_string<CharT, Traits, Allocator> template with several common typedefs (std::string being one of them), but nothing is stopping you from using a std::basic_string<int, ...> provided you also supply a reasonable std::char_traits<int> specialization.
No… you’re not going to get the 10% of the code (by size) optimised down from 90% of the runtime to anywhere near 4-5x the runtime of the other 90%; compilers aren’t that terrible.
Not without an algorithm improvement, and guess what: algorithm improvements and changes are easier to do in higher-level languages.
It is all bits; machines then have collections of bits that make a byte, word, dword, etc. You use the level of abstraction that is most appropriate to the problem you are trying to solve.
C/C++ compilers and the standard libraries are tuned to specific platforms. Usually they do a very good job and it does not follow that one will be able to get better results by hand coding things that libraries do. Though it is possible depending on the use case. If you want, just write it two ways and measure it.
If you have an idea, write it and measure it. The results may vary from architecture to architecture. For example I have written critical logic with branches and alternate implementations with bitwise operations. For CPUs and GPUs. In the old days you would expect bitwise to win. But that is not always the case with modern processors. If you are really into it you can look at the assembler that is generated to understand it better.
As said earlier, the biggest gains in optimization usually come from algorithm changes, and also from not duplicating objects that do not need to be duplicated.
Measure, measure, measure. Don’t assume something that looks faster is faster.
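As a concrete example of writing it two ways and measuring, here are a branching and a bit-twiddling version of absolute value (a minimal sketch; it assumes two’s complement, an arithmetic right shift of negative values, and x != INT_MIN, just like abs() itself):

```c
#include <stdio.h>
#include <limits.h>

/* Branching version: the "obvious" implementation. */
static int abs_branch(int x)
{
    return (x < 0) ? -x : x;
}

/* Branchless version: build a mask from the sign bit.
   Classic bit-twiddling; not necessarily faster on modern CPUs. */
static int abs_bitwise(int x)
{
    int mask = x >> (int)(sizeof(int) * CHAR_BIT - 1);   /* 0 if x >= 0, -1 if x < 0 */
    return (x + mask) ^ mask;
}

int main(void)
{
    for (int x = -3; x <= 3; x++)
        printf("%d -> %d %d\n", x, abs_branch(x), abs_bitwise(x));
    return 0;
}
```

Compilers will often turn the branching version into a conditional move anyway, which is exactly why looking at the generated assembler and timing both is worth more than guessing.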