Using smart pointers doesn’t eliminate the memory safety issue; it merely addresses one aspect of it. Even with smart pointers, nothing prevents you from passing references around and using them after they’re freed.
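For contrast, here’s a minimal Rust sketch (my own toy example, not from the thread) of how that gap gets closed when the compiler tracks the smart pointer and the references into it together:

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(String::from("shared"));

    // Cloning the handle, not the data: both handles keep the value alive.
    let alias = Rc::clone(&data);
    drop(data); // the String is not freed here; `alias` still owns it

    println!("{alias}"); // fine: ownership is tracked by the type system

    // Unlike in C++, you cannot smuggle a plain reference past the last
    // owner. Uncommenting these lines fails to compile with error E0505
    // ("cannot move out of `alias` because it is borrowed"):
    //
    // let r = &*alias;
    // drop(alias);
    // println!("{r}");
}
```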
I think that’s very team/project dependent. I’ve seen it done before indeed, but I’ve never been on a team where it was considered idiomatic.
I don’t know about your workplace, but if at all possible I would try to find time between tasks to spend on learning. If your company doesn’t have a clear policy giving employees the freedom to learn during company time, try estimating your own velocity even more conservatively and use the time that frees up for learning.
About 10 years ago I worked for a company where I was performing quite well. Since that meant I finished my tasks early, I could have taken on even more tasks. But I didn’t really tell our scrum master when I finished early. Instead I spent the time learning, and also refactoring code to help me become more productive. This added up, and my efficiency kept increasing, until at some point I only needed one or two days to complete a week’s sprint. I didn’t waste my time; I used it to pick up more architectural work on the side, while always learning on the job.
I’ll admit that when I started down this route, I already had a bunch of experience under my belt, and this may not be feasible if you have managers breathing down your neck all the time. But the point is: if you play it smart, you can use company time to improve yourself, and they may even appreciate you for it.
If we’re looking at it from a Rust angle anyway, I think there’s a second reason that OOP often becomes messy, but less so in Rust: Unlimited interior mutability. Rust’s borrow checker may be annoying at times, but it forces you to think about ownership and prevents you from stuffing statefulness where it shouldn’t be.
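A minimal sketch of what I mean (the Logger type is a made-up example): in Rust, mutation behind a shared reference has to be opted into explicitly, e.g. via RefCell, while plain aliased mutation is rejected at compile time.

```rust
use std::cell::RefCell;

struct Logger {
    // Interior mutability is an explicit opt-in via RefCell; the borrow
    // rules are then enforced at runtime instead of compile time.
    entries: RefCell<Vec<String>>,
}

impl Logger {
    fn log(&self, msg: &str) {
        // Mutation through &self is only possible because of the RefCell.
        self.entries.borrow_mut().push(msg.to_owned());
    }
}

fn main() {
    let logger = Logger { entries: RefCell::new(Vec::new()) };
    logger.log("hello");
    println!("{} entries", logger.entries.borrow().len());

    // Without RefCell, aliased mutation simply doesn’t compile:
    //
    // let mut v = vec![1, 2, 3];
    // let first = &v[0]; // shared borrow...
    // v.push(4);         // ...error: cannot borrow `v` as mutable
    // println!("{first}");
}
```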
You can use the regular data structures in Java and run into issues with concurrency, but you can also use unsafe in Rust, so it’s a bit of a moot point.
In Java it isn’t always clear when something crosses a thread boundary and when it doesn’t. In Rust, it is very explicit when you’re opting into using unsafe, so I think that’s a very clear distinction.
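To illustrate (the counter is my own toy example): crossing a thread boundary in safe Rust forces you to be explicit about synchronization, and opting out of that would require a visible unsafe block somewhere.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared mutable state must be explicitly wrapped for thread safety;
    // handing the threads an unsynchronized `&mut i32` would not compile.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Skipping the Mutex would require writing `unsafe` somewhere, which
    // makes the opt-out visible in code review.
    println!("count = {}", *counter.lock().unwrap());
}
```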
Java provides classes for thread safe programming, but the language isn’t thread safe. Just like C++ provides containers for improved memory safety, and yet the language isn’t memory safe.
The distinction lies between what’s available in the standard library, and what the language enforces.
Modern C++ does use references, which can also reference memory that is no longer available. Avoiding raw pointers isn’t enough to be memory safe.
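To make that concrete from the Rust side (again a toy example of mine): a reference that would outlive its owner is a compile error, not a latent bug.

```rust
fn main() {
    // The commented-out version fails to compile with error E0597
    // ("`owner` does not live long enough"):
    //
    // let dangling;
    // {
    //     let owner = String::from("freed too soon");
    //     dangling = &owner;
    // }
    // println!("{dangling}");

    // The compiling version keeps the owner alive as long as the reference:
    let owner = String::from("lives long enough");
    let r = &owner;
    println!("{r}");
}
```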
Try browsing the list of somewhat recent CVEs rated critical, as I just did to verify. A majority of them are not related to any memory errors. Will you tell all of them to “just use a different programming language”?
I’m sorry, but this has been repeatedly refuted:
And yes, they are telling their engineers to use a different programming language. In fact, even the NSA is saying exactly that: https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/
It didn’t come out today; it’s been around for a long time, and it’s standardized, proven and stable.
This seems like an extremely short-sighted red herring. C has so many gaps in its specification, because it has no problem defining things as “undefined behavior” or “implementation defined”, that the standard is essentially useless for kernel-level programming. The Linux kernel is written in C and used to only build with GCC. Now it builds with GCC and LLVM, and it relies on many non-standard compiler extensions for each. The effort to add support for LLVM took them 10 years. That’s 10 years for a migration from C to C. Ask yourself: how is that possible if the language is so well standardized?
Great suggestions! One nitpick:
But in principle I find this quite workable, as you get to write your CI code in Rust.
Having used xtask in the past, I’d say this is a downside. CI code is tedious enough to debug as it is, and slowing down the cycle by adding Rust compilation into the mix was a horrible experience. On top of that, CI is one place where Rust’s focus on correctness isn’t very valuable, since it’s an isolated, project-specific environment anyway.
I’d rather use Deno or indeed just … for that.
No, OP asked for a black and white winner. I was elaborating because I don’t think it’s that black and white, but if you want a singular answer I think it should be clear: Rust.
I would say at this point in time it’s clearly decided that Rust will be part of the future. Maybe there’s a meaningful place for Zig too, but that’s the only part that’s too early to tell.
If you think Zig still has a chance at overtaking Rust though, that’s very much wishful thinking. Zig isn’t memory safe, so any areas where security is paramount are out of reach for it. The industry isn’t going back in that direction.
I actually think Zig might still have a chance in game development, and maybe in specialized areas where Rust’s borrow checker cannot really help anyway, such as JIT compilers.
While I can get behind most of the advice here, I don’t actually like the conditions array, because each condition function now needs extra clauses to make sure it doesn’t overlap with the other condition functions. This was handled much more elegantly by the else clauses; with the array, adding another condition has become a puzzle of verifying that the conditions remain non-overlapping.
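To show what I mean with a hypothetical example (the discount rules are mine, not from the article): in the table-driven version every predicate needs an extra clause purely to avoid overlapping with its neighbors, while the chained version gets non-overlap for free.

```rust
// Hypothetical discount rules, purely to illustrate the overlap problem.
struct Order {
    total: u32,
    is_member: bool,
}

// Table-driven version: every predicate must explicitly exclude the other
// entries, or two rules can match the same order.
fn discount_from_table(order: &Order) -> u32 {
    let rules: [(fn(&Order) -> bool, u32); 2] = [
        (|o| o.total >= 100, 10),
        // The `o.total < 100` clause exists only to avoid overlapping with
        // the rule above; drop it and both rules match members with large
        // orders, making the array order silently load-bearing.
        (|o| o.is_member && o.total < 100, 5),
    ];
    rules
        .iter()
        .find(|(applies, _)| applies(order))
        .map(|&(_, discount)| discount)
        .unwrap_or(0)
}

// Chained version: non-overlap falls out of the `else` structure for free.
fn discount_from_chain(order: &Order) -> u32 {
    if order.total >= 100 {
        10
    } else if order.is_member {
        5
    } else {
        0
    }
}

fn main() {
    let order = Order { total: 120, is_member: true };
    assert_eq!(discount_from_table(&order), discount_from_chain(&order));
    println!("discount: {}%", discount_from_chain(&order));
}
```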
I find Linear to be reasonably pleasant.
Issue resolved
Yeah, I mix them too, although I apply quite a few functional techniques, especially at the architectural level. OO I use mostly for dealing with I/O and other areas where statefulness cannot be avoided, along the lines of the sketch below.
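As a rough sketch of that split (the Repl type is a made-up example): the logic lives in pure functions, while a small stateful object owns the I/O around it.

```rust
use std::io::{self, BufRead, Write};

// Functional core: a pure function, trivial to test in isolation.
fn shout(line: &str) -> String {
    line.to_uppercase()
}

// OO-style shell: a small stateful object that owns the I/O handles.
struct Repl<R, W> {
    input: R,
    output: W,
}

impl<R: BufRead, W: Write> Repl<R, W> {
    // Reads lines until EOF, pushing each one through the pure core.
    fn run(&mut self) -> io::Result<()> {
        let mut line = String::new();
        while self.input.read_line(&mut line)? > 0 {
            writeln!(self.output, "{}", shout(line.trim_end()))?;
            line.clear();
        }
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut repl = Repl {
        input: io::stdin().lock(),
        output: io::stdout(),
    };
    repl.run()
}
```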
If you’re interested, I also wrote an in-depth blog where I touch on these topics: https://arendjr.nl/blog/2024/07/post-architecture-premature-abstraction-is-the-root-of-all-evil/
Just keep in mind that inheritance is a very contested feature nowadays. Even most people still invested in object-oriented programming recognise in hindsight that inheritance was mostly a mistake. The industry as a whole is also shifting towards functional programming, in which object orientation takes more of a backseat and inheritance specifically often isn’t supported at all. So yeah, take the chance to learn, but be cautious before going too deep in any one direction.
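For what it’s worth, a tiny sketch (my own example) of the direction newer languages take: Rust has no inheritance at all, and behavior is built out of traits and plain composition instead.

```rust
// Rust has no class inheritance; shared behavior is expressed as traits
// and reuse happens through composition.
trait Describe {
    fn describe(&self) -> String;
}

struct Engine {
    horsepower: u32,
}

// Composition: a Car has an Engine rather than being a subclass of one.
struct Car {
    engine: Engine,
    brand: String,
}

impl Describe for Car {
    fn describe(&self) -> String {
        format!("{} with {} hp", self.brand, self.engine.horsepower)
    }
}

fn main() {
    let car = Car {
        engine: Engine { horsepower: 90 },
        brand: String::from("Example"),
    };
    println!("{}", car.describe());
}
```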
and that burden is, as far as I’ve seen, being forced on those long-term contributors.
This is not what is happening. The current long-term contributors were asked to clarify semantics about C APIs, so the Rust maintainers could take it from there. At no point were the C maintainers asked to help maintain the Rust bindings.
Of course, I’m a user too, but I don’t think Linux’s UX is that bad. It may be bad in some areas, but it’s not bad across the board.
I also don’t think Linux is only for developers. It’s great for developers, but it’s also great for people with only basic computing needs, those who don’t need much more than a browser, an email client and maybe an office suite. The UX is totally adequate for them, as evidenced by ChromeOS.
I think where Linux mainly falls short is for the users in between: those who are not full developers or tinkerers, but do want to mess around, and do so with expectations shaped by how things worked in the Windows world. And I won’t deny there’s a plethora of legitimate enterprise use cases for which there is no equivalent in Linux today. But those are not UX issues; those are mainly matters of market support. Linux is not great there, and maybe it never will be. Or if it ever is, it’ll take a long time.
The first example that came to mind was actually Mac users who struggle with external monitors/projectors and things like screen sharing too. I agree these are things so basic they should just work. Reality is often different, even on other OSes.
Of course, if you have a Windows home and everything works, and then you try Linux and it struggles with a piece of equipment, it’s easy to blame Linux. You wouldn’t even be wrong. But you’d be oblivious to the experience of someone who uses Linux exclusively, for whom everything works, and who has no idea how many of those things wouldn’t work, or wouldn’t work well, on Windows.
Personally I’m a developer, so I care a lot about integrating parts of my development stack. A lot of those things don’t “just work” on Windows, or even Mac, so I’m happy to stick with Linux instead.
I agree with your examples, and it’s certainly true there are plenty of rough edges on Linux. Then again, how many examples are there of things that should “just work” and do on Linux, but don’t on Windows? There are enough of them that I don’t use Windows at all, because to me it has a subpar user experience. I even used a MacBook for a few years, mainly for work, and there were too many small things that annoyed me about it, so it too had a subpar user experience.
It seems mostly a matter of perspective which issues are more important to you.
I use EndeavourOS and really enjoy it. It’s effectively Arch but without the fuss: you get a GUI installer, it takes just a few steps to set things up, and you’re good to go. I tend to upgrade once a week, while checking the forums to make sure nothing too bad broke. That’s basically all the maintenance there is.
When I do a new install on a new device, I just clone a repo I keep with the most important config files. Then I copy them to where they belong. There’s really not much more to it.