Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”
In small print it says the data is collected over 2022 and 2023.
Productivity is so vague though, I'd be interested to see what exactly they measured.
It's Google, so probably the number of projects launched, never advertised, then abandoned.
If that's the measure then I'm more productive than all of Google combined. Nowhere in the definition does it say the project has to work as intended or even compile.
I know you are joking but needing to compile is probably one of the reasons “teams” are more productive in Rust.
You cannot check something into the build system unless you can build. Once Rust is compiling, you have eliminated scores of problems that may still be in equivalent C++ code.
Rust works to limit the damage one dev can do to the codebase.
I take that as a challenge. :)
But yes, those compiler checks and that awesome linter are one of the main reasons I use Rust. I like working with concurrent and parallel code, and Rust makes that really safe.
My Python doesn't need to parse to pass CI, at least so long as I don't write tests that run that code section. Checkmate, all languages that have to compile. /s
Maybe that counts technically, but it’s just not the same if the project doesn’t have a solid user base when it gets killed.
I am the user base and, despite my best effort, have not yet turned into a liquid. If I kill my project, does it count? Can I be Alphabet now?
“We’re abandoning projects at an unprecedented rate, proving our commitment to the bottom line.”
It seems likely biased as well, unfortunately, if they let teams decide on their own what to use. I would wager that teams who switched to Rust on their own are probably teams that were already productive.
If you ask my last manager it’s “comments on issues”
I was a lot more productive in C++ 15 years ago when the current project was 100% greenfield. Now that the code is 15 years old I'm much less productive, because over the years we have discovered mistakes we made. I suspect I'm still more productive than the average C++ programmer because 15 years ago modern C++ was known (C++11 was still a couple years away though), and so we didn't do a lot of the mess that people hate on C++ for.
Which is to say, I want to know how productive those programmers will be in 15 years, when the shine of Rust has worn off and they are looking at years of what seemed like a good design but current requirements just don't fit.
I suspect a large part of that will depend on how well Rust keeps the feature creep in check. C++ was a bit of a language design magpie. Pretty much any language design idea anyone ever had got pulled into the language, and it turned into a real mess. Many of those features are incompatible with each other as well, so once you use one feature, you're no longer able to use one of the competing ones, which has led to partial fragmentation of the ecosystem (interestingly enough, D, which set out to be a "better" C++, also ran into a similar but far worse situation). Many of those features have also been found to be problematic in various ways and have fallen out of favor recently, and so are viewed as warts on the language, or failed experiments.
Rust is still young, so there aren't very many competing features, and none that I'm aware of that are considered things to avoid. If it can manage to keep its feature set under control by actively deprecating and removing features that are problematic, and by being more judicious than C++ was in pulling in new ones, it should be able to avoid the same fate as C++. Time will tell I suppose.
Early in the development of D they had two competing standard libraries that each provided nearly identical functionality but were incompatible with each other. Neither one was obviously the correct choice, and so their library ecosystem split in two, with some projects choosing to use one, while others picked the other one. Of course once a library decided to use one standard they were then locked into it and could only use the other libraries that had made the same choice.
I believe they eventually came to a solution where they merged the two libraries into a new one and deprecated the old ones, but for a while there it was an absolute mess in their ecosystem.
Can confirm, I was super excited about D about 10-15 years ago when all of that had recently been resolved. It’s a really cool language, but it didn’t really get much traction and Rust solves a lot of the problems I have with it, so I use that now.
That said, here are some features I really miss from D:
- compile-time function execution - basically write macros in D; I saw some madlads writing a complete shader render loop at compile-time
- opt-out garbage collection - you get GC by default, but it’s pretty easy to make portions or all of your code safe w/o it
- explicit scopes for finalizers - destructors can be run deterministically instead of “eventually” like in many GC languages
- safeD - things like tagged pure and safe functions; basically, you can write in a checked subset, but it’s opt-in, unlike Rust’s opt-out
- nice functional syntax
- reentrant coroutines
- really fast compiler
But at the end of the day, Rust provides more guarantees, enough features, and a fantastic ecosystem. If both had the same ecosystem today, though, I would give D serious consideration.
compile-time function execution - basically write macros in D; I saw some madlads writing a complete shader render loop at compile-time
There are of course macros, but they're kind of a pain to use. Zig's comptime fns are really nice and a similar concept. Rust does have const fn, but of course those come with limits on them.
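To give a rough idea (a minimal sketch of my own, not from the thread): a const fn can be evaluated entirely at compile time when used in a const context, though with restrictions such as no heap allocation on stable:

// Sketch: a const fn evaluated at compile time. Limits apply (e.g. no heap
// allocation in const contexts on stable Rust).
const fn fib(n: u64) -> u64 {
    let mut a: u64 = 0;
    let mut b: u64 = 1;
    let mut i = 0;
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

// Computed by the compiler; the binary just contains the constant.
const FIB_10: u64 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
    println!("fib(10) = {FIB_10}");
}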
explicit scopes for finalizers - destructors can be run deterministically instead of "eventually" like in many GC languages
You kind of get that with Rust for free. You get implicit GC for anything stack allocated, and technically heap allocated values are deterministically freed, which you can work out by tracking their ownership. As soon as the owning scope exits it will be freed. If you want more explicit control you can always invoke std::mem::drop to force it to be freed immediately, but generally you don't gain much by doing so.
really fast compiler
Some really great work is being done on that pretty much all the time but… yeah, I can’t reasonably argue that the Rust compiler is fast. Taking full advantage of incremental compilation helps a lot, but if you’re doing a clean build, better grab a coffee.
What would be nice is if cargo explored a similar solution to what Arch Linux used, where there’s a repository of pre-compiled libraries for various platforms and configurations that can be used to speed up build times. That of course does come with a whole heap of problems though, probably the biggest of which is that it’s a HUGE security nightmare. Of lesser concern is the fact that they could not realistically do so for every possible combination of features or platforms, so it would likely only apply to crates built with the default features for a small subset of the most popular platforms. I’m also not sure what the tree shaking would end up looking like in a situation like that.
There are of course macros
Yup, and Rust’s macros are pretty cool, but in D you can just do:
static if (condition) { ... }
There’s a whole compile-time reflection library as well, so you can take a class and make a super-optimized serialization/deserialization library if you want. It’s super cool, and I built a compile-time JSON library just because I could…
You kind of get that with Rust for free
Yup, Rust is awesome.
But in D you can do explicit scope guards:
- scope(exit) - basically Go's defer()
- scope(success) - only runs when no exceptions are thrown
- scope(failure) - only runs when there's an exception
I didn’t use them much, but they are really cool, so you can do explicit cleanup as you go through the logic flow, but defer them until they’re needed.
It’s a neat alternative to RAII, which D also supports.
Some really great work is being done on that pretty much all the time
I still need to try out Cranelift, which was posted here recently. Cranelift release mode could mostly solve this for me.
That said, I haven’t touched D in years since moving to Rust, so I obviously find more value in it. But I do miss some of the candy.
But in D you can do explicit scope guards
Hmm… that is interesting.
scope(exit) is basically just an inline std::ops::Drop trait. I actually think it's a bad thing that you can mix that randomly into your code as you go instead of collecting all of the cleanup actions into a single function. Reasoning about what happens when something gets dropped seems much more straightforward in the Rust case. For instance, it wasn't immediately clear that those statements get evaluated in reverse order from how they're encountered, which is something I assumed but had to check the documentation to verify.
scope(success) and scope(failure) are far more interesting, as I'm not aware of a direct equivalent in Rust. There's the nightly-only feature of std::ops::Try that's somewhat close to that, but not exactly the same. Once again though, I'm not convinced letting you sprinkle these statements throughout the code is actually a good idea.
Ultimately, while it is interesting, I'm actually happy Rust doesn't have that feature in it. It seems like somewhat of a nightmare to debug and something ripe to end up as a footgun.
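For what it's worth, here's a rough sketch of how you could approximate scope(exit) with an ad-hoc Drop guard (a hypothetical helper of my own, not anything from std):

// Hypothetical helper, not a std API: run a closure when the guard is dropped,
// which is roughly what D's scope(exit) gives you.
struct ScopeExit<F: FnMut()>(F);

impl<F: FnMut()> Drop for ScopeExit<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let _guard = ScopeExit(|| println!("cleanup runs when the scope ends"));
    println!("doing work");
    // _guard is dropped at the end of main, so the cleanup closure runs last.
}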
I’m still a big fan of D for personal projects, but I fear the widespread adoption ship has sailed at this point, and we won’t see the language grow anymore. It’s truly a beautiful, well-rounded language.
Also just recently a rather prominent contributor forked the entire compiler/language so we’re seeing more fragmentation :/
Rust had the same issue with tokio vs. async-std. I don’t think this was ever resolved explicitly, async-std just silently died over time.
Hmm, sort of, although that situation is a little different and nowhere near as bad. Rust's type system and feature flags mean that most libraries actually supported both tokio and async-std, you just needed to compile them with the appropriate feature flag. Even more worked with both libraries out of the box because they only needed the minimal functionality that Future provided. The only reason that it was even an issue is that Future didn't provide a few mechanisms that might be necessary depending on what you're doing, e.g. there's no mechanism to fork/join in Future; that has to be provided by the implementation.
async-std still technically exists, it's just that most of the most popular libraries and frameworks happened to have picked tokio as their default (or only) async implementation, so if you're just going by the most downloaded async libraries, tokio ends up over-represented there. Longer term I expect that chunks of tokio will get pulled in and made part of the std library like Future is, to the point where you'll be able to swap tokio for async-std without needing a feature flag, but that's likely going to need some more design work to do cleanly.
In the case of D, it was literally the case that if you used one of the standard libraries, you couldn't import the other one or your build would fail, and it didn't have the feature flag capabilities like Rust has to let authors paper over that difference. It really did cause a hard split in D's library ecosystem, and the only fix was getting the two teams responsible for the standard libraries to sit down and agree to merge their libraries.
This happened to Scala with cats vs zio. I’m sad it wasn’t more successful, it’s a really, really good language
I feel like I work well even without the new C++ features/smart pointer stuff, simply because:
- Most of my projects are solo and I keep all flows in my mind
- I started programming with C, then understood memory on systems as well as I could and then came to C++
I’d love to know how they measured this, because if they just took two Jira boards for two projects, compared the ticket times, and said “yep, Rust is good” that’s both insane and completely expected by some big tech managers.
I don’t deny it, it’s just nice to see reasoning with a headline, so that I could take it to another team and say “let’s try Rust because…”
Eww… you’re probably right. TIHI.
On a related note, I've always preferred t-shirt sizing over story points. You can still screw that up by creating a conversion chart to translate t-shirt sizes into hours (or worse, man-hours) or story points, but at least it's slightly more effort to get wrong than the tantalizingly linear, numeric-looking story points.
If I was truly evil I’d come up with a productivity unit that used nothing but irrational constants.
“Hey Bob, how much work do you think that feature is?”
“Don’t know man, I think maybe e, but there’s a lot there so it might end up being π.”
At the end of the day, the first thing managers do is convert story points / t-shirt sizes / whatever other metaphor back into time estimates. So why bother with the layer of indirection?
I’ll die on the hill that most teams do not need scrum / agile and all the ceremony that always goes with it.
A kanban board with a groomed Todo column is all you need. Simple and effective and can easily adapt to unexpected scope changes a.k.a production incidents.
*yes I’m aware that if you’re getting bogged down in ceremony you’re doing Agile wrong. I’ve never seen or worked in a place where I’ve felt it’s been done right
My company is just doing a kanban board with weekly meetings to discuss the progress and what tickets will be worked on next. The major problem we ran into was when management asked “So, when is the release going to be? When are you done with that project?” about one month before we actually released. I simply had no answer at that point, because that’s not something these tickets with no estimates and no velocity tracking can provide.
IMO if it is so hard to do right that somehow no company can figure it out, then the whole system must be garbage. The best we can get to is the direct time estimates so that the “velocity” calculations we’re graded on make sense. Still going to be bogged down in ceremony no matter what we do tho.
Yeah, it’s different projects, most probably on different levels.
And considering recent layoffs, having different calibre of programmers on each.
Is it because c++ devs need half their day for recovering from the trauma of reading and writing c++? /s
Half the day coding, the other half the day bandaging their feet.
I don't know. After writing Rust for a while, and slowly putting programs together, I tried Go and I feel so relieved that I can just write what I want in 10 seconds instead of messing with lifetimes, the borrow checker, and other stuff I actually don't care about at all.
A more experienced colleague said that yes that is true, but Go can’t guarantee your code is correct, so you will spend time fixing your code also in Go. Probably true.
Right, it’s essentially the same argument as strong vs. weak typing. The weak typing proponents say JavaScript is best, because you can just write anything and you don’t need to worry about all those pesky types getting in your way. The strong typing proponents (which if it’s not obvious I am one of) point out that you can write incorrect code quickly in just about any language, but writing correct code is much harder, and the cost of correcting code increases the later the mistake is found. Errors that can’t even be written are better than errors that are found at compile time which are better than errors that are reliably caught at runtime, which are all infinitely better than errors that only randomly appear under very specific circumstances.
That is why many people switched to using TypeScript for their websites instead of JavaScript, because even though you have to spend more time putting type annotations on everything, and at the end of the day at runtime TypeScript is literally just JavaScript, the errors it lets you find at compile time instead of runtime make the effort necessary to include those types worth it. Same thing applies with Rust vs. Go. Yes it requires more thinking up front when you’re writing Rust code, and yes it might take you longer to write that code, but it’s also going to be correct code you can be confident in and not have a bunch of ticking timebombs waiting in it that you don’t even know about.
An extra 30 minutes spent having to think about a dozen lines of code is infinitely preferable to spending 3 hours poring over stack traces and single-stepping debuggers to find that one subtle mistake you made.
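To make the "errors that can't even be written" idea concrete, here's a minimal Rust sketch of my own: a forgotten null check simply isn't expressible, because absence has to go through Option:

// My own example: absence is a value (Option), not a null pointer, so the
// "forgot the null check" class of error can't even be written.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    let input = "";
    match first_word(input) {
        Some(word) => println!("first word: {word}"),
        None => println!("no words in input"),
    }
}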
I totally agree, though I think it’s worth adding:
- The advantage of static types is not just finding bugs (though they do that quite well). It also massively helps with productivity because a) types are now documented, b) you can use code intelligence tools like renaming variables, go-to-definition, find-references, etc. (assuming you use a good editor/IDE).
- In general stronger types are better, but I do think there is a point at which the effort of getting the types right is too high to be worth the benefit. I would say Rust hasn't reached that point, but if you look at formal verification languages like Dafny, it's pretty clear that you wouldn't want to use that except in extreme circumstances. Similarly, I think the ability to use an any or dynamic escape hatch is quite useful, even if it should be used very sparingly.
You are right. But I think similar secondary benefits also come from using the borrow checker. Rust developers, by necessity, try to avoid using circular references and prefer immutability where they can. Both of these are advantages because they tend to make for systems that are easier to understand and are easier to maintain.
Yeah I agree. The borrow checker definitely pushes you to write less buggy code.
It also massively helps with productivity
Absolutely! Types are as much about providing the programmer with information as they are the compiler. A well typed and designed API conveys so much useful information. It’s why it’s mildly infuriating when I see functions that look like something from C where you’ll see like:
pub fn draw_circle(x: i8, y: i8, red: u8, green: u8, blue: u8, r: u8) -> bool {
rather than a better strongly typed version like:
type Point = Vec2<i8>;
type Color = Vec3<u8>;
type Radius = NonZero<u8>;

pub fn draw_circle(point: Point, color: Color, r: Radius) -> Result<()> {
Similarly I think the ability to use an any or dynamic escape hatch is quite useful, even if it should be used very sparingly.
I disagree with this, I don't think those are ever necessary assuming a powerful enough type system. Function arguments should always have a defined type, even if it's using dynamic dispatch. If you just want to not have to specify the type on a local, let bindings where you don't explicitly define the type are fine, but even in that case it still has a type, you're just letting the compiler derive it for you (and if it can't, it will error).
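A minimal sketch of my own showing what that inference looks like in practice:

fn main() {
    // The compiler derives the types here; the code is still fully statically typed.
    let xs = vec![1u8, 2, 3];
    let doubled: Vec<u8> = xs.iter().map(|x| x * 2).collect();
    println!("{doubled:?}");

    // With no constraint at all the compiler refuses to guess:
    // let empty = Vec::new(); // error[E0282]: type annotations needed
    let empty: Vec<i32> = Vec::new();
    println!("{}", empty.len());
}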
You can go to definition / find references / rename for dynamically typed languages too.
Without static type annotations you can only make best effort guesses that are sometimes right. Better than nothing but not remotely the same as actual static types. The LSP you linked works best when you use static type annotations.
Also I would really recommend Pylance over that if you can - it’s much better but is also closed source unfortunately.
Why would it just be best effort? To find references for a specific thing, it still would parse an AST, find the current scope, see it’s imported from some module, find other imports of the module, etc.
if random() > 0.5:
    x = 2
else:
    x = "hello"
Where is the definition of x? What is the type of x? If you can’t identify it, neither can the LSP.
This kind of thing actually happens when implementing interfaces, inheritance, etc. Thus, LSPs in dynamic languages are best effort both theoretically and in practice.
Tbf this example can be deduced as string | int just fine.
- Look at entire file instead of snippet.
- If there is anything that could create a variable x before this area, then that’s where x originates. If not, and if it’s a language where you can create x without using a keyword like let or var, then x is created in the scope in your snippet.
Types are not necessary at all.
def get_price(x):
    return x.prize
OK, imagine you are an LSP. What type is x? Is prize a typo? What auto-complete options would you return for x.?
I didn't say types. I said find references / go to definition / rename.
It breaks down when you do runtime reflection, like getattr(obj, "x").
Preach 🙏
instead of messing with lifetimes, borrow checker and other stuff I actually don’t care about at all
There's nothing wrong with putting Rc<_> or Rc<RefCell<_>> around data if you don't want to fight the borrow checker or think about lifetimes, even if you know it could be written without them.
Or even just clone. Depending on use case the performance cost would be negligible.
There’s nothing wrong with putting Rc<> or Rc<RefCell<>> around data
It's mainly the visual pollution that bothers me. Wrapping everything in reference-counting smart pointers just because you can't be bothered dealing with the borrow checker seems like an antipattern.
I don’t know why so many recommend Rc or Arc as a catchall. 90% of the time if you want to avoid the borrow checker then a clone or copy is good enough.
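For illustration, here's a small sketch of my own showing both routes side by side:

use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, mutable-through-a-handle data: the Rc<RefCell<_>> route.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let handle = Rc::clone(&shared);
    handle.borrow_mut().push(4);
    println!("shared = {:?}", shared.borrow());

    // Often a plain clone is simpler and cheap enough: each side owns its own copy.
    let original = vec![1, 2, 3];
    let copy = original.clone();
    println!("original = {:?}, copy = {:?}", original, copy);
}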
Really? I might have agreed for some other languages, but Go is so bare bones it feels like it takes way longer to write simple stuff than with Rust - you have to tediously write out loops all the time for example.
Tbf I haven’t used it since it got generics. Maybe it is better now.
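For instance (a trivial sketch of my own), what would typically be an explicit loop in Go is a single iterator chain in Rust:

fn main() {
    let nums = vec![1, 2, 3, 4, 5, 6];

    // One iterator chain: keep the even numbers, square them, sum them.
    // In Go this would typically be a hand-written loop with an accumulator.
    let sum: i32 = nums.iter().copied().filter(|n| n % 2 == 0).map(|n| n * n).sum();

    println!("sum of squared evens = {sum}");
}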
#Rust is not high level at all, change my mind.
Rust gives you high level abstractions but also allows lower level control over the hardware. These are not mutually exclusive.
You can easily argue almost any language is high level though, it is such a nebulous term that it is almost meaningless.
But Go has garbage collection so that code is as correct as that of Rust. Go is just a little less performant
Rust's ownership model is not just an alternative to garbage collection, it provides much more than that. It's as much about preventing race conditions as it is about making sure that memory (and other resources) get freed up in a timely fashion. Just because Go has GC doesn't mean it provides the same safety guarantees as Rust does. Go's type system is also weaker than Rust's, even setting aside the matter of memory management.
*data races. Rust still can have race conditions.
True, but ownership does eliminate a lot of the possible sources of them.
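As a small illustration (my own sketch, not from the thread) of how ownership plus Send/Sync push you toward data-race-free code, shared mutable state across threads has to go through something like Arc<Mutex<_>>:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Sharing a plain &mut i32 across threads won't compile; the type system
    // pushes you into a synchronized wrapper instead.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());
}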
Also Go is quite a lot slower than Rust. It seems fast compared to Python of course, but it's probably half the speed of Rust.
Commenter on Reddit (OP there) gives a talk link and a summary:
In the talk, Lars mentions that they often rely on self-reported anonymous data. But in this case, Google is large enough that teams have developed similar systems and/or literally re-written things, and so this claim comes from analyzing projects before and after these re-writes, so you’re comparing like teams and like projects. Timestamped: https://youtu.be/6mZRWFQRvmw?t=27012
Some additional context on these two specific claims:
Google found that porting Go to Rust “it takes about the same sized team about the same time to build it, so that’s no loss of productivity” and “we do see some benefits from it, we see reduced memory usage […] and we also see a decreased defect rate over time”
On re-writing C++ into Rust: “in every case, we’ve seen a decrease by more than 2x in the amount of effort required to both build the services written in Rust, as well as maintain and update those services. […] C++ is very expensive for us to maintain.”
They should compare defect rate with the Go teams. I’m curious if the advertised benefits of Rust’s type system give some practical advantage.
EDIT: Just watched the actual talk. Apparently they did this comparison, and found that Rust has fewer defects when compared to Go.
That’s pretty cool. I’m not advanced enough to really understand all the ways rust is better but I read nothing but good things about it. It seems pretty universally loved.
Basically modern language with modern tooling. It’s what C++ would look like if it had been designed today. The big thing though is the abstraction of ownership and lifetimes which took C++ ideas of scopes, smart pointers, and destructors and polished them into something much more powerful. Simply put it’s possible to design APIs in Rust that are literally impossible to express in any other language, and that’s a big deal.
Added on top of that is a modern dependency management system that is severely needed in languages like C and C++, and a very powerful metaprogramming system that enables compile-time code generation and feature selection that's much safer and more powerful than C and C++'s fairly primitive pre-processor (although the C++ STL does come close).
it’s possible to design APIs in Rust that are literally impossible to express in any other language
This sort of violates what I’ve always heard about computer science. I’ve always heard logic is logic.
Hmm, yes and no. You can express a program that does anything in any language, but API design is as much about what can't be expressed (with that API) as what can. A well designed API lets you do the things that are desirable while making it impossible to do things that aren't. You can of course bypass APIs to do anything the language allows; even in Rust, if you break out the unsafe blocks and functions, there's pretty much nothing you can't bypass with enough effort, but you very much have to go out of your way to avoid the API to do that.
I think your quantifier of “any other language” is the issue. There are certainly languages with far more powerful type systems than Rust, such as Coq or Lean.
Maybe, although I'm not aware of any other language that has the same abstraction around ownership and lifetimes. Most other languages I'm aware of that have more (or equivalently) powerful type systems are also GCed languages that don't let you directly control whether something gets stack or heap allocated. Nor do they allow you to guarantee that a variable is entirely consumed by some operation and no dangling references remain. While at a high level you can write something that accomplishes a similar result in other higher-level languages, you cannot express exactly the same thing due to not having direct access to the lower-level memory management details.
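A small sketch of the kind of thing meant here (hypothetical types, my own example): an API that consumes its input by value, so the type system guarantees the old handle can't be used afterwards:

// Hypothetical types, purely for illustration.
struct Connection {
    addr: String,
}

struct ClosedConnection {
    addr: String,
}

impl Connection {
    // Takes self by value: after close(), the original Connection is gone and
    // any later use of it is a compile error rather than a runtime bug.
    fn close(self) -> ClosedConnection {
        println!("closing {}", self.addr);
        ClosedConnection { addr: self.addr }
    }
}

fn main() {
    let conn = Connection { addr: String::from("10.0.0.1:5432") };
    let closed = conn.close();
    // conn.close(); // error[E0382]: use of moved value: `conn`
    println!("was connected to {}", closed.addr);
}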
See this example for Scala:
https://blog.tmorris.net/posts/scala-exercise-with-types-and-abstraction/index.html
Now go further and say you can’t compile a call that leaks memory, or things like that.
Disclaimer: I don’t know Rust so can’t verify the claim. All I can say is it sounds somewhat plausible.
You can leak memory in Rust if you want to (and it’s occasionally desirable). What the type system prevents is mainly accessing memory that has been deallocated. The prevention of memory leaks uses RAII just like C++. The main difference related to allocation is that there’s no “new” operator; you can pretty much only allocate memory through the initialization of a smart pointer type.
I’d argue it also prevents you from accidentally leaking memory. You have to be pretty explicit about what you’re doing. That’s true for pretty much anything in Rust, every bad thing from C/C++ is possible in Rust, you just have to tell the compiler “yes, I REALLY want to do this”. The fact that most of the really dangerous things are locked behind unsafe blocks and you can set the feature flag to disable unsafe from being used in your code goes a long way towards preventing more accidents though.
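For reference, the crate-level switch being referred to is, I believe, the unsafe_code lint, which looks roughly like this:

// With this crate-level attribute, any `unsafe` block anywhere in the crate is
// a hard compile error instead of something to catch in code review.
#![forbid(unsafe_code)]

fn main() {
    // Uncommenting this would fail to compile under forbid(unsafe_code):
    // unsafe { std::hint::unreachable_unchecked(); }
    println!("no unsafe allowed in this crate");
}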
I agree Rust makes it virtually impossible to leak memory by accident. The difference I wanted to point out is that leaking memory is explicitly not considered unsafe, and types like Box have a safe “leak” method. Most “naughty” things you can do in Rust require using the “unsafe” keyword, but leaking memory does not.
Cyclic reference-counted pointers are the most probable way to accidentally leak memory. But it’s a pretty well known anti-pattern that is not hard to avoid.
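Both of those points can be shown in a few lines of safe Rust (a minimal sketch of my own):

use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // Leaking on purpose is safe Rust: the allocation is simply never freed.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("lives forever")));
    println!("{leaked}");

    // A reference cycle also leaks with no `unsafe` anywhere: each node keeps
    // the other alive, so neither refcount ever reaches zero.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // a and b go out of scope here, but their heap allocations are never reclaimed.
}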
It’s what C++ would look like if it had been designed today.
So, it’s C#?
/s
So, it’s C#?
No, that’s what Java would look like today if designed by a giant evil megacorp… or was that J++. Eh, same difference. /s
This did make me laugh though. Anyone else remember that brief period in the mid-90s when MS released Visual J++ aka Alpha C#? Of course then Sun sued them into the ground and they ended up abandoning that for a little while until they were ready to release the rebranded version in 2000.
Added on top of that is a modern dependency management system that is severely needed in languages like C and C++
I won't disagree, but what Rust did is not the correct answer. Better than C++ perhaps, but not good enough. In the real world my code is more than Rust. I'm having trouble using Rust because all my existing code is C++ and the dependency management does not work well with my existing build system and dependency management. If you want a dependency manager, it needs to cover all languages and be easy to plug into whatever I'm doing currently. This is NOT an easy problem (it might not even be possible to solve!), but if you fail, you are useless for all the times where dependency management is hard.
I won’t disagree, but what Rust did is not the correct answer.
It's hard to say that what Rust did was not correct when it's better than what C++ has (nothing, or rather it punts to the OS and user to handle it). I agree it's far from perfect, but it's as good as pretty much any other language's dependency management system that I'm aware of. If you know a better one I'd love to hear about it, because yes, there are still gaps and pain points. I'd argue many of those gaps and pain points are a legacy of C and C++ though. The fact that C/C++ never had an actual dependency management system meant that the OS had to provide one, and every OS more or less went about it in an entirely different fashion (and even worse in the case of Linux, every distro went about it in a different fashion). This has massively complicated things, because there is a HUGE body of C/C++ libraries that are very desirable to use with absolutely no consistent way to do so. It's not as simple as just adding the ability to pull from the C/C++ repo for any of those dependencies, because there is no such thing.
If you know a better one I’d love to hear about it
OCaml’s OPAM. They actually took into account that it could be desirable to use software written in other languages in your OCaml project. It even has a bunch of stuff packaged that’s written in Rust. Imagine that the other way around. It only has stub packages for compilers like gcc but I assume that’s likely because they don’t want to have people spend hours building the whole thing themselves when there’s a perfectly good one on their system, rather than it not being possible to do.
I love Rust but I will die on this hill that combining package manager and build system like Cargo does and then only making it work for a single language is a lot worse than what C++ does, because if it doesn’t work for your project you’re screwed. Everything expects you to use Cargo, especially if you intend to publish a library, with C++ you can at least pretty much always get the build setup to do what you need, and you can import whatever as long as it comes with a pkg-config file.
Added on top of that is a modern dependency management system that is severely needed in languages like C and C++
You’re looking for Nix (unless you’re a Windows developer, work on getting that to work is ongoing). There’s very likely other good ones too, but this is the one I like and am familiar with. The difference is that it’s not a package manager for C++, but a package manager that also packages C++ packages. Which makes it so much more versatile than something like Cargo, because you can accurately represent dependency chains regardless of what language each package is written in. My Nix + CMake projects will build consistently on every Linux or Mac computer (you can’t say the same for Rust crates because they will look for stuff in system directories because Cargo can’t package anything that isn’t Rust), and you can depend on them similarly to how you would a Rust crate, with the difference that you can depend on them not only in another C++ project, but also in a Python package, a Go package, or whatever else that can be packaged with Nix. And if you can’t use Nix, then you can always build the CMake project directly, package it somewhere else maybe, because the two parts are not coupled together at all.
I’ll look into OPAM, it sounds interesting.
I disagree that combining build and package management is a mistake, although I also agree that it would be ideal for a build/package management system to be able to manage other dependencies.
A big chunk of the problem is how libraries are handled, particularly shared libraries. Nix sidesteps the problem by using a complex system of symlinks to avoid DLL hell, but I’m sure a big part of why the Windows work is still ongoing is because Windows doesn’t resemble a Linux/Unix system in the way that OS X and (obviously) Linux do. Its approach to library management is entirely different because once again there was no standard for how to handle that in C/C++ and so each OS came up with their own solution.
On Unix (and by extension Linux, and then later OS X), it was via special system include and lib folders in canonical locations. On Windows it was via dumping everything into C:\Windows (and what a lovely mess that has made [made somehow even worse by mingw/Cygwin then layering in Linux-style conventions that are only followed by mingw/Cygwin-built binaries]). Into this mix you have the various compilers and linkers that all either expect the given OS's conventions to be followed, or else define their own OS-independent conventions. The problem is of course that now we have a second layer of divergence, with languages that follow different conventions struggling to work together. This isn't even a purely Rust problem; other languages also struggle with this. Generally most languages that interop with C/C++ in any fashion do so by just expecting C/C++ libraries to be installed in the canonical locations for that OS, as that's the closest thing to an agreed-upon convention in the C/C++ world, and this is in fact what Rust does as well.
In an ideal world, there would be an actual agreed upon C/C++ repository that all the C/C++ devs used and uploaded their various libraries to, with an API that build tools could use to download those libraries like Rust does with crates.io. If that was the case it would be fairly trivial to add support to cargo or any other build tool to fetch C/C++ dependencies and link them into projects. Because that doesn’t exist, instead there are various ad-hoc repositories where mostly users and occasionally project members upload their libraries, but it’s a crap-shoot as to whether any given library will exist on any given repository. Even Nix only has a tiny subset of all the C/C++ libraries on it.
Dependency management has to deal with the real world where what we didn’t know in 1970 hurts us.
Dependency management has to deal with the real world where what we didn’t know in 1970 hurts us.
I’m having trouble understanding the point you’re trying to make here. You seem to be angry at the Rust dependency manager for not being perfect, but also admit that it’s better than C++. Is there some dependency manager you like more than what Rust provides? Do you have any suggestions for how Rust could improve its dependency management?
I said this is a hard problem and might not even be solvable.
Rust is not better than C++ if you are in any of those cases where Rust doesn't work. Not really worse, but not better either. If it works for you, great, but it is worse for me as Rust fights our homegrown system (which has a lot of warts).
So your point is that your custom homegrown workaround to a failure of C++ doesn't play well with Rust's official solution to the same problem? And therefore Rust's solution isn't better than C++'s lack of a solution?
While that is unfortunate for you and you certainly seem to have tech-debted yourself into a particularly nasty corner, I’m not sure that logic follows.
A lot of it is about moving problems from runtime to compile time. JS, for example, has most problems live in runtime.
Imagine you're hiring an event planner for your wedding. It's an important day, and you want it to go well and focus on the things that matter to you. Would you rather hire an event planner that barely interacts with you up until the wedding because they're so "easy to work with"? Or one that gets a ton of info and meets with you to make sure they can get everything they need as early as possible?
Rust is like the latter. JS is like an event planner who is just lazy and says "we'll cross that bridge when we come to it" all the time.
C++ is like a meth addict.
Compared to C++, Rust does a lot of things for you to prevent common mistakes. This reduces a lot of the mental overhead that comes with writing C++ programs. Go does this as well, but at the expense of slower programs. Rust programs are still as fast as C++ programs.
I think focusing on speed of programs is looking at the wrong thing. It’s true that at the moment Rust programs are usually faster than equivalent Go programs, but it’s already possible to very carefully tune a Go program to achieve similar levels of speed. The real difference is in productivity.
Go makes the tradeoff that it’s much more verbose, it takes many times the lines of code to achieve in Go what’s possible in either Rust or C++. This is because of their dogmatic obsession with keeping the language “simple”. Yes that makes it easy to learn, but it means if you have something complex to express you need to use more of those simple building blocks to do so rather than just a handful of the more complicated ones you’re provided in Rust or C++. More lines of code is more opportunity for mistakes to be made and more code to maintain. In a more powerful language you can offload much of the work to libraries that are maintained by other people and that expose powerful APIs that are safe to use. In Go because it’s “simple” it’s hard to write powerful APIs that aren’t filled with footguns and are more complicated to use.
The problem with C++ wasn’t that it was a complicated language, it’s that it was an inconsistent language. There were many competing ways of accomplishing things and many of them were mutually exclusive with each other and introduced footguns. It was far far too easy to write code in C++ that looked correct but was utterly broken in very subtle and hard to detect ways. The Go guys looked at that situation and decided the problem was that the language was too complex and had too many features (arguably true), but decided to make the exact opposite mistake by designing a language that was too simple and had too few features.
Rust programs are usually faster than equivalent Go programs, but it’s already possible to very carefully tune a Go program to achieve similar levels of speed
It is much, much more difficult to make Go run as fast as Rust than it is to just write the faster Rust program in the first place.
It’s a fair point that the speed of a language is not everything, but that’s not my point. My point is that with C++, the programmer must often play a puzzle of avoiding common pitfalls with e.g. memory allocation - on top of the actual problem the programmer is intending to solve. This hurts productivity, because there’s so much more to be mindful about.
Both Rust and Go are more free from this kind of extra mental overhead. The programmer can focus more attention on the actual problem. This is probably why Google has observed that both Rust and Go developers are twice as productive as C++ developers.
Go makes the tradeoff of using garbage collection, which is easier for programmers to work with but comes with an extra performance cost.
Having a simple and verbose language is not necessarily a downside. I’d rather take a simple language over all the syntactic sugar that comes with Perl.
My point is that with C++, the programmer must often play a puzzle of avoiding common pitfalls with e.g. memory allocation - on top of the actual problem the programmer is intending to solve.
Both Rust and Go are more free from this kind of extra mental overhead.
This isn't entirely correct. In Rust you do still need to worry about those same problems, it just gives you much better abstractions for modeling and thinking about them, and the tooling of the language "checks your homework" so to speak to make sure you didn't make any mistakes in how you were thinking about it. The upside is that you can be very specific about how you handle certain tasks, allowing you to be super efficient with resources. The downside is that you do still need to worry about those resources at least a little bit. If you wanted to, you could write Rust like Go by just making every variable a Box<Arc<...>> and using .clone() like mad, but your performance is going to take a hit.
Go (and other GCed) languages, on the other hand, do entirely free you from having to worry about memory utilization in a general sense, as the language runtime takes care of that for you. The downside is that it often does a poor job of doing so, and if you do run into one of the edge cases it's not so great at, your tools for dealing with that are severely limited. Further, it's very easy to accidentally screw up your memory usage and use far more than is necessary, leading to excessive heap churn and serious performance degradation. Because the language makes it easy, even if what you're doing is wrong, and it lacks the tools to alert you to that problem before you trip over it at runtime.
Having a simple and verbose language is not necessarily a downside.
As programmers, our bread and butter is abstractions. We use abstractions because in a very real sense what we do day to day if we removed all the abstractions would be a herculean effort that not even the best of us could manage for any period of time. Go’s idea of “simple” is limiting the number of abstractions that the language provides, and so it’s up to the programmer to use that small handful to construct more powerful ones. Every code base becomes a snowflake where each team rolled their own solution, or everyone just uses the same sets of libraries that provide the solution. You haven’t removed the complexity, you’ve just shifted it out of the language and onto a 3rd party. It’s better to have a consistent set of abstractions provided and maintained by the language and centrally developed by everyone, rather than a hodge-podge of abstractions by random 3rd parties.
I disagree about comparing languages by speed. Just because you can make Go programs as fast as Rust programs, it’s not going to be as straightforward as doing it in Rust. I’d much rather spend slightly more effort up front to write idiomatic Rust code that’s fast by construction than try to make Go code faster by applying a bunch of arcane tweaks to it.
It is fair to compare speeds, I just think it's probably the wrong argument to focus on if you're trying to convince people of the value of a language. It's definitely a supporting point, but at the end of the day, most programs don't actually need to be blazingly fast, they just need to not be dog slow. Ease of writing (correct) code, and even more importantly maintaining and debugging code, are generally far more important factors in a language's success, and those are all areas that Rust excels in.
The problem with a purely speed-focused argument is that it's always possible to cherry-pick examples in both directions. You can find some piece of idiomatic Go code that just happens to be faster than the equivalent idiomatic Rust code, and vice versa. The fact that it's undoubtedly much easier to find idiomatic Rust code that outperforms most Go code (idiomatic or not) is a much harder argument to use to convince people. The Go proponents will just argue that the ease of understanding the Go code outweighs whatever speed gains Rust has. That's why I think it's important to also point out that Go might be easier to write small snippets of, but for any realistic program it's going to be harder to write and maintain, and it will be more prone to bugs.
I feel like they are just trying to subliminally push Go and failing at the subliminal part.
Google: It’s a three-pronged attack: sub-liminal, liminal and super-liminal.
Lisa: Superliminal?
Google: I’ll show you. (leans out of window) Hey, you! Code in Golang!
Carl: Uh, yeah, all right. Lenny: I’m in.
Good god I hope you’re more productive than the C++ people.
I don’t have to be productive, I know C++.
What do you mean by that?
Is the armorer going to get someone shot by being really sloppy with safety checks?
That’s the real question.
but not as productive as those using C
There is no need to use C now that we have Rust
As an embedded systems programmer I’d like to point out that that’s not true at all.
As an embedded systems programmer, I’m really delighted that rust might finally be an option for some projects. Soon. Ish. Maybe.
Oh I agree. But my god the embedded industry is slow to update toolchains.
I would love to have Rust as an option for my ARM development, but that's years off. ARM is only now about to come out with a Visual Studio-based toolchain for their Keil C compiler instead of the proprietary IDE.
You might want to check out embassy