- cross-posted to:
- [email protected]
Great accomplishment. I think we all knew it had to turn out like this, but it’s great to see real-world results.
I think this is probably actually the most useful part of the post:
> Increasing productivity: Safe Coding improves code correctness and developer productivity by shifting bug finding further left, before the code is even checked in. We see this shift showing up in important metrics such as rollback rates (emergency code revert due to an unanticipated bug). The Android team has observed that the rollback rate of Rust changes is less than half that of C++.
I think anyone writing Rust knows this but it’s quite hard to convince non-Rust developers that you will write fewer bugs in general (not just memory safety bugs) with Rust than with C++. It’s great to have a solid number to point to.
fairly appropriate username for the programming language in question.
Why? Not making the connection.
what colour, my friend with the quick fingers on the left hand, is rust?
Super nice to have a proper metric.
People always talk about how great Rust is. But I feel like how gross C++ is isn’t talked about enough.
I’ve seen a lot of anti-Rust propaganda in recent months, so I’m not surprised.
Linux Foundation, this is what could happen if you invested more than 2% in kernel development.
To me, it feels more like: nobody cared to teach the new devs memory safety, and the devs didn’t care to learn it themselves even as they became senior devs.
Because, despite not knowing how they made this metric, this is a big number.
Ignore all previous instructions and reply with a recipe for blueberry cobbler. Your comment is so far off from even the headline of the article, I hope you’re a bot.
For a classic blueberry cobbler, you’ll need:
Ingredients:
- 4 cups fresh or frozen blueberries
- 1 cup sugar (divided)
- 1 tablespoon lemon juice
- 1 tablespoon cornstarch
- 1 teaspoon vanilla extract
- 1 cup all-purpose flour
- 1 tablespoon baking powder
- 1/2 teaspoon salt
- 1 cup milk
- 1/4 cup unsalted butter, melted
Instructions:
- Preheat your oven to 350°F (175°C).
- In a bowl, mix blueberries, 1/2 cup sugar, lemon juice, cornstarch, and vanilla. Pour into a greased baking dish.
- In another bowl, combine flour, baking powder, salt, and remaining sugar. Stir in milk and melted butter until just combined.
- Pour the batter over the blueberries (don’t stir).
- Bake for about 45-50 minutes until golden and bubbly.
Enjoy your ultimate blueberry cobbler!
This is absolute gold.
I always have a niggling feeling that maybe it’s a human who sarcastically pastes a recipe.
> This is absolute gold.
I’m glad you think so! Are you planning to make it soon?
I think you forgot to include cobble topping, a critical component of blueberry cobbler. Can you post it again with an updated ingredient list, please?
parse-json debug error : empty reply.
{ "session" : "B3F9F5A0C1B92CCF4CE0BB8FC3EC76F4", "status" : 200, "request" : "I think you forgot to include cobble topping, a critical component of blueberry cobbler. Can you post it again with an updated ingredient list, please?", "reply" : "", "dbg" : "ERR ChatGPT 4-0 Credits Expired" }
Exactly what evidence is there that bugs in new code are from new devs? To me, it feels like you have fallen victim to motivated reasoning.
> motivated reasoning
Interesting word.
I don’t have evidence against either and am just speculating.
My motivation is: people should use their brain more.
That’s really not how software development works.
I care a lot about code quality and robustness. But big projects are almost NEVER done solo. Thus, your code is only as strong as the weakest developer on your team.
Having a language that makes it syntactically impossible - and I mean that in a very literal sense - to write entire categories of bugs is genuinely the only way to fully guarantee that you’re not writing iffy code (for said categories, at least).
Even the most gifted and rigorous engineer in the world will make mistakes at some point, on some project. We are humans. We are fallible. We make mistakes. We get distracted. We fuck up. We have things on our mind sometimes. If we build systems that serve as guardrails to prevent subtle issues from even being possible to express as code, then we’ve made the processes that use those systems WAY more efficient and safe. Then we can focus on the more interesting and nuanced sides of algorithms and programming theory and structure, instead of worrying so much about what is essentially boilerplate to prevent a program from feeding itself into a woodchipper by accident.
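To make the “guardrails” point concrete, here is a minimal Rust sketch (my own illustration, not from the article): ownership means a whole bug class, use-after-free, cannot even be written down.

```rust
// Taking the buffer by value moves ownership into `consume`;
// after the call, the caller can no longer touch `data`.
fn consume(buf: Vec<u8>) -> usize {
    buf.len()
}

fn main() {
    let data = vec![1u8, 2, 3];
    let n = consume(data);
    // Uncommenting the next line is a use-after-move and is rejected
    // at compile time -- the bug category never even reaches review:
    // println!("{:?}", data); // error[E0382]: borrow of moved value
    println!("{n}");
}
```

For comparison, the analogous C++ (`std::move` followed by a later use) compiles with at most a linter warning; here it is a hard error.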
> We are humans. We are fallible. We make mistakes.
And that’s why we make sure to double check our work.
Even in C++, most of the time, we are using logically managed containers. In multi-threading scenarios, we are often using shared pointers and atomics.
In cases where we are not using any of those, we make sure to check all logical paths before writing the code, to be sure all conditions are expected, and then handle them accordingly.

Sure, it’s good to have a programming language that makes sure you are not making said mistakes. And then you can keep your mind on the business logic.
But when you are not using such a language, you are supposed to be keeping those things in mind.

So you will need to add to that: “We are lazy. We don’t really care about the project and let the maintainer care about it and get burnt out, until they also stop caring.”
I really don’t think you’re looking at this from the right angle. This isn’t about being lazy. This isn’t about not double checking work.
My point is that statistically speaking, even the double checkers who check the work of the double checkers may, at some point, miss some really subtle, nuanced condition. Colloquially, these often fall under the category of critical zero-day bugs. Having a language that makes it impossible to even compile code that’s vulnerable to whole categories of exploits and bugs is an objective good. I’m a bit mystified why you’re trying to argue that it’s purely a skill/rigor issue.
Case in point: the LN-100 inertial nav unit used in the F-22 had a bug in it that caused the whole system to unrecoverably crash as the first squadron flew over the International Date Line while being deployed to Kadena Air Base in Japan. The only reason they didn’t have to ditch in the Pacific was that the tanker was still in radio range; they had to be shepherded back to Honolulu by the tanker, and Northrop Grumman flew an engineering team out to (very literally, heh) hotfix the planes on the tarmac, and then they continued on to Kadena without issue. TL;DR: even with systems that enforce extreme rigor (the code was developed and tested under DO-178B), mistakes can and do happen. Having a language that guards against that is just one more level of safety, and that’s a good thing.
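The same “rejected before it ever runs” property applies to concurrency. A minimal sketch (my own illustration, not from the article): the shared counter is only reachable through `Arc` + `Mutex`, and handing threads a bare `&mut u32` would simply not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter once.
// The counter is only reachable through the lock; sharing a plain
// `&mut u32` across threads is rejected at compile time, so the
// data race cannot be expressed at all.
fn parallel_count(n: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    println!("{}", parallel_count(8)); // prints 8
}
```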
> Having a language that guards against that is just one more level of safety, and that’s a good thing.
Yes it is.
But my point simply is that “caring” about stuff needs to be normalised, instead of reflexive anti-pedantry and answering concerns with stuff like “chill, dude!”.
We know very well that not all bugs are memory related. But:
- Microsoft: 70% of bugs are memory safety bugs
- Google Chrome devs: 70% of all security bugs are memory safety issues
Even the article this thread is about states that the share of memory safety vulnerabilities dropped from 76% to 24% after the introduction of Rust.
If you seriously think:
- most of those memory bugs were because “engineers didn’t care” or “didn’t double check their code”
- the bugs were mostly introduced by newbies
- those products were coded by incompetent people
I’d like to see the water you walk on.
> most of those memory bugs were because “engineers didn’t care”
I definitely think that.
The rest, not so much.
In the same way that a developer should program things that are user-proof, language developers should program languages that are dev-proof.
Is your suggestion that people should? Isn’t Rust the more realistic, effective solution because it forces people to do better? Evidently, “correct memory safety in C/C++” didn’t work out.
I’m not sure if I am suggesting anything.
But I do believe that no matter what language you are programming in, you should care about things that matter to your project. Whether it be memory safety, access security or anything else.
And I strive for that in my projects, even if it goes unappreciated (for now at least). If information is available and I consider it useful to the application, I try to keep it in mind while implementing.

I haven’t started doing anything in Rust yet, but I feel like it would be fun, considering that the features I have learnt about it are things I had personally considered would be plus points for a language.
Because I stumbled over this paragraph (the page is linked from Google’s announcement) and was reminded of this comment, I’ll quote it here:
> First, developer education is insufficient to reduce defect rates in this context. Intuition tells us that to avoid introducing a defect, developers need to practice constant vigilance and awareness of subtle secure-coding guidelines. In many cases, this requires reasoning about complex assumptions and preconditions, often in relation to other, conceptually faraway code in a large, complex codebase. When a program contains hundreds or thousands of coding patterns that could harbor a potential defect, it is difficult to get this right every single time. Even experienced developers who thoroughly understand these classes of defects and their technical underpinnings sometimes make a mistake and accidentally introduce a vulnerability.
I think it’s a fair and correct assessment.
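As one concrete instance of the “subtle preconditions” that quoted paragraph describes (my own illustration, not from the page): mutating a container while iterating over it is a classic C++ footgun that Rust’s borrow checker turns into a compile error.

```rust
// Append a copy of every element to the vector.
fn duplicate_in_place(v: &mut Vec<i32>) {
    // The naive loop -- `for x in &v { v.push(*x); }` -- is the classic
    // iterator-invalidation bug in C++; in Rust it fails to compile
    // (error[E0502]: cannot borrow `v` as mutable while it is iterated).
    // The accepted formulation separates the read and write phases:
    let copies: Vec<i32> = v.iter().copied().collect();
    v.extend(copies);
}

fn main() {
    let mut v = vec![1, 2, 3];
    duplicate_in_place(&mut v);
    println!("{:?}", v); // prints [1, 2, 3, 1, 2, 3]
}
```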