I’m curious about how software is created and how it evolves over time. I’m afraid that at some point we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

Are there any instances of this happening, where something is designed with a flaw that isn’t realized until much later, necessitating scrapping the whole thing and starting from scratch?

  • teawrecks@sopuli.xyz · 7 months ago

    “0.001% of the time, I wanna know every time 👉😎👉”

    Yeah, I get that. But are we talking about during development (which is when we’d be choosing between C and something else)? In that case, you should be running instrumented builds, or builds with debug functionality enabled. I agree that most programs just fail and don’t tell you how to go about enabling debug info or anything, and that could be improved.
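
    Something like this is what I mean by debug functionality you can flip on (just a rough sketch; DBG_LOG and the DEBUG define are made-up names, and the ##__VA_ARGS__ form is a GCC/Clang extension):

        #include <stdio.h>

        /* Compiled in only when the build defines DEBUG, so an instrumented
         * build prints file/line context and a release build costs nothing. */
        #ifdef DEBUG
        #define DBG_LOG(fmt, ...) \
            fprintf(stderr, "[%s:%d] " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__)
        #else
        #define DBG_LOG(fmt, ...) ((void)0)
        #endif

        int main(void) {
            DBG_LOG("opening config, uid=%d", 1000);
            return 0;
        }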

    For the “Permission Denied” example, I also assume we’re making system calls and having them fail? In that case it seems straightforward: the user you’re running as can’t access the resource you were trying to access. But if we’re talking about some random log file just saying “Error: permission denied” and leaving you nothing to go on, that’s on the program that dumped the error to produce more useful information.
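
    To be concrete (just a sketch; the path is made up), the difference between dumping the bare error and producing something you can act on is roughly this:

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            const char *path = "/etc/someservice/secret.key";  /* example path */
            int fd = open(path, O_RDONLY);
            if (fd < 0) {
                /* Unhelpful: "Error: Permission denied" with nothing to go on. */
                /* fprintf(stderr, "Error: %s\n", strerror(errno)); */

                /* Helpful: only the program knows the context (which file, which
                 * operation), so it has to put that into the message itself. */
                fprintf(stderr, "open(\"%s\") for reading failed: %s\n",
                        path, strerror(errno));
                return 1;
            }
            close(fd);
            return 0;
        }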

    In general, you often don’t want to leak more info than just Worked or Didn’t Work, for security reasons, or for a mix of security and performance reasons (possible DoS attacks).

    • taladar · 7 months ago

      During development is just about the only time that doesn’t matter, because you have direct access to the source code to figure out exactly which function failed. As a sysadmin I don’t have the luxury of reproducing every issue with a debug build, a debugger running, and/or print statements added to figure out where exactly that value originally came from. I really need to know why it failed the first time around.

      • teawrecks@sopuli.xyz · 7 months ago

        Yeah, so it sounds like your complaint is actually with the application not propagating relevant error-handling information to where it’s most convenient for you to read it. Linux is not at fault in your example because, as you said, it returns all the information needed to fix the issue to the one who developed the code, and they just dropped the ball.

        Maybe there’s a flag you can set to dump those kinds of errors to a log? But even then, some apps use the fail case as part of normal operation (try to open a file; if we can’t, do this other thing). You wouldn’t actually want to know about every single failure, just the ones that the application considers fatal.
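
        Roughly what I mean, as a sketch (the config file name and the split between “expected” and “fatal” are made up for the example):

            #include <errno.h>
            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>

            /* A missing config file is part of normal operation: fall back to
             * defaults silently. Anything else is treated as fatal and reported
             * with full context. */
            static FILE *open_config(const char *path) {
                FILE *f = fopen(path, "r");
                if (f == NULL && errno != ENOENT) {
                    fprintf(stderr, "fatal: could not read config \"%s\": %s\n",
                            path, strerror(errno));
                    exit(EXIT_FAILURE);
                }
                return f;  /* NULL means "no config, use defaults" */
            }

            int main(void) {
                FILE *cfg = open_config("app.conf");
                if (cfg == NULL) {
                    /* expected case: run with built-in defaults */
                } else {
                    /* parse cfg here ... */
                    fclose(cfg);
                }
                return 0;
            }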

        As long as you’re running on a Turing-complete machine, it’s on the app itself to sufficiently document what qualifies as an error and why it happened.

        • taladar · 7 months ago

          The whole point of my complaint is that shitty C conventions produce shitty error messages. If I could rely on the programmer to work around those stupid conventions every time by actually checking the error and then enriching it with all relevant information, I would have no complaints.
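
          Roughly what working around them looks like (a sketch; the struct and names are made up): instead of handing the caller a bare -1 and errno, hand back everything needed to produce a useful message:

              #include <errno.h>
              #include <fcntl.h>
              #include <stdio.h>
              #include <string.h>
              #include <unistd.h>

              /* Keeps the context the bare errno convention throws away:
               * which operation, on which path, and the errno value. */
              struct err_info {
                  const char *op;
                  const char *path;
                  int         saved_errno;
              };

              /* Returns the fd on success, -1 on failure with *err filled in. */
              static int open_for_read(const char *path, struct err_info *err) {
                  int fd = open(path, O_RDONLY);
                  if (fd < 0) {
                      err->op = "open for reading";
                      err->path = path;
                      err->saved_errno = errno;
                  }
                  return fd;
              }

              int main(void) {
                  struct err_info err;
                  int fd = open_for_read("/var/lib/someapp/state.db", &err);
                  if (fd < 0) {
                      /* The enriched message says what failed, on what, and why. */
                      fprintf(stderr, "%s \"%s\" failed: %s\n",
                              err.op, err.path, strerror(err.saved_errno));
                      return 1;
                  }
                  close(fd);
                  return 0;
              }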

        • taladar · 7 months ago

          I know about strace, but strace still requires me to reproduce the issue and then to look at backtraces if nobody bothered to include any detail in the error.

          • uis@lemm.ee · 7 months ago

            Somehow a lack of backtrace and of details in the error is a “C-based assumption”.