• linearchaos@lemmy.world · 1 year ago

      I love writing tests. It’s all the shit that comes after that sucks.

      Those first few pushes that all come up green feel like magic. Then comes that first red that points out something you missed; you go back, make a quick change, it’s green again, and it’s the best thing you’ve ever seen.

      But sooner or later, you throw a couple of big red bois on a production build that don’t make any sense. You start digging through the code of some guy who only writes comments in haiku and is under the impression he gets paid by how many layers deep he can nest a ternary.

      Sooner or later you figure out it’s just an edge case there’s nothing actually wrong. You’ll need to refactor one of the systems but you still have production to push to fix a critical bug, so you hotwire the test and write it off as P1 tech debt.
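
      (A rough sketch of what that “hotwire” often looks like in practice, assuming a Jest/TypeScript setup; calculateInvoiceTotal, the module path, and the rounding scenario are made up for illustration. The test gets skipped rather than deleted, with the P1 note attached.)

      ```ts
      // Hypothetical example of "hotwiring" a red test so the critical fix can ship.
      // TODO(P1 tech debt): rounding edge case for mixed-currency invoices.
      // Re-enable once the rounding system is refactored.
      import { calculateInvoiceTotal } from "./billing"; // made-up module

      test.skip("invoice total handles mixed-currency line items", () => {
        const total = calculateInvoiceTotal([
          { amount: 10.005, currency: "USD" },
          { amount: 20.004, currency: "EUR" },
        ]);
        expect(total.USD).toBeCloseTo(10.01);
        expect(total.EUR).toBeCloseTo(20.0);
      });
      ```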

      Eventually, you end up with unit tests that aren’t P1 and they fail. If you’re understaffed or overscoped, sooner or later you just have a bunch of half-assed zombie tests sitting around. Unless you can convince production to let you go back and clear up your tech debt, it’s just a unit test graveyard. The big bumpers are still in place, so nothing serious can fail, but you never seem to be able to get back and make everything bright and shiny again.

      • thisisawayoflife@lemmy.world · 1 year ago

        That does sound like a nightmare. I’m assuming you mean a failed test when you say “red boi”, and that made me wonder about PR practices. I’m used to a very strict review environment with fairly quick turnaround on reviews or on requests to walk through the code. I’ve heard horror stories about people not getting PRs reviewed for days or weeks, or some people just plain refusing to review code. I work on microservices that are usually under 10,000 lines each, though, not something with over a million lines of legacy code.

    • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

      I wish more developers would do QA. After working with QA, my code improved so much because I started proactively thinking about how things might break and about potential issues I never would have considered otherwise.

      • thisisawayoflife@lemmy.world · 1 year ago

        Yep. When I was still doing QA, I saw some pretty terrible practices and tested code that barely built. Now, as a software engineer, I have no QA and rely heavily on my own testing practices: unit testing first, then integration testing and system/e2e testing. I can’t guarantee the code is bug free, and there are parts I know could be refactored (tech debt), but I know each piece is tested and does what I expect it to. As corny as it sounds, I’m a big fan of TDD. Unit/IT/E2E tests don’t replace QA in my opinion, but they set QA up to focus on the bugs that matter rather than the basic stuff.
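
        (For anyone who hasn’t seen the red-green loop being described, here’s a minimal sketch assuming Jest and TypeScript; the slugify helper and both file names are hypothetical. The test is written first and starts red, then the simplest implementation turns it green.)

        ```ts
        // slugify.test.ts -- written first; red until slugify exists, green afterwards
        import { slugify } from "./slugify";

        describe("slugify", () => {
          it("lowercases and joins words with hyphens", () => {
            expect(slugify("Hello World")).toBe("hello-world");
          });

          it("drops characters that are not alphanumeric", () => {
            expect(slugify("Rock & Roll!")).toBe("rock-roll");
          });
        });

        // slugify.ts -- the simplest implementation that makes both tests pass
        export function slugify(input: string): string {
          return input
            .toLowerCase()
            .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into hyphens
            .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
        }
        ```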

    • ExLisper@linux.community · 1 year ago

      Personally, I dislike writing useless tests. I use tests as a development tool (it’s easier to implement some DB operations, for example, by writing tests than by performing the actions manually) and to cover logic that can actually break because of changes in other parts of the code without me noticing. Testing things like “the button calls the click() method when clicked” is, IMHO, pointless. If someone can change that code and push to prod without testing manually or doing a code review, they can also disable the test without anyone noticing.
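
      (A sketch of that contrast, assuming Jest/TypeScript; applyDiscount and its module are made up as an example of logic that other code can quietly break.)

      ```ts
      import { applyDiscount } from "./pricing"; // hypothetical module

      // The kind of test the comment calls pointless: it only restates the wiring.
      test("button calls onClick when clicked", () => {
        const onClick = jest.fn();
        const button = { click: () => onClick() }; // stand-in for a UI component
        button.click();
        expect(onClick).toHaveBeenCalledTimes(1);
      });

      // The kind of test worth keeping: logic another part of the code can silently break.
      test("discount never drops the price below zero", () => {
        expect(applyDiscount({ price: 5, discountPercent: 150 })).toBe(0);
      });
      ```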

      • thisisawayoflife@lemmy.world · 1 year ago

        I agree, writing meaningless tests helps nobody and just creates extra work for everyone. Unit tests should prove functionality, and integration tests act as a vise. Much like you said, if a test breaks in that scenario, you know something in another class has violated that contract. Good tests have meaningful names and prove functionality, especially in the backend, where it matters most.
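
        (One way to picture the “vise”, sketched with Jest/TypeScript; serializeOrder and the payload shape are invented for illustration. The test pins a contract that a change in another class would otherwise reshape unnoticed.)

        ```ts
        import { serializeOrder } from "./orderSerializer"; // hypothetical module

        // "Vise" test: pins the wire format a downstream service relies on.
        // If someone reshapes the serializer elsewhere, this goes red immediately.
        test("serializeOrder keeps the v1 wire contract for the payments service", () => {
          const payload = serializeOrder({ id: "42", items: [{ sku: "A1", qty: 2 }] });

          expect(payload).toEqual({
            order_id: "42",
            line_items: [{ sku: "A1", quantity: 2 }],
            schema_version: 1,
          });
        });
        ```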

        You mention (what I would consider) the bad practice of allowing merges without review. While that may be fine on a personal project with only one dev, strict review guidelines should exist so that nobody can just “push to prod”. CI/CD is your friend: use it so that staging and prod never break. Again, I’m used to working on systems used by scores of millions of users, so I appreciate forced automated validation. Nobody likes a dumb break on the Friday before vacation.
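
        (A minimal sketch of that kind of gate, assuming a Node/TypeScript project; the script name and the npm test command are illustrative, and in practice the same effect usually comes from CI pipeline config plus branch protection rather than a hand-rolled script.)

        ```ts
        // ci-gate.ts -- hypothetical pre-merge check a CI job might run:
        // run the test suite and exit non-zero so any red blocks the merge.
        import { execSync } from "node:child_process";

        try {
          execSync("npm test -- --ci", { stdio: "inherit" });
        } catch {
          console.error("Tests failed: blocking merge to main.");
          process.exit(1);
        }
        ```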