Referring more to smaller places like my own - a few hundred employees with a ~20-person IT team (~10 developers).

I read enough about testing that it seems to be industry standard. But whenever I talk to coworkers and my EM, it’s generally, “That would be nice, but it’s not practical for our size, and the business wouldn’t allow us to slow down for that.” We have ~5 manual testers, so things aren’t considered “untested”, but issues still frequently slip through. It’s insurance software, so at least bugs aren’t killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it’s not straightforward. I’ve read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I’m wondering if I’m just overly idealistic, and automated testing is more of a FAANG / bigger company thing.

  • BehindTheBarrier@programming.dev
    7 months ago

    I’m on a similarly sized team, and we have put more effort into automated testing lately. We got an experienced person on the team who knows his shit and is invested in improving our testing, and it’s definitely worth it. Manual testing tests the code now; automated testing checks the code later. That’s very important, because when 5 people test things manually, they aren’t going to retest everything every time on top of all the new stuff. It’s too boring.

    So yes, you really REALLY should have automated testing. If you have 20 people, I’d guess you’re developing something that is too large for a single person to have in-depth knowledge of every part.

    Any team should have automated tests. More specifically, you should write tests that check “business functionality”, not just that each function does exactly what it’s supposed to do. Our test expert wrote a test for something called “ThisComponentsDisplayValueShouldBeZeroWhenUndefined”. (Here a “component” is something the users see and always expect to have a value; there are other components that might not show a value.)

    Later I had to change the data processing because another “component” did not show zero in an edge case. I fixed that edge case, but I also broke the test for the first component, so it was immediately clear that I had broken something that used to work. A manual tester might have noticed, but these were separate components, and they could still have seen 0 on the broken one simply because their test data happened to be 0, or they might not have known that was a requirement at all!
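
    Just to sketch the shape of it (Java/JUnit here purely for illustration, and the class and names are invented for the example), a test like that boils down to:

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        class ComponentDisplayTest {

            // Hypothetical display model: this component is always shown to users,
            // so the business rule is that missing data renders as 0, never blank.
            record DisplayValue(Double raw) {
                double shown() { return raw == null ? 0.0 : raw; }
            }

            @Test
            void thisComponentsDisplayValueShouldBeZeroWhenUndefined() {
                assertEquals(0.0, new DisplayValue(null).shown());
            }
        }

    The test states the requirement (“shows zero when undefined”), not how the code happens to compute it, so it keeps protecting that behaviour whoever touches the data processing next.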

    We just recently started requiring unit tests to be green before a feature can be merged. It brings a lot more comfort, especially because you can change systems that deal with calculations with much more confidence when you know tests check that the results are unchanged.

    • yournameplease@programming.devOP
      7 months ago

      Was there any event that prompted more investment in testing? I feel like something catastrophic would need to happen before anyone here would consider serious testing investment. In the past (before I joined) there were apparently people who tried to get Selenium suites going, but nothing ever stuck.

      I think nobody sees value in improving something that has been more or less “good enough” for so long. In our legacy software, most major development is copy, paste, and change things, which I guess reduces the chance of regressions (at the cost of making big changes much, much slower). I think we have close to a hundred 4,000-line Java files copied from the same original, plus another 20-30 scripts and configs for each…

      We are doing a “microservices rewrite” that interfaces with the legacy app (which feels like a death march project by now), and I think it inherited many of the testing difficulties of the old system, in part due to my inexperience when we started. Less code duplication, but now lots of enormous JSON payloads being thrown all over the network.

      I agree that manual testing is not enough, but I can’t seem to get much agreement. I think I do get value when I write unit tests, but I can’t point to concrete value because there’s no obvious metric I’m gaining. I like that once I’ve tested an area, I know nobody will quietly revert or break it (unless they remove the tests, I suppose), but our coverage is low enough that I don’t trust a green run to mean the system actually works.

      • BehindTheBarrier@programming.dev
        7 months ago

        Our main motivator was, and is, that manual testing is very time-consuming and uninteresting for devs. Spending upwards of a week before a release, because the team has to set up, pick, and perform all the feature tests again on the release candidate, is both time and money. And we still saw things slip through now and then.

        Our application is time-critical legacy code, about 30 years old, spread between C# and database code and running in different variations with different requirements, so a single page may display differently depending on where it’s running. Changing one thing can often affect others, so it is sometimes very tiresome to verify even the smallest changes, since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also don’t unit test much, because that is complicated and prone to breaking), we not only have to test changes, we often have to check for regressions by comparing by hand against the old version.

        We have a complicated system with a few integrations; setting up all the test scenarios not only takes time during testing, it also takes time for the dev to prepare the instructions. And as for the calculations I mentioned, going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience when that is exactly something automated tests can completely take over for you.

        As our application grows, so does all the manual testing required for a single change. All that effort spent on manual testing and preparation can instead often be put into writing tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces and leave a lot of the complicated edge cases and calculation checks to automated tests. It’s a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.
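
        To make the calculation part concrete, those checks can be as plain as pinning a known result (Java/JUnit again just as an illustration; the calculator and numbers are invented for the example):

            import static org.junit.jupiter.api.Assertions.assertEquals;
            import org.junit.jupiter.api.Test;

            import java.math.BigDecimal;
            import java.math.RoundingMode;

            class PremiumCalculationRegressionTest {

                // Stand-in for whatever calculation code already exists.
                static BigDecimal yearlyPremium(BigDecimal base, BigDecimal riskFactor) {
                    return base.multiply(riskFactor).setScale(2, RoundingMode.HALF_UP);
                }

                @Test
                void yearlyPremiumIsUnchangedForKnownCase() {
                    // Result pinned from the current, trusted version. If a refactor
                    // changes it, this fails and someone has to decide that on purpose.
                    assertEquals(new BigDecimal("1235.40"),
                            yearlyPremium(new BigDecimal("1029.50"), new BigDecimal("1.20")));
                }
            }

        A whole release’s worth of “is the calculated result still the same?” checks then runs in seconds instead of someone clicking through them by hand.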

        • yournameplease@programming.devOP
          7 months ago

          I guess since we have manual QAs, there’s less motivation to get away from manual testing, as it’s literally their job description. Not that we aren’t still wasting time and money: the other devs and I still spend a lot of time manually sanity-checking things ourselves.

          That all does sound like my dream end goal, though. Thanks for the responses.