• Modern_medicine_isnt@lemmy.world
        6 months ago

        Uptime isn’t quality. Performance and reliability are easily faked with the right metrics. It’s trivial for software to be considered “working” on a PowerPoint slide without working well for the user.

        • Lightor@lemmy.world
          6 months ago

          Uptime is quality. It’s why uptime is in SLAs. A quality product isn’t down half the time.

          • Modern_medicine_isnt@lemmy.world
            6 months ago

            Opinions like that are why software quality sucks, and why using software is so painful for most people. “I have to use a scroller to set my phone number in the UI.” “Sure, but uptime is five 9’s, so it’s quality software.”

            • Lightor@lemmy.world
              6 months ago

              Lol, saying uptime is needed for quality is why software quality sucks? What? Uptime is part of quality; it is not the sole determinant of quality. You seem to be purposefully misunderstanding that concept.

      • Modern_medicine_isnt@lemmy.world
        6 months ago

        Uptime isn’t quality. Performance and reliability are easily faked with the right metrics. It’s trivial for software to be considered “working” on a PowerPoint slide without working well for the user.

        • Lightor@lemmy.world
          6 months ago

          Uptime indicates reliability. Reliability is a factor of quality. A quality product has a high uptime. What good is a solution that doesn’t work 20% of the time? That’s exactly how you lose clients. Why do SLAs cover topics like five 9s uptime if they don’t matter and can be faked? This makes no sense.
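          To make concrete what an availability SLA actually commits you to, here is the downtime budget behind “N nines.” This is just a sketch of the arithmetic implied by the definition; the numbers follow directly from minutes-per-year:

```python
# Illustration: the yearly downtime budget implied by an "N nines" availability SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget_minutes(availability: float) -> float:
    """Minutes of downtime per year allowed at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, sla in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    # five 9s works out to roughly 5.26 minutes of downtime per year
    print(f"{label} ({sla:.3%}): {downtime_budget_minutes(sla):.2f} min/year")
```

          That is why five 9’s shows up in contracts: it is a hard, measurable ceiling of about five minutes of outage per year, not a vibe.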

          You said quality doesn’t matter, only features. OK, what happens when those features only work 10% of the time? It doesn’t matter as long as the feature exists? This is nonsense. Why does QA even exist, then? What is the point of spending on a team that only worries about quality? They are literally called Quality Assurance. Why do companies have those teams if quality doesn’t matter? Why not just hire more engineers to pump out features? Again, this makes no sense.

          Anyone who works in software would know the role of QA and why it’s important. You claim to work in tech but seem not to understand the value of QA, which makes me suspicious; either that, or you’ve only ever been a frontline dev and never had to worry about these aspects of management and the entire SDLC. Why is tracking defects a norm in software development if quality doesn’t matter? Your whole stance just makes no sense.

          It’s trivial for software to be considered “working” on a PowerPoint slide without working well for the user

          No, it’s not trivial. What if “not working well” means you can’t save or type? Not working well means not working as intended, which means it does not satisfy the need it was built to fill. You can have the feature to save, but if it only works half the time then, according to you, that’s fine. You might lose your work, but the feature is there, so who cares about the quality of the feature… If it only saves sometimes or corrupts your file, those are just quality issues that no one cares about; they are “trivial”?

          • Modern_medicine_isnt@lemmy.world
            6 months ago

            See, you just set the bar so low. Being able to save isn’t working well; it’s just working. And I have held the title of QA in the past, which is in part how I know these things. In the last 5 years or so, companies have been laying off QAs and telling devs to do the job. Real QA is hard. If it really mattered, you would have multiple QA people per dev, but the ratio is always the other way. A QA can’t test the new feature and make sure ALL the old ones still work at the rate a dev can turn out code. Even keeping up with features 1 to 1 would be really challenging. We have automation to try to keep up with the old features, but that needs to be maintained as well. QA is always a case of good enough.

            And just like at Boeing, management will discourage QAs from reporting everything they find that is wrong, because they don’t want a paper trail of closing tickets as won’t-fix. I’ve been to QA conferences and listened to plenty of seasoned QAs talk about the art of knowing what to report and what not to, and how to focus effort on what management will actually OK to get fixed. It’s a whole art for a reason.

            I was encouraged to shift out of that profession because my skills would get much better pay, and more stable jobs, in DevOps. My job is sufficiently obscure to most management that I can actually care about the users of what I write. But I also get to see more metrics that show how the software fails its users while still selling. I have even been asked to produce metrics that would misrepresent how well the software works, for use in upper-level meetings, and I have heard many others say the same. Some have said that is even a requirement to be a principal engineer at bigger companies, which is why I won’t take those jobs. The “good enough” I am witness to, and part of, is bad enough; I don’t want to add to it any more.

            • Lightor@lemmy.world
              6 months ago

              I’m setting the bar low? Sure, and you’re moving the goalposts. What “well” means is incredibly subjective.

              You worked in QA, cool; I’ve managed the entire R&D org of a nationwide company, including all of QA.

              You’re saying that since companies don’t invest in it enough, it doesn’t matter at all? Then why do they invest at all, if it truly doesn’t matter?

              Yes, a QA can test old features and keep up with new ones. WTF, have you never heard of a regression test suite? And you worked in QA? OK. Maybe acknowledging that AQA (automated QA) is an entire field might solve that already-solved problem.
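              To make that concrete, a minimal automated regression test for a hypothetical “save” feature might look like this (a sketch; the function names and file format are invented for illustration):

```python
# Sketch (hypothetical names): a save feature plus a regression test that
# pins down its known-good behavior so new work can't silently break it.
import json
import os
import tempfile

def save_document(text: str, path: str) -> None:
    # Write to a temp file and rename, so a crash mid-save can't corrupt the file.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"body": text}, f)
    os.replace(tmp, path)

def load_document(path: str) -> str:
    with open(path) as f:
        return json.load(f)["body"]

def test_save_roundtrip():
    # Regression test: whatever we save must come back unchanged.
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "doc.json")
        save_document("hello, world", path)
        assert load_document(path) == "hello, world"

test_save_roundtrip()  # a runner like pytest would collect this automatically
```

              A CI pipeline reruns a suite of tests like this on every change, which is exactly how a small QA team keeps guarding old features while devs add new ones.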

              You did a whole lot of complaining and told irrelevant stories, but you never answered any of the questions I’ve been asking you across multiple comments…

              • Modern_medicine_isnt@lemmy.world
                6 months ago

                What goalpost have I moved? My initial comment could have said “work well for the user,” but the second sentence implied that pretty clearly, and I am still saying it now. And great for you. You probably drank the Kool-Aid to get that position, so you feel the need to carry water for the illusion that upper management always tries to project. I mean, you might be the exception and truly believe the things you say. Maybe you even work for one of the rare companies where it is true. But the vast majority of people working in the field that I have talked to have said that just isn’t how it is in most places. Many said it used to be, when their company was small… but that it changed.

                And yes, I wrote regression tests, and I worked hard to maintain them while writing tests for new features. But with a 5-to-1 ratio of devs to QA, it wasn’t possible not to cut corners. A year after I changed jobs, I found out they had lowered the bar for releasing to 55% of the regression tests passing. I never had the tools to make the tests resilient to change, since no one owned the automation tooling, and the next guy just didn’t care as much. The job I moved to was QA automation, so the QAs were my customers. I did my best there to give them automation that would reduce maintenance costs, but we weren’t allowed to buy anything; we had to write it all, and back then open source wasn’t what it is today. So the story was the same: cut corners on testing. And of course the age-old quote: “Why is QA slowing down our release process?” Not “Why are the devs writing poor code?” The devs weren’t bad, either, but they were pressed to get features out fast.

                As for why they invest in it at all: optics is a big part of it, but also to help maintain that low bar you spoke of. The moment industry trends started touting the Swiss Army knife developer who could do it all, including testing, they dropped QA teams like a bad habit. Presentations were given on how too much testing was bad and fewer tests were better… That pendulum swings back and forth every decade or so, because quality drops below the low bar, and the same exec who got a promotion for getting rid of the QA team at his last job 7 years ago gets accolades for bringing it back at his new job.

                • Lightor@lemmy.world
                  6 months ago

                  The goalpost being moved is what “not working well” means. If it’s not working well, I would consider it not working as designed; otherwise I would just call it poorly designed. And if something doesn’t work as designed, then things like outages and data issues are a problem.

                  "You probably drank the kool-aid to get that position, so you feel the need to claim carry water for the illusion that upper management always try to project. "

                  Sure. Or I worked my ass off to get there and learned things along the way that you are not aware of, things I have pointed out throughout our conversation.

                  "And yes I wrote regression tests. And I worked hard to maintain them while writing tests on features. But with a 5 to 1 ratio of devs to QA, it wasn’t possible to not cut corners. "

                  It is very possible. The standard is 3–5 engineers per QA; that has been the standard for a while. Look up what the ratio should be. I don’t know what ratio you’re expecting in order to keep up, 1:2? But this is very common. We have a ratio of 1:4 with about 40 devs, and my QA team keeps up without issue. If you are seeing problems, it’s usually due to a poor process or a lack of skill on the team.

                  “And of course the age-old quote: ‘Why is QA slowing down our release process?’”

                  You must work for a backward company. I’ve worked for about a dozen tech companies, and not once, after I explained the need for QA, did they ever say that. You explain what a Sev1 incident is or how a hack can impact the company, and smart people listen. You may have worked for bad companies that put this taste in your mouth, but I have worked in some of the largest tech hubs (the Bay Area, NY, SLC), and this is a huge exception, not the rule.

                  "The moment industry trends started touting the Swiss army knife developer who could do it all including testing, they dropped qa teams like a bad habit. "

                  What, when was this? Agile (Scrum) development is pretty much the standard, with waterfall a distant second. In both cases you have dedicated QA, with devs possibly writing unit tests. It is a massive antipattern to have a dev be the only one QAing their own work; it always has been, and it always will be.

                  "Presentations were given on how too much testing was bad, and less tests were better… that pendulum swings back and forth every decade or so. "

                  Yeah, except that pendulum swinging can cause events that tank entire companies. Any company worth its salt would never fall into that trap, because they know it could burn their investment to the ground in a heartbeat.

                  • Modern_medicine_isnt@lemmy.world
                    6 months ago

                    The shift was around 2018 or so. It was talked about on QA forums and at conferences. It was likely talked about at the conferences management go to as well, but I can’t confirm that. It seems like you must work in a specialized industry. 1 QA to 4 devs is about the standard, and they keep up by cutting corners. The effort required to create test automation for a feature is on par with the effort to create the feature itself, and then you have to add in old tests that need to be maintained. There is no way one person can cover 4 and not cut corners.

                    The company that takes the risks gets the product out before those that don’t, and the ones lucky enough not to have a major thing tank them win in the end. That is just how the system works.