Are there formal ways of quantifying potential flaws or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a formal risk assessment?
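On the complexity-measure side, cyclomatic complexity is the usual formal metric, and it maps directly onto a test floor: each linearly independent path through a function needs at least one test. A minimal sketch, assuming Python and the third-party radon library (my choice of tooling, not something from this thread; `payments.py` and the threshold of 10 are illustrative):

```python
# Minimal sketch: flag functions whose cyclomatic complexity suggests
# they need more test cases than they probably have. Assumes the
# third-party "radon" library; payments.py and the threshold of 10
# are illustrative, not from the question.
from radon.complexity import cc_visit

source = open("payments.py").read()

for block in cc_visit(source):
    # Complexity counts linearly independent paths through a function;
    # each path needs at least one test, so it acts as a rough floor
    # on how many cases a thorough suite should contain.
    if block.complexity > 10:
        print(f"{block.name}: complexity {block.complexity}")
```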

Experience tells me I need to be extra careful around certain things: user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.
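A lightweight way to formalise that instinct is a risk matrix: score each area for likelihood of a defect and impact if one ships, then allocate test effort in order of the product. A sketch in Python, where all the areas and scores are made up for illustration:

```python
# Hypothetical risk matrix: score each area 1-5 for likelihood of a
# defect and 1-5 for impact if one ships, then prioritise test effort
# by the product. All areas and scores here are illustrative.
areas = {
    "user input": (5, 3),
    "publicly exposed surface": (4, 4),
    "third-party services": (3, 4),
    "financial data": (2, 5),
    "batch data migration": (2, 5),
}

ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk {likelihood * impact} (L{likelihood} x I{impact})")
```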

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

  • bluGill@kbin.social · 1 year ago

    I prefer to count and report the total number of tests run as part of each build. We get impressively large numbers, but there is no way to set a specific goal for the exact number; we can always go higher.
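    A minimal sketch of that kind of per-build reporting, assuming pytest (the comment doesn't name any tooling; `last_count.txt` is a hypothetical baseline file for comparing against the previous build):

    ```python
    # Sketch: count collected tests per build and compare to the previous
    # build, since there is no meaningful absolute target to aim for.
    # Assumes pytest; "last_count.txt" is a hypothetical baseline file.
    import pathlib
    import subprocess

    out = subprocess.run(
        ["pytest", "--collect-only", "-q"],
        capture_output=True, text=True,
    ).stdout
    # With -q, pytest prints one "module::test" line per collected test.
    count = sum(1 for line in out.splitlines() if "::" in line)

    baseline = pathlib.Path("last_count.txt")
    previous = int(baseline.read_text()) if baseline.exists() else 0
    print(f"{count} tests collected (previous build: {previous})")
    baseline.write_text(str(count))
    ```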