- cross-posted to:
- noncredibledefense
cross-posted from: https://sh.itjust.works/post/16344591
Context: https://youtu.be/DA3VsMteAxk
That’s awesome. Reminds me of programming UIs.
I wrote the UI for a media player once. Our QA found a way to crash the player.
“OH hey! I think I found a bad bug. Here, watch. You start a video, immediately hit stop, then start, then rewind, all before the video is done buffering. See? Crashed.”
Me: “I… who would… all before playback begins? Uh… OK yeah, I think I know why it’s… is anyone actually going to… FUCK. Guess I’m staying late tonight”
Rule number 1 of engineering: no matter what, there IS someone stupid enough to get themselves into that edge case if it exists.
I mean, of course it should be able to handle that. Hitting the playback control buttons in any sequence, at any speed, at any time, in any state of the program or media shouldn’t crash it. If it does, you did something wrong. Asking who would ever do that is bizarre to me because that’s not the issue. It’s not the user being stupid, it’s a fragile system that should be robust.
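One common way to get that robustness is an explicit state machine, where every button is a legal input in every state (sometimes a no-op) and buffering completion checks whether the user has already stopped. This is a minimal sketch with a hypothetical `Player` class, not the actual media player from the story:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    BUFFERING = auto()
    PLAYING = auto()
    STOPPED = auto()

class Player:
    """Toy player whose controls are safe to press in any state, any order."""

    def __init__(self):
        self.state = State.IDLE
        self.position = 0

    def start(self):
        # Starting while already buffering or playing is simply a no-op.
        if self.state in (State.IDLE, State.STOPPED):
            self.state = State.BUFFERING

    def stop(self):
        # Always legal, even before playback has ever begun.
        self.state = State.STOPPED

    def rewind(self):
        # Valid before playback begins too: just reset the position.
        self.position = 0

    def buffer_complete(self):
        # Only move to PLAYING if the user hasn't stopped in the meantime.
        if self.state is State.BUFFERING:
            self.state = State.PLAYING

# The QA sequence from the story: start, stop, start, rewind,
# all before buffering finishes -- no crash, just a sane state.
p = Player()
p.start()
p.stop()
p.start()
p.rewind()
p.buffer_complete()  # player is now PLAYING from position 0
```

The point isn’t the specific transitions; it’s that no input is ever "impossible," so there is no sequence left for QA (or a user on a slow connection) to find.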
Yes, hence the reluctant acceptance that it was something urgent I needed to fix, and why I fought so hard to get a QA on my team in the first place. They think so differently from devs, and dream up conditions I couldn’t begin to imagine.
I’d say it’s also bad engineering to assume everyone has a good internet connection. These actions become much more likely as the buffering time increases.
There’s a tool for simulating bad internet, named “comcast.”