Just learned about OpenSpec, a spec-driven AI development workflow. I’ve read and tweaked a bunch of articles on Claude Code (e.g., “Run 5 Claudes in parallel with the Claude CLI” and the like) and on how to write a good specification for Claude (fewer instructions, more progressive disclosure instead of front-loading all the information we want to give it).

No need to worry about context overload if you follow the Beads best practices: https://steve-yegge.medium.com/beads-best-practices-2db636b9760c https://anthropic.skilljar.com/claude-code-in-action

Most of them are practically great for coding and development work, but software testing still needs more context (they’re good for building a test harness, generating API automation, reviewing test cases, prioritizing test cases, finding edge cases, and such).

The real issue here is context bandwidth vs. time, rather than AI tools vs. QA/software testing. I guess the paradigm has shifted again.