Still researching testers’ experiences of tools, and it is a slow, exploratory, experimental process…
I’m at the University at present, working on my research towards my PhD. A discussion in the kitchen last week over coffee about AI, ChatGPT, its capabilities and flaws, and the future of software engineering, learning, what we value in knowledge, and so on…
Someone says to me: you know, people might start asking you why you don’t just ask ChatGPT your research questions… you need to be prepared for that…
My academic supervisor then tells us all that he’d put one of my research questions into ChatGPT during one of our meetings… and ChatGPT just hung… it couldn’t answer…
He says, “shows it is a good research question – and shows why we need a PhD student doing this”. Suddenly I feel quite good about the difficulty of the endeavour… the answers to my questions are not just out there; it is worth the struggle to research them…
As Lisa Crispin remarked when I posted about it elsewhere, you’d expect that result. My reflection is: why did I immediately assume that what I have done is not going to be as well regarded as what a machine does, or indeed what anyone else does…? That’s a question for another day.
I tried ChatGPT on some simpler questions around the areas I’m researching and it gave standard, textbook replies: nothing I’d disagree with much, nothing I’d get excited about. No big insights… With my brain I can come up with questions that haven’t yet been answered, so a search on the internet will not find the answers. Research, like the testing that I’m researching, is exploratory, experimental, contextual…
And at the moment I’m clinging to the outer edge of knowledge… so I spend much time feeling in a fog of stupidity, with forays into making progress as I find insights.
Back to the data analysis… and the design of the next experiment…
At present I am working on designing heuristics, arising from my research so far, to help underpin better test tool design. The last big survey – to which so many of you contributed – has provided rich data about who is testing and what they are doing. As ever, analysing this data and validating it is time-consuming. I’ve embarked on a series of case studies to experiment with heuristics and other ideas from the data, and I’m going to be doing a number of reviews with experts to validate my findings.
And thank you to everyone who has helped so far by completing surveys, supplying data, discussing aspects of the work with me… Slow but sure, making progress…