Research: slow but sure, and an encounter with ChatGPT

3 minute read

Still researching the testers’ experiences of tools, and it is a slow, exploratory, experimental process…

I’m at the University at present, working on my research towards my PhD. There was discussion in the kitchen last week over coffee about AI, ChatGPT, its capabilities and flaws, the future of software engineering, learning, what we value in knowledge, and so on…

Someone says to me: you know, people might start asking you why you don’t just ask ChatGPT your research questions… you need to be prepared for that

My academic supervisor then tells us all that he’d put one of my research questions into ChatGPT during one of our meetings… and ChatGPT just hung… it couldn’t answer…

He says “shows it is a good research question – and shows why we need a PhD student doing this”. Suddenly I feel quite good about the difficulty of the endeavour… the answers to my questions are not just out there; it is worth the struggle to research them…

As Lisa Crispin remarked about it when I posted elsewhere, you’d expect that result. My reflection is – why did I immediately assume that what I have done is not going to be as well regarded as what a machine does, or indeed what anyone else does…? That’s a question for another day.

I tried ChatGPT on some simpler questions around the areas I’m researching and it gave standard, textbook replies – nothing I’d disagree with much, nothing I’d get excited about. No big insights… With my brain I can come up with questions that haven’t yet been answered, so a search on the internet will not find the answers. Research, like the testing that I’m researching, is exploratory, experimental, contextual…

And at the moment I’m clinging to the outer edge of knowledge… so I spend much time feeling in a fog of stupidity, with forays into making progress as I find insights.

Back to the data analysis… and the experiment design for the next experiment…

At present I am working on designing heuristics, arising from my research so far, to help underpin better test tool design. The last big survey – to which so many of you contributed – has provided rich data about who is testing and what they are doing. As ever, analysing this data and validating it is time consuming. I’ve embarked on a series of case studies to experiment with heuristics and other ideas from the data, and I’m going to be doing a number of reviews with experts to validate my findings.

And thank you to everyone who has helped so far by completing surveys, supplying data, discussing aspects of the work with me… Slow but sure, making progress…


Research results so far, and a request for your help

I’m now halfway through my research on testers’ experiences with tools and automation, with an end date of October 2025. Thank you to everyone who contributed their stories and comments to the data so far. I have published three papers (below).

My findings so far provide an insight into the usability and human issues that impede success with tools and automation, and also show the large range of people’s backgrounds before they come into testing. These findings, along with the complexity of testers’ multi-tasking roles, mean that designing tools for testers, automation for testers, and approaches for testers to use, is not easy.

I’m embarking on the next stages of my research. I’m asking some deceptively simple questions: Who are we testers? Where do we come from? What are we doing? And how are we doing it? The answers are not as straightforward as we might think. But they should feed into the development of personas and other models to support better tool design, with a better UX for the testers. They also feed into a better understanding of each other and of the diversity in our community of testers, and help us engage with each other – because not everyone doing testing is a tester, and not everyone working in IT is an engineer.

Please contribute to this research: if you have software testing as part of your role, please complete this survey.

The academic papers have more details and argumentation. If you would like copies of these papers, links are below. Note that the published copies are generally behind paywalls from the publishers. Pre-publication drafts can be made freely available; please contact me at isabel.evans.17@um.edu.mt

[a] I. Evans, C. Porter, M. Micallef, and J. Harty, “Stuck in limbo with magical solutions: The testers’ lived experiences of tools and automation,” in Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. SCITEPRESS-Science and Technology Publications, 2020, pp. 195–202. Link: Stuck in Limbo with Magical Solutions

[b] I. Evans, C. Porter, and M. Micallef, “Scared, frustrated and quietly proud: the testers’ experiences of tools and automation,” in 2021 European Conference on Cognitive Ergonomics (ECCE), in press, accepted by the 2021 conference. Link: Scared, Frustrated and Quietly Proud

[c] I. Evans, C. Porter, M. Micallef, and J. Harty, “Test tools: an illusion of usability?” in 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). IEEE, 2020, pp. 392–397. Link: Test Tools: An Illusion of Usability?

My first academic papers on their way into the world…

I now have two academic papers on the point of being published.

One I present on Saturday 29 Feb at the HUCAPP conference: “Stuck In Limbo With Magical Solutions: The Testers’ Lived Experiences of Tools and Automation”.

The other is “Test Tools: an illusion of usability?” which I present at TAICPART in March.

Once they are published I will post the links to the papers.

Thank you to everyone who contributed to the workshops and surveys.

EuroSTAR 2019 – working well

It’s been a busy year preparing for and delivering the EuroSTAR 2019 conference; a year when I have learnt so much about myself, and about the team around me. A year when we have worked hard, and worked well, and so delivered a successful conference.

EuroSTAR Conference logo over an image of Charles Bridge, Prague

I set the theme for the year as “Working Well” and my first thoughts around that were about how the software, products and services that we test and deliver must work well for our customers – the people using or affected by the software. We need practices that work well for us, in order to test and deliver good software. How people work together, and the wellness of those people – that is also part of working well. And finally we have challenges – things that might stop us working well. In the call for papers, I asked people to submit ideas for sessions that expressed one or more of those different approaches to Working Well.

Conference theme - Working Well has four parts: Purpose, Practices, People and Challenges.

Around me – planning and preparing for the conference, and of course helping to deliver it – I have had a great team. Both the EuroSTAR team and the Programme Committee have been amazing all year, as were the volunteers, speakers, and exhibitors at the conference. I have had strong and positive lessons and reinforcements in being able to trust other people working as a team, in letting go of control to other people to deliver a shared vision, and in allowing myself to be surprised and delighted by what other people deliver when I step back.

Programme Committee: Isabel with Ioanna Chiorean, Gitte Ottosen, Jean-Paul Varwijk

Here are some things that we – the team – did together that were new for this year and which made me proud to be part of the team:

Our ability to work well is rooted in a foundation of being well ourselves, and we delivered a conference that in every way expressed the theme – we had wellness embedded, with yoga and a quiet room alongside the traditional morning run, and plenty of opportunities to re-hydrate, re-fuel, and relax.

EuroSTAR Wellness Logo

As part of that wellness, within the conference we thought about listening well, discussing kindly, listening to all viewpoints:

We have to walk in a way that we only print peace and serenity on the Earth. Walk as if you are kissing the Earth with your feet. Thich Nhat Hanh

Software and products that work well require – in my view – both diversity and inclusiveness in who we involve in the discussions, decisions and design. We expressed this in the conference by the range of ideas and speakers, by giving the Women in Test diversity session its own conference time on Thursday, and by having a new session – Diversity Strikes – where we asked the delegates at the conference to put themselves forward to speak for 4 minutes on the main stage. We had 5 wonderful speakers, with a diversity of messages about diversity, coming from the conference itself. Very proud of them, and of the session!

EuroSTAR Conference Diversity Strikes logo

There are many great speakers, and it is often a puzzle to know how you become a keynote speaker – so we introduced the New Generation Keynote mentoring scheme, with mentoring by Fiona Charles. The next generation keynotes were fabulous, and each delivered an important message – about customers, about aiding each other, about listening.

Photographs of the three next generation keynotes: Adonis Celestine, Shelley Lambert and Ryan Volker

The rest of the conference – the keynotes, the tutorials, the workshops, the tracks, the huddle, clinic and test lab, the expo, and the social events – was all fabulous. I was so happy with the keynotes; they showed the theme as they took us through challenges that we and they face. They showed us that to tackle those challenges we need knowledge, ethics, empathy, questioning, critical thinking, problem solving, listening, and involving customers – all helping us deliver our practices better: modes of testing at different stages through the life of a software product, from concept, to design, to build and into production. We face complex challenges, and we can overcome them.

Photographs of the four keynote speakers: Chris McKillop, Fiona Charles, Alex Bauduin, Dona Sarkar

Finally, the gala awards night was so special, and I am so pleased that Fiona Charles – that modest giant of our industry – received the Testing Excellence Award – I’m proud that happened on my watch!

At the end of conference, we had a moment of gratitude, remembered what we are capable of becoming, and set our intentions to take our learning back with us, and to work well for others and the world, while taking care of ourselves.

Close your eyes for a moment of gratitude. "In the practice of tolerance, one's enemy is the best teacher" His Holiness the 14th Dalai Lama
A moment of preparation. "You are already what you want to become" Thich Nhat Hanh
Working well: A moment of intention. "Whenever we see something which could be done to bring benefit to others, no matter how small, we should do it"  Chamgon Khentin Tai Situ Rinpoche

Thank you to everyone who helped make EuroSTAR 2019 happen – you exemplify what working well means, by your dedication and your teamwork throughout the year, and throughout the conference! We rolled the credits at the end of the conference with about 1000 people to thank!! It was a wonderful year of teamwork to bring it all together, and I am so happy and honoured to have been the chair for 2019. Here are some links to photos that other people took:

  • The EuroSTAR team took these photos and published them via (to be added)

Good luck to Rik Marsalis, the EuroSTAR 2020 Programme Chair, and to all the team who will I know work well with him to deliver a great conference!

Conference submissions – why and how

Recently someone asked me about how to become a conference speaker. I have spoken at conferences, and also served on programme committees, so I hope these thoughts are helpful to you in your quest to speak. Additionally, I have been giving feedback to people whose submissions did not make it onto the EuroSTAR programme this year and who asked for feedback, and I have seen some common themes. One of them: with over 400 people applying for around 60 speaking places, and an excellent field of submissions, many great submissions did not make it to the conference programme… not being selected doesn’t necessarily mean you made a bad submission.

Why speak at a conference?

My first question to you: Why would you want to speak at a conference? It is after all time consuming, stressful, and unlikely to be in the obvious mainstream of your job. Here are some reasons I speak:

  • to improve how I communicate about my subject – a skill for work.
  • to learn my subject: to give the talk, I’ll have to learn more, check facts, build my story.
  • to give back to my industry and educate others, by sharing challenges overcome.
  • for the fun of performing: it’s scary and fun, and a chance to play in public…

So, you want to speak at a conference… what to do? I’m assuming you have a story to tell, one you think is worth other people hearing? If you have not got a story to tell, there is no point speaking…

Don’t wait to be asked…

There are two ways to get a speaking place at a conference: you get invited, or you apply via a “call for submissions” (cfs) or “call for papers” (cfp). However famous you are, you might not get invited, so, if you want to speak at a conference, don’t wait to be invited. Instead, apply to speak. Your submission will be reviewed, and you will be accepted, or rejected. Don’t worry if you are rejected, it has happened to all of us – many times in my case. Conferences often have many more applications than they have speaking places. So review, and try again…

Choose your conference…

First job: decide which conferences you want to speak at, look at their websites to see what dates they run on and what style of submission they want. Look very carefully at any guidelines, themes, and style sheets they suggest. Also look at the websites for previous editions of the conference to see if there is a “house style” the conference favours. Also think about whether you can get to the conference if selected – travel visas, availability, dates and costs – can you go if you are selected, and how will you fund it? Some organisations will support you because they want representation at the conference. Some conferences provide funding towards travel and accommodation. When you are applying look at the balance of benefits and costs. Each of us will have a different view about what we want to do, what cost/benefit we need to make it worthwhile.

Investigate what information the cfp requires

Look at the session options offered carefully. Think about what they want for different types of session. Typically the minimum you will be asked for along with contact details is:

  • A title
  • An abstract
  • Your biography

You may be asked for a paper to explain your idea. You may be asked for key learning points, takeaways, what type of session this is, what type of audience it is aimed at, your speaking experience, evidence in the form of supporting documents, videos… it all depends on the conference.

What helps your submission succeed?

Factors that will help your submission for many industry conferences include:

  • telling a really good story – something compelling, coherent, concise, and which flows from the title, into the abstract and through to any takeaways.
  • focusing on your experiences of your projects – things where you are demonstrating your involvement, challenges you have faced and overcome, mistakes you have made and learnt from – rather than using your abstract to regurgitate theory.
  • having a new perspective to offer, something that has not been offered at this conference before.

If you don’t have speaking experience, think about getting mentoring – within your local/national industry communities, within your organisation, or via the conferences. You could look at SpeakingEasy for example ( https://speaking-easy.com/ ). Also look for opportunities to speak at local meet ups and national conferences before going for the larger international conferences. It’s likely that fewer people will apply and this increases your chances of being selected.

Conferences will often have themes that change from year to year. In addition, many conferences are looking for speakers and sessions that increase the diversity of ideas and people, improve inclusiveness, and are engaging, participative and interactive, allowing the audience not just to listen but also to take part.

Do…

  • Have your own compelling story
  • About something unique, transformational
  • About overcoming challenges
  • Provide evidence!
  • Keep it coherent, well focused
  • Keep it clean…
  • Ask for help!
  • Get it reviewed
  • Get it proof read
  • Speak at smaller events first…
  • Ask for feedback

What to avoid

Don’t just send the same abstract to different conferences – they each want something different. Don’t send the same abstract several times to the same conference for different session types – it just annoys the programme committee. Don’t send an excessive number of submissions – it is better to have one really well thought out abstract.

Don’t…

  • Forget to spellcheck
  • Forget to tell your story
  • Present no evidence
  • Use bad language
  • Assume we know who you are
  • Ignore the conference style
  • Forget to ask for time off…
  • Expect to get in … necessarily

Useful links

Here are some useful other blogs and links…

Rob Lambert’s “Blazingly simple guide…”: https://www.linkedin.com/pulse/blazingly-simple-guide-submitting-conferences-rob-lambert/

Steve Watkins’ “How to prepare…” https://stevethedoc.wordpress.com/2019/05/20/how-to-prepare-your-first-conference-talk-1-getting-started/

SpeakingEasy: https://speaking-easy.com/

Good luck!

and give it a go – you won’t get in unless you try!

CHI2019 – pre and lite events – notes for Isabel

CHI 2019 was in Glasgow this year, and although I could not get to the main conference, I attended, and got a lot from, a “pre” event and a “lite” event run around the main conference.

The PreCHI day took place at the University of Dundee, and was a chance to hear a precis of papers delivered by academics from Scottish Universities at the main conference. This was a good day, mainly for me to understand the breadth of research in HCI, and the types of project, that are happening. All the talks were interesting in that respect, ranging from a comparison of comics and infographics for helping to convey factual information, through virtual reality studies, to health care, network analysis, haptics and advocacy. The highlight talk for me was “Developing Accessible Services: Understanding Current Knowledge and Areas for Future Support” (Crabbe, Heron, Jones, Armstrong, Reid, Wilson), where among other things a useful matrix of accessibility needs by (time?) against (type/area?) gave me pause for thought. The accessibility areas were: cognitive, communication, visual, physical, emotional, and the temporal axis was permanent, temporary, situational. So someone holding a heavy object is situationally, physically impaired from, say, picking up another object. When you start looking at accessibility in that way it reinforces the idea that all of us need accessibility. It could feed into some of the ideas for the test tools work: accessibility of the tools and of the information the tools generate. Other presentations I should follow up in terms of work on haptics, embodiment, and advocacy when thinking about my next steps. Notes are in the Polish notebook.

CHIlite was an evening of highlights from the CHI conference, open to the public. A very good evening, with inspiring presentations that show how the HCI community is seeking to make the world a better place. Talks included “Bringing the Internet to the Brazilian Amazon” (Leal), “Seeking social justice through story telling” (Ahmed) and “How can apps support sustainable behaviour?” (Nkwo) – so heartening to see younger people engaged in bringing technology to their communities in a positive way, to do good. Two talks by older practitioners, on how we perhaps trust IT too much, were by Konstan, “What makes a good recommendation?”, and Sundar, “Do we trust the machines too much?” – thoughtful caveats on tech usage. And a few of the presentations spoke about the importance of the user/customer feeding back to the developer(s) about what they needed, what they liked and disliked – a call for the user to have a greater voice in what is delivered. Hofman on “putting a 3D printer in the doctor’s office” was a good example, on patients requesting what they needed from a 3D-printed artificial limb, while Trllemans talked about control of the use of our smart environments, Dereshev asked “What is it like living with a companion robot?” and Miyashita demonstrated how technology can fool us with amazing visual effects that disguise reality.

Actions to take: get the papers that are most relevant, read them, and add to the literature review.

CHIIR 2019 – S3 follow up reminder notes for Isabel

More notes from CHIIR 2019 –
so here are some highlights of session 3 … The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick notes nature of it…. and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…

Session 3, paper 1: “Knowledge context in search systems: towards information-literate actions” by Catherine L. Smith and Soo Young Rieh; see https://dl.acm.org/citation.cfm?id=3298940 for the paper. This really interested me – a perspectives paper about how we learn, and whether we learn, when using search engines. Main points:

  • “the knowledge content in SERPs has great potential for facilitating human learning, critical thinking and creativity by expanding searchers’ information-literate activities such as comparing, evaluating, and differentiating between information sources”
  • “we discuss design goals for search systems that support metacognitive skills required for long-term learning, creativity and critical thinking”
  • I made a note during the presentation – we don’t remember information stored on the computer, but we have a feeling that we do know it, and we do remember where we stored it (?) – it makes it harder to learn something new. Quoted Sparrow, Liu & Wegner 2011 – we remember where but we don’t remember what, e.g. phone numbers. It strikes me that this is perhaps OK for phone numbers – we’ll find them on the phone or in an address book (virtual or physical) – but for information generally on the web, it must be harder – the “where” is much more diffuse. Comment in the presentation that the feeling of knowing increases with searching on the web even if the search returns irrelevant information. Comment in the presentation that the accuracy of our judgement about whether we know something is reduced by using web search.
  • the paper and presentation call for support of information-literate searching: designing search engines to support greater information literacy by contextualising search results, and actually slowing people down so they are supported in long-term learning.
  • I compare this paper to the paper “Choosing the right test automation tool: a grey literature review of practitioner sources” (2017) Raulamo-Jurvanen, Mantyla, Garousi
    • in the grey literature review, one of the findings was that when people look for information on the web about test tools, they pick off the most popular, most mentioned tools and resources. Therefore if those tools are popular / fashionable but not necessarily right for the searcher’s context, they may end up with the wrong tool for their purpose.
    • quotes from that grey literature review: once people had chosen a tool based on their web search for information, “trial use would often lead to wrong decisions”. Question: the popular tools – are they popular because they are good, or popular because they are popular and therefore have user groups, support, etc.? Also note their point at the end of the paper on cognitive overload – so people choose what is obvious: “tendency for cognitive overload is likely to increase the prevalence of shortcut decision making proportionately”; “social proof as a weapon of influence is claimed to be most influential under 2 conditions: uncertainty and similarity” – the authors referring to Cialdini.
    • Taking the two papers together, does this indicate that testers (and other people involved in test tool selection) need support for better decision making – better information literacy when looking for information about tools and automation?
      • do I know it?
      • can I find it?
      • having found it do I know how to judge it and whether to trust it?
  • The knowledge context for a tester is testing as a discipline, within the IT industry, to serve a particular domain. A tester requires knowledge and information literacy across all those knowledge contexts. Testers need to be critical thinkers – the point made in Smith & Rieh that the use of ILA “may be seen as an indicator that the system is not sufficiently optimised” – does that indicate that search engines as a source of information about tools reduce critical thinking? Key quote: “In order to learn, understand, and gain confidence in their knowledge, information literate people ask and answer questions about the information they encounter”. Critical thinking and making independent judgements are key characteristics of good testers.
  • Also explore the points on transactive memory – where teams / pairs “split responsibility for remembering parts of the information required to complete a task” – how does that sit with the dev/test relationship? A different track to pursue – not for research, just interesting.
  • Summary findings are that when people believe information will be stored on a computer they are less likely to remember it, and more likely to remember where the information is… and the use of web search leads people to overestimate how much they know.
    • in testing we use the concept of the oracle for test results
    • which I have always found funny given that oracles (eg Delphi) tended to be ambiguous and easy to misinterpret
    • information literacy includes the use of multiple oracles, and comparing them – and indeed not treating them as oracles, but as information sources to be critically assessed and questioned.
    • The ways we understand whether to trust information includes the “bibliographic knowledge-context” (publisher, author, form, reading level scores) and the “inferential knowledge-context” (other works, comparisons, citations, history, versions, valence / biases) – can this be mapped to how we understand tools?
    • for testers, there is a tension between a need to get information quickly and the need to critically assess that information – especially when we are in a hurry. What can we trust?
      • testers use web sources to learn – need to critically assess those sources
      • testers provide information obtained from tools – need to critically assess that information
  • this reminds me of the point in the conversation with Dot Graham on the “illusion of usability”

CHIIR 2019 – papers S1/S2 – follow up reminder notes for Isabel

The whole conference was exciting, friendly, and so packed with information that by the end of Wednesday I was unable to ingest any further ideas!!! It was just great. I got something from each session and there were a couple I wanted to follow up on for specific reasons – so here are some highlights of session 1 and session 2… The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick-notes nature of it… and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…

  • Session 1, paper 1: “Learning about work tasks to inform intelligent assistant design” (presented by Johanne Trippas and with a huge list of co-authors – see https://dl.acm.org/citation.cfm?id=3298934 for the paper)
  • Here are some notes I made during the talk… and at the conference after a brief chat with Johanne:
    • wanting to empower people in their work
    • need to understand how people complete tasks
    • looked at cyber, social and physical aspects
    • asked people what tasks they were doing at work, and how much time on each task…
    • what do we mean by “context” when the context is the workplace?
    • need to understand HOW people complete tasks – thinking about collaboration, how much movement/physical activity is involved, how people are using tools (and which tools), how people classify their tasks, how the tasks change over time (of day, of week?)
    • find out what people want from intelligent assistants
      • task management
      • task tracking
      • (Isabel thought – Hmmm – so a mix of a manager and a PA??? As we talk more about self-managed teams, agile methods, etc… as we remove those human interactions and support that we get from a good manager, or a good PA… are we leaving people a little lost? feeling a little abandoned…?)
    • from the findings make recommendations for improving intelligent assistants at work.
    • Information workers do multiple tasks, so what is a meaningful breakdown of those tasks? Hierarchy of activity/purpose of tasks – getting people to categorise their tasks is difficult. (Thought from Isabel – do people understand their tasks in terms of the reason they are employed, why their organisation needs them, their purpose… or do they see their tasks as a series of small busy things that don’t particularly relate to a wider purpose?)
  • And here are some notes I made when reading the paper post conference:
    • a note is made about several ways to understand tasks – and refs to ways to do this ***follow up***. This could be a way to look at how people relate testing tasks to tools and to automation???
      • diary studies
      • naturalistic field studies
      • lifelog analysis
      • statistical time use surveys
      • studies of information needs, communications, information seeking – these could be relevant for methods???
      • survey (method used in this paper)
      • (Isabel note: cyber, physical and social activities – that is an interesting split; being at work is not just about completing tasks, there is also an element of the team or department as a community, and the physical part – that’s interesting – the effect on one’s body of the way the tasks are done…)
      • (Isabel note: the point about the lack of penetration of intelligent assistants for more complex tasks… I need to look again at Paul Gerrard’s talk about “testing with my invisible friend” and talk with him about what progress he has made… (see https://conference.eurostarsoftwaretesting.com/event/2017/testing-with-an-invisible-friend/ and Marianne’s sketchnote is a nice summary: https://twitter.com/marianneduijst/status/928189626929614848)
      • a note in section 2.3 about Kushmerick and Lau using FSMs to formalise e-commerce transactions… Hmmm – could that be a tool / technique to document interactions in a test team between test designers and automators…??? ***think about this***
      • I can see, looking at section 2.3, that I am looking at a subset of a subset of tasks… Unless I get interested in what distracts people from their main/key task??? Leave that one alone for now…
      • The categories used in this paper’s task taxonomy could be a useful starting point for a taxonomy of testing tasks – it would be interesting to see if testers divided up their time in a similar way, and what sub-categories there might be under each category in the taxonomy. I know how I would break it down for how I work – but would it be the same for other testers? It could be quite different…
        • for example, “IT” is one category and “project” is another… so if you are in IT, then (I guess) IT activities you do in order to provide yourself with an infrastructure to do your own testing are in “IT”, and activities you do in order to test software being delivered in a project to a customer are “project” activities. So is managing the test automation an “IT” task – because it supports the testing and is not in itself the purpose of the project? It would be interesting to see how testers categorise it…
      • I’m interested in the point in section 4.4 about how intelligent assistants could help with longer-duration tasks – the idea of an assistant that keeps a note of incomplete tasks to be resumed, for example. (Isabel note to self: Have a look at agile/lean/kanban task duration recommendations and see if that fits with the task times being reported in this paper – what is the longest task people can work with as a “long task”? Is the “length of meeting” rule I was brought up on still valid? (No more than 2 hours, preferably no more than an hour, break after an hour, attention into flow state after 15-20 mins.) How does that fit with the 15-minute standup meeting advice for Scrum?)
      • section 4.5 lists some tools people use (digital and physical, such as post-it notes and a paper calendar) – make sure I have physical tools included in what I ask about.
      • Concluding note – there is a lot for me to follow up in this paper, and ideas to use as a model for surveys and analysis.
  • Session 2, paper 3: “Take me out: space and place in library interactions” by George Buchanan, Dana McKay, Stephann Makri. The paper is here: https://dl.acm.org/citation.cfm?id=3298935
    • This presentation and paper interested me partly as a library user, partly because of some new-to-me concepts the authors discussed, and partly as some input into UX/devices&Desires/imagine-our-customers sessions that I have coming up soon.
    • I liked the idea of place and space – the physical location and layout, versus the semantic meaning. For example, “a place with lots of bookshelves is not necessarily a library”, so we look at what people do as opposed to what they ask for… or talk about.
      • Isabel note: in the same way – when does a test lab become a test lab? When is it an “information place” and what else could it be? Is this a useful idea to explore?
    • They talked about “wizard of oz” methods – I had not heard of that before – need to look into it…
    • They talked about the movement between physical and digital media when looking for information in a library. Isabel note: that too could be analogous?
    • “people reconstruct the technology you give them” – interesting quote – technologists provide methods, approaches, devices, etc but how people react to that may be unexpected, and the devices might be used for different purposes, in different ways. (That came up in the Museums keynote too – that people don’t interact with technology in the way curators expect)
    • from the paper:
      • “information interactions are strongly affected by the place where they occur”
      • “There is considerable ignorance of and resistance to the use of digital resources … some of which is related to the physical realities of the library”
      • section 2.2. seems to indicate that digital resources in a library are behaving like “closed stack” systems – where you need to know what you want and order it by name – rather than open-stack systems where you browse the shelves and serendipity leads you to new books, authors, topics…
      • paper quotes Warwick: “danger of technocratic arrogance if we assume everything can be modelled digitally and thus improved” [ref is #21 in this paper – Warwick, C., 2017, “Beauty is truth: Multisensory input and the challenge of designing aesthetically pleasing digital resources”]
      • note from Isabel – I was reminded of my experiences when Worcester public library merged with the Worcester Uni library – so that instead of finding, say, “gardening books” all together, they were split across agriculture, horticulture, design… so that the shelves were a mix of amateur / easy-to-read and academic / industrial / professional. My personal experience was that I now found it harder to find what I needed… or I caught myself up in looking at additional material that was not really relevant. There is tension between relevance and serendipity…
      • note from Isabel: the lesson for the TX research is maybe about making the tester’s workspace (physical and digital) work as one – and also for other stakeholders for testing – think about how the information reaches them, how the medium for that information fits with each person’s working preference? Without being “gimmicky” (see section 9 of the paper)
      • quote: “designers should consider space and place carefully when designing mobile experiences”

CHIIR conference report – keynote highlights

The conference opened on Monday with a keynote from Ranjitha Kumar, which I found eye-opening and inspiring. Her team are working on “Data Driven Design: beyond A/B testing”. She pointed out that money spent on design does not always repay in results, and that A/B testing can be usefully supplemented with other methods. In particular her team is working on “design mining” (rather than data mining) to find out what designs are being used elsewhere – she said there is a rich seam of designs available which give inspiration and a test / review point. She talked about the need to connect design with KPIs, and to understand the success of designs in terms of their effect on KPIs.

The second keynote, on Tuesday, was also fascinating. Daniela Petrelli showed three case studies of making visitor experiences during museum visits multisensory, more engaging and more memorable. By using IoT technology, objects can be used to engage visitors in specific stories. I particularly loved the votary lamp that allows visitors to an exhibit on Hadrian’s Wall to choose three items – each a different god – and receive a personalised postcard with oracle-like messages. This was a study at Chesters Fort, specifically around the Visitor eXperience of the Clayton collection. The three case studies indicated that visitors are more engaged and remember more, because they slow down and take longer to examine objects, when they use a physical object to access information – rather than a digital screen/phone. The IoT technology allows small objects – facsimiles that can be held in one’s hand – to be used to interact with video, audio, etc. related to exhibits, and allows visitors to choose the viewpoint they experience in their journey through the museum.

I loved these two keynotes, interesting in so many ways – for me as a consumer of information on the web and in museums, but also as a test consultant. Possible analogies – these gave me some thoughts about the experience of testers in their projects.

  • For example, if it is true that people are more engaged and remember more when interacting with physical objects, could we use this idea to change how people examine and interact with information generated by testing? This is NOT age related… What does it tell us about how we generate, use and display information?
  • for example, if design mining is a useful supplement to A/B testing, how could it be used to supplement how we test designs – could it be a source for heuristics to use when testing interface designs?
  • for example, what we as digital experts provide and are proud of is not always what the consumers of our work want or expect. For example, the questions that a search engine or chat bot responds to are not always the questions consumers want to ask. How can testers find out and understand what consumers actually want? That includes the consumers of the information from testing.
  • From those questions, I wonder about our testing dashboards – not for the first time in my decades in industry – and why we don’t talk with our stakeholders in their language. I’ve been talking about this for years, presenting on it, teaching about it… I’ll continue with that. Quote from K1 about fashion websites – customers ask for “hot pink” while the websites talk about “fuchsia” or “magenta”.
  • K2 provided a mini lifecycle for co-design and co-development where a technical person, a designer and a curator get together and split apart repeatedly to generate and test the ideas and design for artefacts. Is there an analogy to the developer, UXer and Product Owner, and if so, where is the testing, and is there a need for a specific tester role?

CHIIR Conference Glasgow March 2019: Tutorial report

This was my first time at CHIIR, and it was a really enjoyable experience; lovely people, great community spirit and the sessions were full of information and discussion. I started with the Tutorial on Sunday 10th March “Coding qualitative data: you asked them, now what to do with what they said” led by Dr Rebekah Willson (University of Strathclyde). There is a pleasure in being taught by a good teacher who enjoys their subject, even if the subject is not one of direct interest. As it happens, the subject for this tutorial was right on topic for me, right now, so a double pleasure. A really good session, which Dr Willson described as a “whirlwind tour”, but in fact gave space for us to work in pairs on an exercise, discuss and feedback. I’ve come away from that tutorial feeling more confident that I can code up the qualitative data I have collected so far in my studies.

We covered a step-by-step approach to coding qualitative data, bearing in mind the “paradigm shift in thinking” as one moves from quantitative to qualitative methods: we’re dealing with the human, and that is messy, challenging, based on experiences and beliefs, and it allows a broader, holistic understanding, albeit one that is constructionist, with the researcher involved in the research, giving multiple meanings, multiple interpretations. We are there, we are part of the process, so we have to think about the role we have and what we are doing. The result of qualitative data collection is richer data that is more difficult to interpret. We are asking “Why did they do/say that?” There are several approaches to coding, so it is important to choose one and stick with it. One of the challenges is that qualitative research is in itself a learning process – it is messy, it is fun, and doing it shows you how to do it. It is normal to be confused and overwhelmed. That’s a helpful thought. Dr Willson chose to show us one route through, with a series of iterating steps, providing a robust and rigorous approach to analysing qualitative data. She reminded us that a negative/opposing result can often be the most useful and interesting thing to explore – why is that case different? It is about following where the data leads, and moving from the concrete to the abstract. Looking for similarities, grouping and classifying. She talked about the process feeling uncomfortable, which I find to be true – like wandering in a fog and occasionally glimpsing the light!

When we gather data for a qualitative study, we usually have a vast volume of material – for example, transcribing an interview can give you thousands of words of material. Furthermore, when you ask open questions, the answers are unpredictable and often richer than you’d anticipated. This fits with what’s happening for me. Instead of asking “what is your job title?” and “what is your education?” in a recent survey – because of a limit on the number of questions – I combined the two into “Tell me a bit about yourself”, and received back long essays that told me such a variety of things, and sparked so many questions that I had not thought to ask, around ideas that I now see are interesting to explore… Dr Willson said we must pay attention to anything that is potentially interesting, code it up and then refine our ideas – grouping, splitting up, asking new questions of the data, all the time moving from a broad view of the data to a deeper focus. Also, be rigorous and trustworthy – sharing how we code the data, what steps we took, taking an iterative approach, triangulating across data sources, including negative examples, making our codebook available, making our inclusions/exclusions available. The researcher must be trustworthy, and if more than one person is coding – a good thing, as it allows a check for consistency of interpretation – there needs to be inter-coder reliability: we need clear codes, clear reasons for using the codes, clear inclusion and exclusion criteria. This means we’ve moved from the initial coding exercise to a focused coding stage, using a code book. The coders code separately and then compare results.
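(A note to self, not something from the tutorial itself: one common way to quantify that comparison between coders is Cohen’s kappa, which corrects raw agreement for the agreement you would expect by chance. Here is a minimal sketch, assuming two coders have each assigned exactly one code per segment; the codes and segments are made-up examples.)

```python
# Minimal sketch of inter-coder agreement via Cohen's kappa.
# Assumes two coders have each assigned one code per segment;
# the codes and segments below are hypothetical examples.
from collections import Counter

coder_a = ["tooling", "learning", "tooling", "frustration", "learning", "tooling"]
coder_b = ["tooling", "learning", "frustration", "frustration", "learning", "tooling"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw proportion of agreement

# Agreement expected by chance, from each coder's marginal code frequencies
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```

For the real study I would use an established statistics library and far more than six segments, but the arithmetic is just observed agreement minus chance agreement, rescaled.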

Dr Willson described several methodologies for qualitative analysis, and explained that the choice of methodology is affected by the research questions. The methodology she showed us in detail, and which we practised in the exercises, is Thematic Analysis. She talked about two levels of engaging with the data: the SEMANTIC level, where we look for and code things that are explicit in the data, and the LATENT level, where we look for ideas and assumptions implicit in the data. We need to decide ahead of time which we do. In thinking about these levels, we start to realise that what people say and what they do can be different – so field notes about behaviour become part of the data. As well as text, we might collect and analyse video, audio, images and so on. The steps in thematic analysis are:

  1. familiarise – read the text several times and take notes. Do it line by line!
  2. generate initial codes, get to know the data – again line by line.
  3. start to look for patterns in the codes, perhaps ways they group
  4. make themes of one or more codes – overarching ideas that cut across the codes.
  5. review the themes against the data… do they make sense?
  6. and do it again…

Defining and naming the themes provides the analytic power – think about what the theme can contribute. Themes can have subthemes, so there can be a hierarchy of themes, subthemes, categories, and codes. The code book has the full description of these, and each code and theme has a single word or short phrase as a descriptive name. Relate the codes and themes back to the research questions. As this process is worked through, the research questions might change – because we realise the data is pointing us in a new direction. During research we need to constantly revisit our questions, our data, our themes and codes to ensure we are following the data, asking the right questions, revisiting, enlarging and clarifying, all the time. Whether we start from a deductive approach (where we predefine the codes to support our idea and the questions we want to explore) or an inductive approach (where we explore the data, come up with codes and build to themes and questions) or move between the two – always we need to keep revisiting the data. Follow up, change the questions, revisit ideas, identify what is different, look for variations…
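(Another note to self, and my own illustration rather than anything Dr Willson showed us: the hierarchy of themes, subthemes and codes described above can be sketched as a small data structure, which might also be a way to keep my own code book tidy. All the theme and code names below are hypothetical examples, not codes from my study.)

```python
# Sketch of a code book: themes may have subthemes, and both carry codes
# with a short descriptive name plus inclusion/exclusion notes.
# All names here are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Code:
    name: str            # single word or short phrase
    inclusion: str       # when to apply this code
    exclusion: str = ""  # when not to apply it

@dataclass
class Theme:
    name: str
    codes: List[Code] = field(default_factory=list)
    subthemes: List["Theme"] = field(default_factory=list)

codebook = [
    Theme(
        name="tool frustration",
        codes=[Code("setup pain", "difficulty installing or configuring a tool")],
        subthemes=[
            Theme(
                name="workarounds",
                codes=[Code("manual fallback",
                            "reverting to manual steps when the tool fails",
                            "planned manual testing that was never automated")],
            )
        ],
    )
]

# Step 5 of the process: review coded segments against the themes.
segments = [
    ("It took us three weeks just to get the tool installed", ["setup pain"]),
]
for text, codes in segments:
    print(codes, "->", text)
```

Keeping the inclusion and exclusion notes next to each code is what makes the code book shareable with a second coder, which loops back to the inter-coder reliability point above.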

Later in the week, the conference dinner was at the Science Museum, and while there I noticed a mural/display that said “We are all scientists; we all observe, find reasons, look for relationships, categorise and make models”. Unfortunately my photo of it is too blurry to share… but it summarised the tutorial and the week for me. Thank you, Dr Willson, for a brilliant tutorial!