CHIIR 2019 – S3 follow up reminder notes for Isabel

More notes from CHIIR 2019 –
so here are some highlights of session 3 … The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick notes nature of it…. and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…

Session 3 paper 1: Knowledge context in search systems: towards information-literate actions by Catherine L Smith and Soo Young Rieh, see https://dl.acm.org/citation.cfm?id=3298940 for the paper. This really interested me – a perspectives paper about how we learn, and whether we learn, when using search engines. Main points:

  • “the knowledge content in SERPs has great potential for facilitating human learning, critical thinking and creativity by expanding searchers’ information-literate activities such as comparing, evaluating, and differentiating between information sources”
  • “we discuss design goals for search systems that support metacognitive skills required for long-term learning, creativity and critical thinking”
  • I made a note during the presentation – we don’t remember information stored on the computer, but we have a feeling that we do know it, and we do remember where we stored it (?) – it makes it harder to learn something new. Quoted Sparrow, Liu & Wegner 2011 – we remember where but we don’t remember what, e.g. phone numbers. It strikes me that this is perhaps OK for phone numbers – we’ll find them on the phone or in an address book (virtual or physical) – but for information generally on the web, it must be harder – the “where” is much more diffuse. Comment in the presentation that the feeling of knowing increases with searching on the web even if the search returns irrelevant information. Comment in the presentation that the accuracy of our judgement about whether we know something is reduced by using websearch.
  • the paper and presentation call for the support of information-literate searching: the design of search engines to support greater information literacy by contextualising search results, and actually slowing people down so they are supported in long-term learning.
  • I compare this paper to the paper “Choosing the right test automation tool: a grey literature review of practitioner sources” (2017) Raulamo-Jurvanen, Mäntylä, Garousi
    • in the grey literature review, one of the findings was that when people look for information on the web about test tools, they pick off the most popular, most mentioned tools and resources. Therefore if those tools are popular / fashionable but not necessarily right for the searcher’s context, they may end up with the wrong tool for their purpose.
    • quotes from that grey literature review: once people had chosen a tool based on their web search for information, “trial use would often lead to wrong decisions”. Question: the popular tools – are they popular because they are good, or popular because they are popular and therefore have user groups, support, etc.? Also note their point at the end of the paper on cognitive overload – so people choose what is obvious. “tendency for cognitive overload is likely to increase the prevalence of shortcut decision making proportionately” “social proof as a weapon of influence is claimed to be most influential under 2 conditions: uncertainty and similarity” – the authors referring to Cialdini.
    • Taking the two papers together, does this indicate that testers (and other people involved in test tool selection) need support for better decision making – better information literacy when looking for information about tools and automation?
      • do I know it?
      • can I find it?
      • having found it do I know how to judge it and whether to trust it?
  • The knowledge context for a tester is testing as a discipline, within the IT industry, to serve a particular domain. A tester requires knowledge and information literacy across all those knowledge contexts. Testers need to be critical thinkers – the point made in Smith & Rieh that the use of ILAs (information-literate actions) “may be seen as an indicator that the system is not sufficiently optimised” – does that indicate that search engines as a source for information about tools reduce critical thinking? Key quote: “In order to learn, understand, and gain confidence in their knowledge, information literate people ask and answer questions about the information they encounter” Critical thinking and making independent judgements are key characteristics of good testers.
  • Also explore the points on transactive memory – where teams / pairs “split responsibility for remembering parts of the information required to complete a task” – how does that sit with the dev/test relationship? A different track to pursue – not for research, just interesting
  • Summary findings are that “when people believe information will be stored on a computer they are less likely to remember it, and more likely to remember where the information is. … the use of web search leads people to overestimate how much they know.”
    • in testing we use the concept of the oracle for test results
    • which I have always found funny, given that oracles (e.g. Delphi) tended to be ambiguous and easy to misinterpret
    • information literacy includes the use of multiple oracles, and comparing them – and indeed not treating them as oracles, but as information sources to be critically assessed and questioned.
    • The ways we understand whether to trust information includes the “bibliographic knowledge-context” (publisher, author, form, reading level scores) and the “inferential knowledge-context” (other works, comparisons, citations, history, versions, valence / biases) – can this be mapped to how we understand tools?
    • for testers, there is a tension between a need to get information quickly and the need to critically assess that information – especially when we are in a hurry. What can we trust?
      • testers use web sources to learn – need to critically assess those sources
      • testers provide information obtained from tools – need to critically assess that information
  • this reminds me of the point in the conversation with Dot Graham on the “illusion of usability”
