CHIIR 2019 – papers S1/S2 – follow up reminder notes for Isabel

The whole conference was exciting, friendly, and so packed with information that by the end of Wednesday I was unable to ingest any further ideas!!! It was just great. I got something from each session and there were a couple I wanted to follow up on for specific reasons – so here are some highlights of session 1 and session 2… The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick-notes nature of it… and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…

  • Session 1, Paper 1: Learning about work tasks to inform intelligent assistant design (presented by Johanne Trippas and with a huge list of co-authors – see https://dl.acm.org/citation.cfm?id=3298934 for the paper)
  • Here are some notes I made during the talk… and at the conference after a brief chat with Johanne:
    • wanting to empower people in their work
    • need to understand how people complete tasks
    • looked at cyber, social and physical aspects
    • asked people what tasks they were doing at work, and how much time on each task…
    • what do we mean by “context” when the context is the workplace?
    • need to understand HOW people complete tasks – thinking about collaboration, how much movement/physical activity is involved, how people are using tools (and which tools), how people classify their tasks, how the tasks change over time (of day, of week?)
    • find out what people want from intelligent assistants
      • task management
      • task tracking
      • (Isabel thought – Hmmm – so a mix of a manager and a PA??? As we talk more about self-managed teams, agile methods, etc… as we remove those human interactions and support that we get from a good manager, or a good PA… are we leaving people a little lost? feeling a little abandoned…?)
    • from the findings make recommendations for improving intelligent assistants at work.
    • Information workers do multiple tasks, so what is a meaningful breakdown of those tasks? Hierarchy of activity/purpose of tasks – getting people to categorise their tasks is difficult – (thought from Isabel – do people understand their tasks in terms of the reason they are employed, why their organisation needs them, their purpose… or do they see their tasks as a series of small busy things that don’t particularly relate to a wider purpose?)
  • And here are some notes I made when reading the paper post conference:
    • a note is made about several ways to understand tasks – and refs to ways to do this ***follow up*** This could be a way to look at how people relate testing tasks to tools and to automation???
      • diary studies
      • naturalistic field studies
      • lifelog analysis
      • statistical time use surveys
      • studies of information needs, communications, information seeking – these could be relevant for methods???
      • survey (method used in this paper)
      • (Isabel note: cyber, physical and social activities – that is an interesting split; being at work is not just about completing tasks, there is also an element of the team or department as a community, and the physical part – that’s interesting – the effect on one’s body of the way the tasks are done…)
      • (Isabel note: the point about the lack of penetration of intelligent assistants for more complex tasks… I need to look again at Paul Gerrard’s talk about “testing with my invisible friend” and talk with him about what progress he has made… (see https://conference.eurostarsoftwaretesting.com/event/2017/testing-with-an-invisible-friend/ and Marianne’s sketchnote is a nice summary: https://twitter.com/marianneduijst/status/928189626929614848))
      • a note in section 2.3 about Kushmerick and Lau using FSMs to formalise e-commerce transactions… Hmmm – could that be a tool / technique to document interactions in a test team between test designers and automators…??? ***think about this***
      • I can see looking at section 2.3 that I am looking at a subset of a subset of tasks… Unless I get interested in what distracts people from their main/key task??? leave that one alone for now…
      • The categories used in this paper’s task taxonomy could be a useful starting point for a taxonomy of testing tasks – it would be interesting to see if testers divided up their time in a similar way, and what sub-categories there might be under each category in the taxonomy. I know how I would break it down for how I work – but would it be the same for other testers? It could be quite different…
        • for example “IT” is one category and “project” is another… so if you are in IT, then (I guess) IT activities you do in order to provide yourself with an infrastructure to do your own testing are in “IT”, and activities you do in order to test software being delivered in a project to a customer are “project” activities. So is managing the test automation an “IT” task – because it supports the testing and is not in itself the purpose of the project…? It would be interesting to see how testers categorise it…
      • I’m interested in the point in section 4.4 about how intelligent assistants could help with longer-duration tasks – the idea of an assistant that keeps a note of incomplete tasks to be resumed, for example. (Isabel note to self: Have a look at agile/lean/kanban task duration recommendations and see if that fits with the task times being reported in this paper – what is the longest task people can work with as a “long task”? Is the “length of meeting” rule I was brought up on still valid? (no more than 2 hours, pref no more than an hour, break after an hour, attention into flow state after 15-20 mins). How does that fit with the “15 min standup” meeting advice for Scrum?)
      • section 4.5 lists some tools people use (digital and physical, such as post-it notes, paper calendar) – make sure I have physical tools included in what I ask about.
      • Concluding note – there is a lot for me to follow up in this paper, and ideas to use as a model for surveys and analysis.
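(Isabel note to self: the Kushmerick and Lau idea above got me doodling. Here is a minimal sketch of how an FSM could document the designer/automator handoff – the states, events and transitions are entirely my invention for illustration, not from the paper:)

```python
# Hypothetical sketch: a finite state machine documenting the handoff of a
# test case between a test designer and a test automator. All state and
# event names are invented for illustration.

# Allowed transitions: current state -> {event: next state}
TRANSITIONS = {
    "drafted": {"handed_over": "in_automation"},
    "in_automation": {"automated": "in_review", "queried": "drafted"},
    "in_review": {"approved": "done", "rejected": "in_automation"},
}

def step(state, event):
    """Apply one event; complain if that interaction isn't in the model."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"no transition {event!r} from state {state!r}")

# Walk one test case through a plausible designer/automator exchange:
state = "drafted"
for event in ["handed_over", "queried", "handed_over", "automated", "approved"]:
    state = step(state, event)
print(state)  # done
```

(Even a toy like this makes the “query back to the designer” loop visible, which is the kind of interaction I’d want to capture…)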
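(Isabel note to self: if I do survey testers about their task categories as mused above, the analysis could start as simply as tallying responses per category – a sketch with made-up categories and data, just to remind myself of the shape of it:)

```python
# Hypothetical sketch: tallying survey responses into task categories,
# as a first step towards a testing-task taxonomy. The categories and
# responses below are invented examples, not survey data.
from collections import Counter

# Each response: (task as described by the tester, category they chose)
responses = [
    ("exploratory testing session", "project"),
    ("maintaining the automation rig", "IT"),
    ("sprint planning", "meetings"),
    ("writing test charters", "project"),
    ("updating the CI pipeline", "IT"),
]

tally = Counter(category for _task, category in responses)
for category, count in tally.most_common():
    print(category, count)
```

(The interesting part would be exactly the ambiguous rows – does “maintaining the automation rig” land in “IT” or “project”?)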
  • Session 2, Paper 3: Take me out: space and place in library interactions (George Buchanan, Dana McKay, Stephann Makri). The paper is here: https://dl.acm.org/citation.cfm?id=3298935
    • This presentation and paper interested me partly as a library user, partly because of some new-to-me concepts the authors discussed, and partly as some input into UX/devices&Desires/imagine-our-customers sessions that I have coming up soon.
    • I liked the idea of place and space – the physical location and layout, versus the semantic meaning. For example “a place with lots of bookshelves is not necessarily a library”, so we look at what people do, as opposed to what they ask for… or talk about
      • Isabel note: in the same way – when does a test lab become a test lab? When is it an “information place” and what else could it be? Is this a useful idea to explore?
    • They talked about “wizard of oz” methods – I had not heard of that before – need to look into it…
    • They talked about the movement between physical and digital media when looking for information in a library. Isabel note: that too could be analogous?
    • “people reconstruct the technology you give them” – interesting quote – technologists provide methods, approaches, devices, etc but how people react to that may be unexpected, and the devices might be used for different purposes, in different ways. (That came up in the Museums keynote too – that people don’t interact with technology in the way curators expect)
    • from the paper:
      • “information interactions are strongly affected by the place where they occur”
      • “There is considerable ignorance of and resistance to the use of digital resources … some of which is related to the physical realities of the library”
      • section 2.2 seems to indicate that digital resources in a library are behaving like “closed stack” systems – where you need to know what you want and order it by name – rather than open-stack systems where you browse the shelves and serendipity leads you to new books, authors, topics…
      • paper quotes Warwick: “danger of technocratic arrogance if we assume everything can be modelled digitally and thus improved” [ref is #21 in this paper – Warwick, C., 2017 “Beauty is truth: Multisensory input and the challenge of designing aesthetically pleasing digital resources”]
      • note from Isabel – I was reminded of my experiences when Worcester public library merged with the Worcester Uni library – so that instead of finding, say, “gardening books” all together, they were split across agriculture, horticulture, design… so that the shelves were a mix of amateur / easy to read and academic / industrial / professional – my personal experience was that I now found it harder to find what I needed… or I caught myself up in looking at additional material that was not really relevant. There is tension between relevance and serendipity…
      • note from Isabel: the lesson for the TX research is maybe about making the tester’s workspace (physical and digital) work as one – and also for other stakeholders for testing – think about how the information reaches them, how the medium for that information fits with each person’s working preference? Without being “gimmicky” (see section 9 of the paper)
      • quote: “designers should consider space and place carefully when designing mobile experiences”