It’s been a busy year preparing for and delivering the EuroSTAR 2019 conference; a year when I have learnt so much about myself, and about the team around me. A year when we have worked hard, and worked well, and so delivered a successful conference.
I set the theme for the year as “Working Well” and my first thoughts around that were about how the software, products and services that we test and deliver must work well for our customers – the people using or affected by the software. We need practices that work well for us, in order to test and deliver good software. How people work together, and the wellness of those people – that is also part of working well. And finally we have challenges – things that might stop us working well. In the call for papers, I asked people to submit ideas for sessions that expressed one or more of those different approaches to Working Well.
Around me – planning and preparing for the conference, and of course helping to deliver it – I have had a great team. Both the EuroSTAR team and the Programme Committee have been amazing all year, as were the volunteers, speakers, and exhibitors at the conference. I have had strong and positive lessons and reinforcements in trusting other people working as a team, in letting go of control so that others can deliver a shared vision, and in allowing myself to be surprised and delighted by what other people deliver when I step back.
Here are some things that we – the team – did together that were new for this year and which made me proud to be part of the team:
Our ability to work well is rooted in a foundation of being well ourselves, and we delivered a conference that in every way expressed the theme – we had wellness embedded, with yoga and a quiet room alongside the traditional morning run, and plenty of opportunities to re-hydrate, re-fuel, and relax.
As part of that wellness, within the conference we thought about listening well, discussing kindly, listening to all viewpoints:
Software and products that work well require – in my view – both diversity and inclusiveness in who we involve in the discussions, decisions and design. We expressed this in the conference through the range of ideas and speakers, by giving the Women in Test diversity session its own conference time on Thursday, and by introducing a new session – diversity strikes – in which we asked the delegates at the conference to put themselves forward to speak for 4 minutes on the main stage. We had 5 wonderful speakers, with a diversity of messages about diversity, coming from the conference itself. Very proud of them, and of the session!
There are many great speakers, and it is often a puzzle to know how you become a keynote speaker – so we introduced the New Generation Keynote mentoring scheme, with mentoring by Fiona Charles. The next generation keynotes were fabulous, and each delivered an important message – about customers, about aiding each other, about listening.
The rest of the conference – the keynotes, the tutorials, the workshops, the tracks, the huddle, clinic and test lab, the expo, and the social events – was all fabulous. I was so happy with the keynotes; they showed the theme as they took us through challenges that we and they face. They showed us that to tackle those challenges we need knowledge, ethics, empathy, questioning, critical thinking, problem solving, listening, and involving customers – all helping us deliver our practices better, across modes of testing at different stages through the life of a software product – from concept, to design, to build, and into production. We face complex challenges, and we can overcome them.
Finally, the gala awards night was so special, and I am so pleased that Fiona Charles – that modest giant of our industry – received the Testing Excellence Award – I’m proud that happened on my watch!
At the end of conference, we had a moment of gratitude, remembered what we are capable of becoming, and set our intentions to take our learning back with us, and to work well for others and the world, while taking care of ourselves.
Thank you to everyone who helped make EuroSTAR 2019 happen – you exemplify what working well means, by your dedication and your teamwork throughout the year, and throughout the conference! We rolled the credits at the end of the conference with about 1000 people to thank!! It was a wonderful year of teamwork to bring it all together, and I am so happy and honoured to have been the chair for 2019. Here are some links to photos that other people took:
Recently someone asked me how to become a conference speaker. I have spoken at conferences, and also served on programme committees, so I hope these thoughts are helpful to you in your quest to speak. Additionally, I have been giving feedback to people whose submissions did not make it onto the EuroSTAR programme this year and who asked for feedback, and I have seen some common themes. With over 400 people applying for around 60 speaking places, and an excellent field of submissions, many great submissions did not make it onto the conference programme… not being selected doesn’t necessarily mean you made a bad submission.
Why speak at a conference?
My first question to you: Why would you want to speak at a conference? It is after all time consuming, stressful, and unlikely to be in the obvious mainstream of your job. Here are some reasons I speak:
to improve how I communicate about my subject – a skill for work.
to learn my subject: to give the talk, I’ll have to learn more, check facts, build my story.
to give back to my industry and educate others, by sharing challenges overcome.
for the fun of performing: it’s scary and fun, and a chance to play in public…
So, you want to speak at a conference… what to do? I’m assuming you have a story to tell, one you think is worth other people hearing? If you have not got a story to tell, there is no point speaking…
Don’t wait to be asked…
There are two ways to get a speaking place at a conference: you get invited, or you apply via a “call for submissions” (cfs) or “call for papers” (cfp). However famous you are, you might not get invited, so if you want to speak at a conference, don’t wait to be invited. Instead, apply to speak. Your submission will be reviewed, and you will be accepted or rejected. Don’t worry if you are rejected; it has happened to all of us – many times in my case. Conferences often have many more applications than they have speaking places. So review, and try again…
Choose your conference…
First job: decide which conferences you want to speak at, then look at their websites to see what dates they run on and what style of submission they want. Look very carefully at any guidelines, themes, and style sheets they suggest. Also look at the websites for previous editions of the conference to see if there is a “house style” the conference favours. Also think about whether you can get to the conference if selected – travel visas, availability, dates and costs – can you go if you are selected, and how will you fund it? Some organisations will support you because they want representation at the conference. Some conferences provide funding towards travel and accommodation. When you are applying, look at the balance of benefits and costs. Each of us will have a different view about what we want to do, and what cost/benefit we need to make it worthwhile.
Investigate what information the cfp requires
Look carefully at the session options offered. Think about what they want for different types of session. Typically the minimum you will be asked for, along with contact details, is a session title and an abstract.
You may be asked for a paper to explain your idea. You may be asked for key learning points, takeaways, what type of session this is, what type of audience it is aimed at, your speaking experience, evidence in the form of supporting documents, videos… it all depends on the conference.
What helps your submission succeed?
Factors that will help your submission for many industry conferences include:
telling a really good story – something compelling, coherent, concise, and which flows from the title, into the abstract and through to any takeaways.
focusing on your experiences of your projects – things where you are demonstrating your involvement, challenges you have faced and overcome, mistakes you have made and learnt from – rather than using your abstract to regurgitate theory.
having a new perspective to offer, something that has not been offered at this conference before.
If you don’t have speaking experience, think about getting mentoring – within your local/national industry communities, within your organisation, or via the conferences. You could look at SpeakingEasy for example ( https://speaking-easy.com/ ). Also look for opportunities to speak at local meet ups and national conferences before going for the larger international conferences. It’s likely that fewer people will apply and this increases your chances of being selected.
Conferences will often have themes that change year to year. Many conferences in addition are looking for speakers and sessions that increase the diversity of ideas and people, improve inclusiveness, are engaging, participative and interactive, allowing the audience to not just listen but also take part.
Have your own compelling story
About something unique, transformational
About overcoming challenges
Keep it coherent, well focused
Keep it clean…
Ask for help!
Get it reviewed
Get it proof read
Speak at smaller events first…
Ask for feedback
What to avoid
Don’t just send the same abstract to different conferences – they each want something different. Don’t send the same abstract several times to the same conference for different session types – it just annoys the programme committee. Don’t send an excessive number of submissions – it is better to have one really well thought out abstract.
CHI 2019 was in Glasgow this year, and although I could not get to the main conference, I attended, and got a lot from, a “pre” event and a “lite” event run around the main conference.
The PreCHI day took place at the University of Dundee, and was a chance to hear a precis of papers delivered by academics from Scottish universities at the main conference. This was a good day, mainly for me to understand the breadth of research in HCI, and the types of project that are happening. All the talks were interesting in that respect, ranging from a comparison of comics and infographics for helping to convey factual information, through virtual reality studies, to health care, network analysis, haptics and advocacy. The highlight talk for me was “Developing Accessible Services: Understanding Current Knowledge and Areas for Future Support” (Crabbe, Heron, Jones, Armstrong, Reid, Wilson), where among other things a useful matrix of accessibility needs by (time?) against (type/area?) gave me pause for thought. The accessibility areas were: cognitive, communication, visual, physical, and emotional, and the temporal axis was permanent, temporary, situational. So someone holding a heavy object is situationally, physically impaired from, say, picking up another object. When you start looking at accessibility in that way, it reinforces the idea that all of us need accessibility. It could feed into some of the ideas for the test tools work – accessibility of the tools, and of the information the tools generate. There are other presentations I should follow up, in terms of work on haptics, embodiment, and advocacy, when thinking about my next steps. Notes are in Polish notebook.
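That needs-by-duration matrix is easy to capture as data. Here is a minimal sketch in Python, using the category names from my notes (the paper’s own labels may well differ), with the heavy-object example from the talk:

```python
# Sketch of the accessibility matrix from my notes: needs (rows)
# crossed with a temporal axis (columns). Category names are as I
# noted them, not necessarily as the paper labels them.
NEEDS = ["cognitive", "communication", "visual", "physical", "emotional"]
DURATIONS = ["permanent", "temporary", "situational"]

def describe(need: str, duration: str) -> str:
    """Return a short description of one cell of the matrix."""
    if need not in NEEDS or duration not in DURATIONS:
        raise ValueError("unknown need or duration")
    return f"{duration} {need} impairment"

# The example from the talk: someone holding a heavy object is
# situationally, physically impaired from picking up another object.
print(describe("physical", "situational"))  # situational physical impairment
```

Enumerating all 15 cells like this could be a starting checklist for the test tools work – which cells does a given tool, or its output, account for?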
CHIlite was an evening of highlights from the CHI conference, open to the public. A very good evening, with inspiring presentations that show how the HCI community is seeking to make the world a better place. Talks included “Bringing the Internet to the Brazilian Amazon” (Leal), “Seeking social justice through storytelling” (Ahmed) and “How can apps support sustainable behaviour?” (Nkwo) – so heartening to see younger people engaged in bringing technology to their communities in a positive way, to do good. Two talks by older practitioners, Konstan’s “What makes a good recommendation?” and Sundar’s “Do we trust the machines too much?”, were thoughtful caveats on how we perhaps trust IT too much. A few of the presentations spoke about the importance of the user/customer feeding back to the developer(s) about what they needed, what they liked and disliked – a call for the user to have a greater voice in what is delivered. Hofman on putting a 3D printer in the doctor’s office was a good example, on patients requesting what they needed from a 3D-printed artificial limb, while Trullemans talked about control of the use of our smart environments, Dereshev asked what it is like living with a companion robot, and Miyashita demonstrated how technology can fool us with amazing visual effects that disguise reality.
Actions to take: get the papers that are most relevant, read them, and add to the literature review.
More notes from CHIIR 2019 – so here are some highlights of session 3 … The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick notes nature of it…. and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…
Session 3 paper 1: Knowledge context in search systems: towards information-literate actions By Catherine L Smith and Soo Young Rieh, see https://dl.acm.org/citation.cfm?id=3298940 for the paper. This really interested me – a perspectives paper about how we learn, and whether we learn, when using search engines. Main points:
“the knowledge content in SERPs has great potential for facilitating human learning, critical thinking and creativity by expanding searchers’ information-literate activities such as comparing, evaluating, and differentiating between information sources”
“we discuss design goals for search systems that support metacognitive skills required for long-term learning, creativity and critical thinking”
I made a note during the presentation – we don’t remember information stored on the computer, but we have a feeling that we do know it, and we do remember where we stored it (?) – it makes it harder to learn something new. The talk quoted Sparrow, Liu & Wegner 2011 – we remember where but we don’t remember what, e.g. phone numbers. It strikes me that this is perhaps OK for phone numbers – we’ll find them on the phone or in an address book (virtual or physical) – but for information generally on the web, it must be harder – the “where” is much more diffuse. A comment in the presentation: the feeling of knowing increases with searching on the web even if the search returns irrelevant information. Another comment: the accuracy of our judgement about whether we know something is reduced by using web search.
The paper and presentation call for the support of information-literate searching: the design of search engines to support greater information literacy by contextualising search results, and actually slowing people down so that they are supported in long-term learning.
I compare this paper to “Choosing the right test automation tool: a grey literature review of practitioner sources” (2017) by Raulamo-Jurvanen, Mantyla and Garousi.
In the grey literature review, one of the findings was that when people look for information on the web about test tools, they pick the most popular, most-mentioned tools and resources. Therefore, if those tools are popular or fashionable but not necessarily right for the searcher’s context, they may end up with the wrong tool for their purpose.
Quotes from that grey literature review: once people had chosen a tool based on their web search for information, “trial use would often lead to wrong decisions”. Question: the popular tools – are they popular because they are good, or popular because they are popular, and therefore have user groups, support, etc.? Also note their point at the end of the paper on cognitive overload – people choose what is obvious: “tendency for cognitive overload is likely to increase the prevalence of shortcut decision making proportionately”; “social proof as a weapon of influence is claimed to be most influential under 2 conditions: uncertainty and similarity” (the authors referring to Cialdini).
Taking the two papers together, does this indicate that testers (and other people involved in test tool selection) need support for better decision making – better information literacy when looking for information about tools and automation?
do I know it?
can I find it?
having found it do I know how to judge it and whether to trust it?
The knowledge context for a tester is testing as a discipline, within the IT industry, serving a particular domain. A tester requires knowledge and information literacy across all those knowledge contexts. Testers need to be critical thinkers – given the point made in Smith & Rieh that the use of ILA “may be seen as an indicator that the system is not sufficiently optimised”, does that indicate that search engines as a source for information about tools reduce critical thinking? Key quote: “In order to learn, understand, and gain confidence in their knowledge, information literate people ask and answer questions about the information they encounter”. Critical thinking and making independent judgements are key characteristics of good testers.
Also explore the points on transactive memory – where teams/pairs “split responsibility for remembering parts of the information required to complete a task” – how does that sit with the dev/test relationship? A different track to pursue – not for research, just interesting.
Summary findings are that when people believe information will be stored on a computer, they are less likely to remember it, and more likely to remember where the information is; the use of web search leads people to overestimate how much they know.
in testing we use the concept of the oracle for test results
which I have always found funny, given that oracles (e.g. Delphi) tended to be ambiguous and easy to misinterpret
information literacy includes the use of multiple oracles, and comparing them – and indeed not treating them as oracles, but as information sources to be critically assessed and questioned.
The ways we understand whether to trust information includes the “bibliographic knowledge-context” (publisher, author, form, reading level scores) and the “inferential knowledge-context” (other works, comparisons, citations, history, versions, valence / biases) – can this be mapped to how we understand tools?
for testers, there is a tension between a need to get information quickly and the need to critically assess that information – especially when we are in a hurry. What can we trust?
testers use web sources to learn – need to critically assess those sources
testers provide information obtained from tools – need to critically assess that information
this reminds me of the point in the conversation with Dot Graham on the “illusion of usability”
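The “can this be mapped to how we understand tools?” question above could, perhaps, be tried as a concrete checklist. The two context names come from Smith & Rieh, but the fields and questions below are purely my own hypothetical sketch, not from either paper:

```python
# A hypothetical checklist mapping Smith & Rieh's two knowledge-contexts
# onto questions a tester might ask about information on a candidate
# tool. Field names and questions are invented for illustration.
KNOWLEDGE_CONTEXT = {
    "bibliographic": {   # who produced the information, and in what form?
        "publisher": "Who maintains the tool / published the review?",
        "author": "Is the author identifiable and experienced?",
        "form": "Vendor page, blog post, or peer-reviewed study?",
    },
    "inferential": {     # how does it relate to other information?
        "comparisons": "Has the tool been compared against alternatives?",
        "citations": "Who else references or recommends it?",
        "history": "How long has it existed, and how active is it?",
        "bias": "Does the source gain from my choosing this tool?",
    },
}

def questions_for(context: str) -> list[str]:
    """List the checklist questions for one knowledge-context."""
    return list(KNOWLEDGE_CONTEXT[context].values())

for ctx in KNOWLEDGE_CONTEXT:
    print(ctx, questions_for(ctx))
```

Working through both lists before trusting a “top 10 test tools” article would be one small step towards the information-literate searching the paper calls for.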
The whole conference was exciting, friendly, and so packed with information that by the end of Wednesday I was unable to ingest any further ideas!!! It was just great. I got something from each session, and there were a couple I wanted to follow up on for specific reasons – so here are some highlights of session 1 and session 2… The audience I anticipate for this blog is 1 – namely myself when I want to remember what happened… so if you are not me reading this, apologies for the quick-notes nature of it… and there is probably both more detail than you need and yet… not enough. Follow the links to the papers if you are interested…
Session 1, Paper 1: Learning about work tasks to inform intelligent assistant design (presented by Johanne Trippas and with a huge list of co-authors – see https://dl.acm.org/citation.cfm?id=3298934 for the paper)
Here are some notes I made during the talk… and at the conference after a brief chat with Johanne:
wanting to empower people in their work
need to understand how people complete tasks
looked at cyber, social and physical aspects
asked people what tasks they were doing at work, and how much time on each task…
what do we mean by “context” when the context is the workplace?
need to understand HOW people complete tasks – thinking about collaboration, how much movement/physical activity is involved, how people are using tools (and which tools), how people classify their tasks, how the tasks change over time (of day, of week?)
find out what people want from intelligent assistants
(Isabel thought – Hmmm – so a mix of a manager and a PA??? As we talk more about self-managed teams, agile methods, etc… as we remove those human interactions and support that we get from a good manager, or a good PA… are we leaving people a little lost? feeling a little abandoned…?)
from the findings make recommendations for improving intelligent assistants at work.
Information workers do multiple tasks; what is a meaningful breakdown of those tasks? Hierarchy of activity/purpose of tasks – getting people to categorise their tasks is difficult. (Thought from Isabel – do people understand their tasks in terms of the reason they are employed, why their organisation needs them, their purpose… or do they see their tasks as a series of small busy things that don’t particularly relate to a wider purpose?)
And here are some notes I made when reading the paper post conference:
a note is made about several ways to understand tasks – and refs to ways to do this ***follow up*** This could be a way to look at how people relate testing tasks to tools and to automation???
naturalistic field studies
statistical time use surveys
studies of information needs, communications, information seeking – these could be relevant for methods???
survey (method used in this paper)
(Isabel note: cyber, physical and social activities – that is an interesting split; being at work is not just about completing tasks, there is also an element of the team or department as a community, and the physical part – that’s interesting – the effect on one’s body of the way the tasks are done…)
a note in section 2.3 about KUshmerick and Lau using FSM’s to formalise e-commerce transactions… Hmmm – could that be a tool / technique to document interactions in a test team between test designers and automators…??? ***think about this***
I can see, looking at section 2.3, that I am looking at a subset of a subset of tasks… Unless I get interested in what distracts people from their main/key task??? Leave that one alone for now…
The categories used in this paper’s task taxonomy could be a useful starting point for a taxonomy of testing tasks – it would be interesting to see if testers divided up their time in a similar way, and what sub-categories there might be under each category in the taxonomy. I know how I would break it down for how I work – but would it be the same for other testers? It could be quite different…
For example, “IT” is one category and “project” is another… so if you are in IT, then (I guess) IT activities you do in order to provide yourself with an infrastructure to do your own testing are in “IT”, and activities you do in order to test software being delivered in a project to a customer are “project” activities. So is managing the test automation an “IT” task – because it supports the testing, and is not in itself the purpose of the project? It would be interesting to see how testers categorise it…
I’m interested in the point in section 4.4 about how intelligent assistants could help with longer-duration tasks – the idea of an assistant that keeps a note of incomplete tasks to be resumed, for example. (Isabel note to self: have a look at agile/lean/kanban task duration recommendations and see if they fit with the task times being reported in this paper – what is the longest task people can work with as a “long task”? Is the “length of meeting” rule I was brought up on still valid? No more than 2 hours, preferably no more than an hour, break after an hour, attention into flow state after 15-20 minutes. How does that fit with the 15-minute stand-up meeting advice for Scrum?)
Section 4.5 lists some tools people use (digital, and physical such as post-it notes and a paper calendar) – make sure I have physical tools included in what I ask about.
Concluding note – there is a lot for me to follow up in this paper, and ideas to use as a model for surveys and analysis.
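The Kushmerick and Lau note above – using FSMs to formalise interactions – could be tried out on the designer/automator hand-off idea. A toy sketch, with states and events invented purely for illustration:

```python
# A toy finite state machine, sketching the section 2.3 idea of
# formalising interactions as FSMs - here, a hypothetical hand-off
# between a test designer and a test automator. All states and
# events are my own invention, not from the paper.
TRANSITIONS = {
    ("drafted", "handed_over"): "in_automation",
    ("in_automation", "question_raised"): "clarifying",
    ("clarifying", "answered"): "in_automation",
    ("in_automation", "automated"): "in_review",
    ("in_review", "approved"): "done",
    ("in_review", "rejected"): "in_automation",
}

def run(start: str, events: list[str]) -> str:
    """Replay a sequence of events and return the final state."""
    state = start
    for event in events:
        # A KeyError here would mean an interaction the model forbids.
        state = TRANSITIONS[(state, event)]
    return state

print(run("drafted", ["handed_over", "question_raised", "answered",
                      "automated", "approved"]))  # done
```

The interesting part would be the forbidden transitions – the interactions the model says should not happen – as prompts for discussing how a test team actually works.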
This presentation and paper interested me partly as a library user, partly because of some new-to-me concepts the authors discussed, and partly as some input into UX/devices&Desires/imagine-our-customers sessions that I have coming up soon.
I liked the idea of place and space – the physical location and layout, versus the semantic meaning. For example, “a place with lots of bookshelves is not necessarily a library”, so we look at what people do, as opposed to what they ask for… or talk about.
Isabel note: in the same way – when does a test lab become a test lab? When is it an “information place”, and what else could it be? Is this a useful idea to explore?
They talked about “wizard of oz” methods – I had not heard of that before – need to look into it…
They talked about the movement between physical and digital media when looking for information in a library. Isabel note: that too could be analogous?
“people reconstruct the technology you give them” – interesting quote – technologists provide methods, approaches, devices, etc but how people react to that may be unexpected, and the devices might be used for different purposes, in different ways. (That came up in the Museums keynote too – that people don’t interact with technology in the way curators expect)
from the paper:
“information interactions are strongly affected by the place where they occur”
“There is considerable ignorance of and resistance to the use of digital resources … some of which is related to the physical realities of the library”
section 2.2. seems to indicate that digital resources in a library are behaving like “closed stack” systems – where you need to know what you want and order it by name – rather than open-stack systems where you browse the shelves and serendipity leads you to new books, authors, topics…
The paper quotes Warwick on the “danger of technocratic arrogance if we assume everything can be modelled digitally and thus improved” [ref is #21 in this paper – Warwick, C., 2017, “Beauty is truth: Multisensory input and the challenge of designing aesthetically pleasing digital resources”]
Note from Isabel – I was reminded of my experiences when Worcester public library merged with the Worcester Uni library, so that instead of finding, say, “gardening books” all together, they were split across agriculture, horticulture, design… and the shelves were a mix of amateur/easy-to-read and academic/industrial/professional. My personal experience was that I now found it harder to find what I needed… or I caught myself up in looking at additional material that was not really relevant. There is tension between relevance and serendipity…
Note from Isabel: the lesson for the TX research is maybe about making the tester’s workspace (physical and digital) work as one – and also, for other stakeholders in testing, to think about how the information reaches them, and how the medium for that information fits with each person’s working preference – without being “gimmicky” (see section 9 of the paper).
quote: “designers should consider space and place carefully when designing mobile experiences”
The conference opened on Monday with a keynote from Ranjitha Kumar, which I found eye-opening and inspiring. Her team are working on “Data Driven Design: beyond AB testing”. She pointed out that money spent on design does not always repay in results, and that A/B testing can be usefully supplemented with other methods. In particular, her team is working on “design mining” (rather than data mining) to find out what designs are being used elsewhere – she said there is a rich seam of designs available which give inspiration and a test/review point. She talked about the need to connect design with KPIs, and to understand the success of designs in terms of their effect on KPIs.
The second keynote, on Tuesday, was also fascinating. Daniela Petrelli showed three case studies of making visitor experiences during museum visits multisensory, more engaging and more memorable. By using IoT technology, objects can be used to engage visitors in specific stories. I particularly loved the votary lamp that allows visitors to an exhibit on Hadrian’s wall to choose three items – each a different god – and receive a personalised postcard with oracle-like messages. This was a study at Chesters Fort, specifically around the Visitor eXperience of the Clayton collection. The three case studies indicated that visitors are more engaged and remember more when they use a physical object to access information – rather than a digital screen/phone – because they slow down and take longer to examine objects. The IoT technology allows small objects – facsimiles that can be held in one’s hand – to be used to interact with video, audio, etc. related to exhibits, and allows visitors to choose the viewpoint they experience in their journey through the museum.
I loved these two keynotes – interesting in so many ways, for me as a consumer of information on the web and in museums, but also as a test consultant. Possible analogies – these gave me some thoughts about the experience of testers in their projects.
For example, if it is true that people are more engaged and remember more when interacting with physical objects, could we use this idea to change how people examine and interact with information generated by testing? This is NOT age-related… What does it tell us about how we generate, use and display information?
for example, if design mining is a useful supplement to A/B testing, how could it be used to supplement how we test designs – could it be a source for heuristics to use when testing interface designs?
For example, what we as digital experts provide and are proud of is not always what the consumers of our work want or expect. For example, the questions that a search engine or chat bot responds to are not always the questions consumers want to ask. How can testers find out and understand what consumers actually want? That includes the consumers of the information from testing.
From those questions, I wonder about our testing dashboards – not for the first time in my decades in industry – and why we don’t talk with our stakeholders in their language. I’ve been talking about this for years, presenting on it, teaching about it… I’ll continue with that. Quote from K1 about fashion websites: customers ask for “hot pink”; websites talk about “fuchsia” or “magenta”.
K2 provided a mini lifecycle for co-design and co-development, where a technical person, a designer and a curator get together and split apart repeatedly to generate and test the ideas and design for artefacts. Is there an analogy to the developer, UXer and Product Owner, and if so, where is the testing, and is there a need for a specific tester role?