Initially, we hypothesised through abductive reasoning that burnout was caused by three main factors:
Personal factors (e.g. compassion fatigue, emotional exhaustion) and the emotional demands of working with clients
Lack of an extensive support system - supervisors and peers/coworkers
Inefficient use of time - spent on paperwork, organisation and compliance with regulations
Summary Of Problem
Through group synthesis, we identified that these were large, complex issues requiring not only major organisational change but also industry-wide reform. The core issues surrounding social work cannot be solved simply by developing an interface.
We found that self-care (which is treated as the social worker's responsibility, not their organisation's or government's) was the primary approach to reducing burnout. However, self-care is ineffective once burnout has set in: many workers who burn out leave the field and do not return - a key issue affecting social work.
If the problem situation of preventing burnout is approached as a problem of ineffective self-care, the solution could be self-actualisation. Hence, we hope to create an individualised solution that reduces burnout by encouraging workers to recognise, reflect on, and be proactive about their burnout and stress levels before it is too late.
Rationalising Our Chosen Design Concept
From three initial concepts, we deemed a break tracker (and the features we had planned for it) the best option, as it fulfilled three critical success criteria that we developed (the criteria can be found below):
the product must help social workers recognise the signs of burnout - encouraging users to minimise intense caseloads, a major influence on burnout
the product must empower social workers to be proactive about their wellbeing - promoting breaks to alleviate workplace stress
the product must require a low cognitive load - social workers are already stressed by their caseloads and have limited extra time for any other work-related activities
The criteria in bold carry the highest weighting and were therefore prioritised when designing the application.
Our Participants
Context Aware Group
THE DEMOGRAPHIC
Social workers with a range of experience, from those who have been in social work for 10-20 years or more to those who have been in the field for less than 5 years.
WHY MORE EXPERIENCED PARTICIPANTS?
We are approaching our user evaluation partly through the lens of more experienced social workers who have been able to reflect on or pick up on their own burnout. They may already have conceptualised their own coping strategies, and by comparing our concept against their experience we can reflect on and iterate our solution to better fit the needs of social workers.
GENERAL USABILITY GROUP
The other group consists of participants who are not directly involved with the utility of the concept, but instead focus solely on its usability. We wanted an age bracket similar to that of our target market, so we recruited individuals aged 20 or over - the age at which most workers in the social work industry first encounter real workplace circumstances.
low fidelity screens
We conducted a pilot test of our first prototype, a motion-based break tracker. Doing so allowed us to practise usability testing methodologies and to surface any problems that could otherwise become missed opportunities to improve product usability and efficiency.
This process consisted of creating:
simple user journey map
concept flowchart
wireflows (sketches)
paper prototyping
By recruiting fellow student designers, we were able to get solid feedback on our initial concept. Having participants critique our designs and think aloud, while we observed their pauses and body language, gave us better insight into how usable our paper prototype was.
Iteration 2: Moving Onto Mid Fidelity Prototypes
medium fidelity wireflow
Phone App: Break Tracker
Screens were created based on the previous flowcharts to visualise how interactions could work, drawing on the features available within the app. We also incorporated a 'day view' and a 'night view': in the 'night view', once the user has finished their work day, new features (such as inputting an emotional evaluation) become available. This was a response to feedback - "at home is where most of my self reflection happens" (Anastasia, Usability Participant) - indicating that users would not input reflective comments during work time.
After critiquing visual elements, we combined the aspects we thought would best address our problem scenario. This led to a rough wireframe in Adobe XD. We then created mid-fidelity wireframes in Figma and finalised them to produce a more cohesive visual aesthetic for testing.
Reflections on the Methods Employed
TIMING PARTICIPANTS
Part of the think-aloud protocol was timing how long each task takes to complete. We can then compare these times against later iterations, where the same tasks are repeated, to see whether the product has become easier and more efficient to use.
For our mid fidelity prototype, we had three tasks for the user to complete. Taking a break took the longest, as participants had difficulty finding the add-break section and consequently explored other parts of the homepage before adding a break.
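As a sketch of how these timings feed into cross-iteration comparison, the snippet below computes mean completion time per task and the percentage improvement. The task names and times are hypothetical placeholders, not our measured data:

```python
# Hypothetical task completion times in seconds (illustrative only).
mid_fi_times = {"take a break": [95, 110, 120], "log a reflection": [40, 55, 48]}
hi_fi_times = {"take a break": [50, 62, 58], "log a reflection": [35, 41, 38]}

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Compare the mean time per task between the two iterations.
for task in mid_fi_times:
    before, after = mean(mid_fi_times[task]), mean(hi_fi_times[task])
    change = (before - after) / before * 100
    print(f"{task}: {before:.0f}s -> {after:.0f}s ({change:.0f}% faster)")
```

The same structure works for any number of tasks or testing rounds.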
USABILITY SCALE
SUS Scale Round 1 of Testing
Scale results are calculated by subtracting 1 from each odd-numbered statement's score and subtracting each even-numbered statement's score from 5, because the statements deliberately alternate between a positive and a negative tone. The summed score is then multiplied by 2.5 to give a value out of 100.
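This scoring procedure can be expressed as a small function (the example responses are illustrative, not our participants' data):

```python
def sus_score(responses):
    """Compute one participant's SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered statements are positively worded: contribution = response - 1.
    Even-numbered statements are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: answering 4 to every positive statement and 2 to every
# negative one gives (5 * 3 + 5 * 3) * 2.5 = 75.
print(sus_score([4, 2] * 5))  # → 75.0
```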
According to Jeff Sauro, PhD, the average SUS score across an analysis of 500 studies is 68. A score above 68 is considered above average, and anything below 68 is below average.
From our testing with 5 context-aware participants and 7 general usability participants, we averaged a score of 62.3, which is below that benchmark.
HEAT MAPPING
We heat mapped the areas participants found most negative or unclear, allowing us to spend our time on the biggest issues and accommodate the majority of users rather than trying to address every individual opinion on the product.
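A minimal sketch of how such heat-map annotations can be tallied to rank the most commonly flagged areas; the screen-region labels below are illustrative assumptions, not our actual findings:

```python
from collections import Counter

# One label per participant annotation (hypothetical example data).
annotations = [
    "home: add-break button",
    "home: add-break button",
    "night view: colours",
    "home: add-break button",
    "settings: icon meaning",
]

# Rank regions by how many participants flagged them, so the most
# commonly flagged issues are addressed first.
for region, count in Counter(annotations).most_common():
    print(f"{count}x {region}")
```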
CARD SORTING
Only a few of our participants felt like they wanted to change the flow of the homescreen. These results were taken into consideration for our high fidelity prototype.
CO-DESIGN WORKSHOPS
By printing out our prototype screens, participants were able to annotate and draw in their own changes. This was especially important with our context aware participants as they would be the ones using our app.
PUTTING THE IDEA UP AGAINST THE SUCCESS CRITERIA
Whilst we were able to solve the problem of getting users to recognise the signs of burnout and to reflect on a consistent basis, the product was not easy and engaging to use - this affected the product's usability, and consequently increased the cognitive load of the user.
Iteration 3: Finalising our High Fidelity Prototype
Using Figma, we were able to collaborate and create wireframes based on our feedback.
PICK ME UP PHONE APP
Again, we used sketching to draw up new ideas and refine individual screens. In Figma, one person led the overall design aesthetic so that the design was more consistent.
Below are images to show how we moved from medium fidelity prototypes into high fidelity based on the feedback we had received.
Evaluation of Our High Fidelity Prototype
For our context-aware group, we kept the same participants as the first round. However, to test the usability of our product we recruited new general usability participants, so that a fresh perspective on the product would yield unbiased quantitative results.
TIMING PARTICIPANTS ROUND 2
Compared to our mid fidelity testing, our timed results under the think-aloud protocol improved greatly, even accounting for the fact that some context-aware participants had already seen the product.
Furthermore, shortening the scenario statement cards proved helpful, as users were freer to explore the product as they liked.
SYSTEM USABILITY SCALE ROUND 2
After conducting our high fidelity prototype user evaluations, we were able to see how our repeating context-aware participants scored our high fidelity iteration in comparison to our mid fidelity prototype.
From the graph above, it is clear that usability has improved: there is a 21-point difference in the average usability scores compared to the mid fidelity prototype.
HEATMAPPING
As in the first round, we heat mapped the areas participants found most negative or unclear, so that we could focus on the biggest remaining issues for the majority of users.
PUTTING THE IDEA UP AGAINST THE SUCCESS CRITERIA
Through our user evaluations, we found that our high fidelity prototype fulfilled our initial success criteria. However, we were not able to integrate the product into the workflow. This is a criterion we chose to sacrifice, as integration into a social worker's workflow would only increase their cognitive load and associate the positive action of taking breaks with the negative perception of the workplace.