Ricky Yu | Archero

Archero

Archery training app

UX in audio interface,

ubicomp design

Team: Edward Zhang

10/2017 - 12/2017

Problem Space

 

In archery, it is essential for an archer to get the correct posture before shooting. But most beginners find it very hard to adjust their posture by themselves because they can't see their own bodies.

 

Solution

 

We present Archero, a wearable system that uses audio cues to help archers adjust their posture. Sensors on the archers' arms detect their posture in real time and send the posture information to a mobile phone. While preparing to shoot, archers wear earbuds connected to the phone and receive a series of audio cues that tell them the current state of their posture.

 

 

The following video shows the basic concept of this product.

 

My Contribution

 

Audio Design: designed a series of audio cues to effectively convey real-time information about archers' postures.

UI Design: prototyped a user interface that helps users understand the audio cues and navigate easily.

Research & Analysis: participated in identifying users' pain points and analyzed the research data to derive design insights.

 

 

Approach

 

 

Research: ethnography, interviews
Synthesis: ethnographic codes, affinity map, competitive analysis
Design: brainstorming, sketches
Prototype: high-fidelity prototyping, JavaScript coding
Evaluation: to be continued...
 
Research & Synthesis

Ethnography

At first, we had limited knowledge of archery, so my team decided to conduct ethnographic field studies to better identify users' problems. We spent three weekends in a Georgia state-sponsored archery program at Red Top Mountain State Park, observing and practicing with proficient archery instructors as well as beginner archers to gain insight into the sport.

 

Our research methods included documenting our own practice with photos and notes, interviewing both skilled and unskilled archers, and reviewing the documentation provided by the instructors.

 

 

 

An example of an incorrect posture

 

 

A correct posture demonstrated by the instructor

 

 

We found that correct posture is the prerequisite for landing an arrow on the target. But even after lots of practice, people still have trouble adjusting to the correct posture and rely on help from the instructors. So we decided to focus on helping beginners in archery learn how to adjust their posture by themselves.

Interviews and Affinity Map

To learn more about our users' needs and concerns, we conducted interviews with both beginners and experts in archery. With the beginners, we mainly wanted to know what they thought was stopping them from getting the right posture. With the experts, we wanted help identifying the most common mistakes people make when learning archery.

 

We used an affinity map to organize most of the data we collected from the ethnography and interviews. Here is a simplified version of what we found:

Make both arms level
- Archers should make sure both arms are level to the ground
- Skewed arms lead to failed shots

Don't know how to adjust postures
- Can't see my own posture while practicing
- Know the right posture but don't know how to adjust to it
- Need a coach nearby to tell whether their arms are too high or too low
- Use a digital interface to inform people of posture information

Can't tell what's wrong with an unsuccessful shot
- Always land the arrow on the right side of the target
- Just shoot every time without knowing what's going on
- Aiming adjustment requires a tremendous amount of knowledge, which is hard to transmit

(The notes combine expert opinions and beginners' opinions.)

From our research, we identified the following points, which shaped our design decisions:

Archers can't get the correct posture partly because they can't perceive the position of their arms during practice.

 

Archers can't get the correct posture partly because they don't understand the causal link between their posture and the accuracy of their shots.

 

Archers need to make sure both arms are level when they shoot.

 

 
Design Ideation

Since we need to help archers know about their current posture, our solution has to collect posture data. We realized this would involve physical prototyping, so we looked into some related products.

UPRIGHT GO

UPRIGHT GO is a back posture trainer. Users attach the device to their back, and it detects when their posture becomes slouchy, using vibration to alert them to the incorrect posture.

 

 

iPhone Level Meter

The iPhone's level meter detects whether the phone is level to the ground and reports the tilt in degrees.

 

 

After this, we came up with a solution to the posture-correction problem:

 

1. Equip level meters on the arms to sense whether the arms are level to the ground (a rough sensing sketch follows this list).

2. Figure out an effective interface to convey the posture information to the archers.
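
As a rough illustration of how the sensing in step 1 could work (a sketch, not code from our prototype), an arm-mounted accelerometer reading can be converted into a tilt angle in degrees, the same way the iPhone level meter reports tilt. The axis convention, variable names, and the 5-degree tolerance below are all assumptions:

// Sketch: derive an arm's tilt from a 3-axis accelerometer reading (values in g).
// Assumes the y axis points along the arm; names and tolerance are illustrative.
function armTiltDegrees(ax, ay, az) {
  // Pitch of the arm relative to the ground plane, in degrees.
  return Math.atan2(ay, Math.sqrt(ax * ax + az * az)) * (180 / Math.PI);
}

function isArmLevel(ax, ay, az, toleranceDeg = 5) {
  return Math.abs(armTiltDegrees(ax, ay, az)) <= toleranceDeg;
}

console.log(armTiltDegrees(0.98, -0.17, 0)); // ≈ -9.8°, so the arm droops below level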

My design work mainly focused on which interface we should use to convey the information. Through brainstorming and sketching, I came up with three different interface ideas.

Idea 1: Visual Arm Guard

The first idea adds a screen to the equipment archers wear in practice and uses it to display visual information. As shown in the sketch, we planned to put a screen on the archer's arm guard, with a level meter inside as well. This way, as archers move their arms, they can see the positions of their arms described in degrees.

 

Idea 2: Audio Interface

 

 

This idea uses an audio interface to convey the posture information. During practice, archers wear headphones or earbuds and receive voice instructions or audio cues describing the current positions of their two arms.

 

Idea 3: AR Glasses

 

 

This idea requires users to wear AR glasses during practice. When their posture is wrong, augmented information appears to tell them how to adjust. Although this information covers part of their sight, all augmented elements disappear once they reach the correct posture, so they can then focus on aiming.

 

Pros and Cons

 

 

To find out which interface works best, I conducted semi-structured interviews with some potential users to learn their thoughts on the three interfaces. Here are the pros and cons of each idea.

Visual arm guard

Pros: a visual presentation is helpful for conveying information

Cons: people look at the screen to adjust their posture, so they may lose focus on the target; distracting

 

 

Audio interface

Pros: light, convenient, and easy to use

Cons: instructions about posture might be vague

 

 

AR glasses

Pros: a visual presentation is helpful for conveying information

Cons: heavy to wear; may add a physical burden during archery

 

 

In addition, I found that people cared most about using lighter and more convenient equipment in archery because they wanted to feel relaxed. Some people also doubted the feasibility of the AR interface because current AR glasses might not meet the requirements.

 

 

In the end, I decided to use the audio interface to convey the information because it adds little digital burden on users during physical practice. But I needed to overcome the disadvantage of the audio interface by designing a series of effective audio cues to convey the posture information.

 

 

 
How it works

Archers wear sensors on their arms throughout the practice. The sensors detect whether the arms are level and send this information to an app on the archer's phone over Bluetooth. The archer keeps the phone in a pocket and uses headphones or earbuds to receive audio instructions, then adjusts their posture according to the audio cues.
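
Since our later prototype was built in JavaScript, here is a minimal sketch of what the phone-side data flow could look like using the Web Bluetooth API. The device name, the service and characteristic UUIDs, and the two-byte angle payload are illustrative assumptions, not a real protocol from the project:

// Sketch: subscribe to arm-tilt notifications from the wearable sensors.
const ARM_SERVICE_UUID = '0000aaaa-0000-1000-8000-00805f9b34fb';     // hypothetical
const ARM_ANGLES_CHAR_UUID = '0000bbbb-0000-1000-8000-00805f9b34fb'; // hypothetical

async function listenToArmSensors(onAngles) {
  // Let the user pick the sensor device; the name filter is an assumption.
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: 'Archero' }],
    optionalServices: [ARM_SERVICE_UUID],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(ARM_SERVICE_UUID);
  const characteristic = await service.getCharacteristic(ARM_ANGLES_CHAR_UUID);
  await characteristic.startNotifications();
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    // Assume the payload packs two signed-byte tilt angles: [left, right].
    const data = event.target.value; // a DataView
    onAngles(data.getInt8(0), data.getInt8(1));
  });
}

// Each reading would then drive the audio cues described in the Audio Interface section below.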

 

The target is also equipped with sensors that detect the position of the arrow. These sensors send the phone information about which rings the archer has hit. The app can also tell archers about the mistakes they made during the practice and give advice on how to correct them.
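
As a hedged sketch of what the target-side scoring could mean in code, an arrow's landing point relative to the target centre can be mapped to a ring. The coordinate convention and the 6.1 cm ring width (from a standard 122 cm World Archery face) are assumptions about the hardware, not details from our sensors:

// Sketch: convert an arrow's landing point into a ring score.
function ringScore(xCm, yCm, ringWidthCm = 6.1) {
  const distance = Math.hypot(xCm, yCm); // distance from the target centre
  const ring = 10 - Math.floor(distance / ringWidthCm);
  return Math.max(ring, 0); // 0 counts as a miss
}

console.log(ringScore(2, 3));   // near the centre → 10
console.log(ringScore(40, 10)); // outer rings → 4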

 

 

 
Audio Interface

Why non-speech sounds?

 

When designing the auditory instructions, I needed to decide whether to use speech sounds or non-speech sounds. In the end, I went with non-speech audio cues, and here's why:

The problems of speech instructions

Here are three examples of speech instructions and why they don't work well.

Too Slow!!

Unlike non-speech sounds, which can be as simple as a beep, speech needs a combination of sounds to convey semantic information, so it takes much more time to deliver.

 

However, the real-time movement of the user's body is very fast, so speech instructions would arrive with a delay.

How to adjust, exactly?

Even if we simplify the instructions to two words, problems remain.

 

Two words can't describe a user's posture accurately; users don't know to what extent they should adjust their arm.

In this example, users don't know how much higher they should go.

Can't understand numbers

Even if we add "30 degrees" here, the instruction is still vague for users.

 

Not everybody is good at translating numbers into the distance of a movement.

 

And it takes four more syllables to say.

The advantages of non-speech instructions

1. They are really fast! Think of the beeps of a telegraph.

2. Volume, frequency, and timbre add extra dimensions to the information a sound carries, which greatly expands how much it can convey.

3. Although they are not precise either, gradual changes in the perceptual properties of the sounds help users explore and learn.

Perceptual properties

 

In archery, the archer's two arms can be simplified into two lines, so the sensors need to detect whether those lines are parallel to the ground.

 

The non-speech sounds therefore need to distinguish the left arm from the right arm and indicate whether the posture is correct. If it is not, the audio cues should tell people whether to raise or lower the arm.

 

Different perceptual properties of the sounds are used to convey these different pieces of information.

The following video shows an interactive prototype that demonstrates how the audio cues sound when your arms are in different positions. It is a simulator of an archer's arms: in side view the arms are represented as lines, and in this JavaScript prototype people can drag the lines to hear the differences in the sounds.
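
The exact cue design isn't reproduced here, but a minimal Web Audio sketch of the mapping idea — pitch for how far an arm is from level, stereo panning to separate the left and right arms — could look like the following. The base frequency, the 8 Hz-per-degree step, and the hard left/right panning are illustrative assumptions, not the values used in the prototype:

// Sketch: map each arm's tilt (degrees from level) to a continuous audio cue.
// Note: browsers may require a user gesture before the AudioContext can start.
const ctx = new AudioContext();

function createArmCue(pan) {
  const osc = ctx.createOscillator();       // one tone per arm
  const panner = ctx.createStereoPanner();  // -1 = left ear, +1 = right ear
  panner.pan.value = pan;
  osc.connect(panner).connect(ctx.destination);
  osc.start();
  return osc;
}

const leftCue = createArmCue(-1);
const rightCue = createArmCue(1);

function updateAudioCues(leftDeg, rightDeg) {
  // 440 Hz means "level"; every degree off level shifts the pitch.
  leftCue.frequency.value = 440 + leftDeg * 8;
  rightCue.frequency.value = 440 + rightDeg * 8;
}

A continuous mapping like this is what lets users hear how far off they are, which the two-word speech instructions above could not express.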

 

You can play with this prototype by clicking here.

Voice Interaction

Besides the non-speech cues, I still think we need some speech instructions to help archers get used to this interface. The workflow for the audio instructions is shown below.
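
In a JavaScript prototype, a spoken prompt like the ones in this workflow is straightforward to produce with the Web Speech API; the wording below is an illustrative assumption, not a line from the actual script:

// Sketch: speak one instruction from the voice-interaction flow.
const prompt = new SpeechSynthesisUtterance('Sensors connected. Raise your bow when ready.');
window.speechSynthesis.speak(prompt);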

 
Interaction & UI

Before users start listening to audio instructions, they interact with a mobile interface first, so I also designed the interaction flow of the mobile app.

Besides the feature used during practice, the app contains two features people can use when they are not practicing.

 

"Help" feature will teach users how to interpret audio cues. It involves a simulator which allows users to quickly simulate the practice situation in order to learn the meaning of audio cues.

 

"Records" feature will analyze data of users' previous shootings to provide advice.

 

Below is the UI of the app. You can also click here to watch the demo video and learn about the interaction flow.

The simulator uses two lines to represent the user's two arms. Users can change the positions of the two arms and hear how the sounds change, which helps them learn the meaning of the sounds.

 

Designed by Fangxiao Yu,

last updated in February 2019