The following is an overview of our digital mockup, created with Balsamiq.
Interactive Display Overview
All states:



Phone Web-App Overview
All states:

Task 1: Sharing Emotions: Find a piece of art and share an emotional response. Figure out how many people felt the same way.
The user begins on their phone’s camera screen. A QR code at the front desk of the museum brings the user to the Amarteurs webpage.

The user arrives at the Amarteurs webpage, where they are asked for permission to run the webapp. If they tap Decline, the prototype ends. If they tap Accept, they are brought to the app’s idle screen.

Here is the idle screen of the webapp, waiting for the user to approach a piece of art.

The user approaches a piece of art and is prompted to select how they feel. They can tap to select an emotion from the different emojis below the piece of art.

The user has chosen their reaction emoji. They are shown a confirmation screen displaying the chosen reaction, with options to go back or submit. If they go back, they return to the previous screen with the possible emotions and the photo of the piece of art. If they submit, they are brought to the next screen.

The user has chosen to share their reaction emoji. They are told how many people felt the same way (chose similar reactions) and are shown that many faces with the same expression as the one they chose, reassuring them that others reacted as they did. The numbers are slightly fudged so that some minimum number of people are always shown to have reacted similarly.
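The mockup itself does not implement this counting logic; as a rough sketch of how the minimum floor could work (the constant and function name below are placeholders of our own, not part of the design), the displayed count would simply be clamped from below:

```typescript
// Sketch only: clamp the displayed reaction count to an assumed minimum
// so the user never appears to be the only person who reacted this way.
const MIN_SHOWN = 3; // placeholder value; the real floor is not specified

function displayedReactionCount(actualCount: number): number {
  return Math.max(actualCount, MIN_SHOWN);
}
```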

Task 2: Commenting: Leave a comment about an artwork and see how people who felt differently reacted.
Beginning where they left off, the user can tap the “Comment” button to progress to the next screen.

This screen asks the user to type a comment. They can tap the send button to submit their comment. As in the paper prototype, the comment is pre-prepared.

The user has submitted their comment and is taken to a screen of comments in speech bubbles. These comments were deemed “similar” to the user’s comment, and the words that triggered the similarity algorithm are bolded. If other people had similar thoughts, the user sees them here and is thus validated; otherwise, the user is still shown the list of comments deemed closest. They are prompted to go to the large screen if they want to see more comments, and can tap the button above this prompt to open a map showing the screen’s location.
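We have not specified the similarity algorithm in the mockup; a minimal sketch of the kind of word-overlap matching and bolding described above (all function names are hypothetical) could look like this:

```typescript
// Sketch only: naive word-overlap "similarity" between the user's comment
// and an existing comment, with shared words wrapped in <b> tags so the
// triggering words appear bolded, as in the mockup.
function sharedWords(a: string, b: string): Set<string> {
  const words = (s: string) => new Set(s.toLowerCase().match(/[a-z']+/g) ?? []);
  const bWords = words(b);
  return new Set([...words(a)].filter((w) => bWords.has(w)));
}

function boldMatches(comment: string, userComment: string): string {
  const matches = sharedWords(comment, userComment);
  return comment
    .split(/(\s+)/) // keep whitespace tokens so spacing is preserved
    .map((tok) => {
      const word = tok.toLowerCase().replace(/[^a-z']/g, "");
      return word && matches.has(word) ? `<b>${tok}</b>` : tok;
    })
    .join("");
}
```

A real implementation would presumably rank comments by the size of this overlap; the mockup simply displays a fixed set.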


Task 2 (continued on the interactive display):
The user has gone to the board. In its initial state, it shows a visualization of the exhibit with paintings surrounded by hues representing people’s reactions to each art piece. The user is prompted to select an art piece by tapping. In the prototype, they choose "Lilac Sheep".
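To illustrate one way the hue around each painting could be derived from its reactions (the emotion-to-color mapping below is an assumption of ours, not the mockup's actual palette):

```typescript
// Sketch only: color each artwork on the exhibit map by its most common
// reaction. The emotion-to-hue table is an assumed example.
type Emotion = "happy" | "sad" | "angry" | "surprised";

const EMOTION_HUE: Record<Emotion, string> = {
  happy: "hsl(50, 90%, 60%)",      // warm yellow
  sad: "hsl(220, 60%, 60%)",       // blue
  angry: "hsl(0, 70%, 55%)",       // red
  surprised: "hsl(280, 60%, 65%)", // purple
};

function dominantHue(reactions: Record<Emotion, number>): string {
  const [top] = (Object.entries(reactions) as [Emotion, number][])
    .sort((a, b) => b[1] - a[1]);
  return EMOTION_HUE[top[0]];
}
```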


The user has selected a painting by tapping on it. Silhouettes are congregated around the painting, and a row of emojis at the bottom lets the user see how many people chose a specific emotion and what they commented. In the actual implementation, these selections would rotate automatically while waiting for the user to press one.
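As a sketch of that idle rotation (the display hook here is hypothetical), the highlighted emotion could simply cycle on a timer until the user taps one:

```typescript
// Sketch only: cycle the highlighted emotion until the visitor makes a choice.
// highlightEmotion is a hypothetical hook into the display's UI.
function autoRotateEmotions(
  emotions: string[],
  highlightEmotion: (emotion: string) => void,
  intervalMs = 4000
): () => void {
  let i = 0;
  highlightEmotion(emotions[i]);
  const timer = setInterval(() => {
    i = (i + 1) % emotions.length;
    highlightEmotion(emotions[i]);
  }, intervalMs);
  // The caller stops the rotation once the user taps an emotion.
  return () => clearInterval(timer);
}
```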

The user has tapped an emotion (Angry in this prototype) and now sees the highlighted silhouettes of people who reacted with this emotion. They are also shown comment bubbles coming from the highlighted silhouettes. Tapping the group of people takes the user to a thread of comments from that group.


The user has pressed the comment bubbles and sees all of the comments associated with that reaction laid out. The user can tap “See Thread” next to a comment if there are existing replies and can tap the reply button directly to move to the replying screen.

The user tapped the “See Thread” button and now sees all of the replies to that comment uncollapsed. The user can reply to any of these comments by tapping the reply button.

Reply

The user can choose whether to type their reply or write it using a stylus found near the display.
Type

Write

Discussion
Reflecting on our digital mockup, it is clear that we have not made any significant changes to the structure of our design. The mockup includes all the important revisions implemented over the course of our cognitive walkthroughs and usability tests. One change we did implement in the transition from paper to digital prototype pertains to the comment-adding screen on the phone web-app. Previously, the user was not able to see the corresponding artwork while drafting their comment. We have added this functionality to the digital prototype, allowing the user to see the artwork they’re commenting about as they type. This provides context for the user and lets them continue to be exposed to the artwork as they formulate their thoughts.

A relatively inconsequential decision worth noting here is our general color scheme. We selected a light pink scheme to match WCMA's primary color scheme, contributing to the overall continuity of our design as it occupies space within WCMA. We’ve also added a help button to each page, so as to better address unforeseen confusion with our design. After our cognitive walkthroughs and usability tests, we’re confident in the usability of our design. Even so, we’re aware that even at this stage, users may have questions we haven’t addressed, a reality of making products for people to use. As such, a help button at every step is necessary.