INSTAGRAM REDESIGN

realinsta.png

Introduction

Misinformation has plagued our society and is perpetuated by the ubiquitous use of social media. Each of us has experienced some form of misinformation on different social media platforms, and we all share a common frustration with how often it appears on these apps. Therefore, when we were tasked with basing our project on fighting misinformation, it was only natural for us to address social media, since it is a common source of news for many people, and to look for a potential solution for limiting its spread.

Stakeholders

For our stakeholders, we mainly wanted to focus on average people who use social media; however, we also included content creators/news sources and app developers/social media platform providers as potential stakeholders. We chose the typical social media user as our primary stakeholder because users are the ones most influenced by misinformation. They may not post their own content, but they consume large amounts of it and are often the ones who end up spreading it further by engaging with posts or sharing them with friends and family.

However, we also needed to stop misinformation at its source: those who post it in the first place. Content creators, such as news sources, push out most of the content on these platforms and have a lot of influence over the average user’s feed. Many content creators have a large audience to which they can spread information that may or may not be true. They already carry some form of public credibility (e.g., the blue verified checkmark), but that credibility does not guarantee that the information they post is correct.

App developers should also be included as stakeholders since they regulate the platform and have the ability to create new features, such as flags for content that may contain misinformation.

Interviews

General Questions

  1. What’s your most used social media platform?

    1. Why? / What do you use it for? / How do you use it?

  2. Have you ever encountered any form of news / information on this platform?

    1. From your experience, do you believe that the news and information that you encounter on these platforms are always/sometimes/mostly correct?

  3. What do you know about misinformation? 

    1. What is misinformation?

    2. Where do you see misinformation?

From our diverse pool of interviewees, we identified multiple sources of information and the ways people judge them trustworthy. All of our interviewees discussed their own trusted news outlets, chosen largely because they aligned with their views, while remaining aware of those outlets’ biases. They also identified several signifiers they use to decide whether a source is trustworthy, most prominently verified checkmark symbols, links to sources in posts, and official website domains. Beyond these signifiers, most of our interviewees do their own research by Googling or talking things over with friends and family, while others simply ignore questionable posts.

Contextual Inquiry

From the research gathered through our contextual inquiry, we discovered and connected a variety of trends, as seen in our mind map. Misinformation is prevalent on all social media platforms and especially rampant on those with an engagement-based model. People prioritize posts that are engaging or eye-catching, so the most popular posts may not contain correct information, since authors can sensationalize their narratives. Although most platforms lack a reliable system to prevent the spread of misinformation in these posts, our research found that some have begun taking steps in this direction in response to public backlash. Social media conglomerates like Facebook, Instagram, and Twitter have already begun to employ tactics such as fact-checking from third-party sources to combat the spread of misinformation. One remaining problem, however, is figuring out how to get people to trust these fact-checking sources.

mind map research.png

We created a Figma workspace to organize all the data from our research, interviews, and mind map into important themes and categories. In the process, we found it difficult to categorize much of the information cleanly, but doing so pushed us to expand on and connect different aspects of the problem and gain in-depth insight into misinformation and how we might solve it.

affinity.png

Models

We also created an identity model, a sequence model, a relationship model, and a day-in-the-life model. The identity model helps us account for the different scenarios and types of people in our primary stakeholder group when designing our final solution. In the sequence model, we focused on how a typical social media user reposts content, since we identified reposting as a major driver of misinformation; this is also useful because we can pinpoint areas in the process that our design should improve. The relationship model shows how people communicate on social media and with whom, which lets us identify where misinformation spreads from and to. Lastly, the day-in-the-life model shows one way a potential stakeholder uses social media, helping us understand what we need to design for.

identity.png
Day in the Life Model (Kyle).png

Personas

We also created personas for a variety of stakeholders including a news reporter, content creator, software engineer, college student, and app developer. Pictured here are the ones for a content creator and an app developer.

Ideation

We decided to focus on Instagram because it has already taken steps toward fighting misinformation with its COVID-19 flags. We wanted to build on that work and explore other ways to fight misinformation within the Instagram platform.

We created storyboards to envision real-life scenarios that our design would address.

story1.png
story2.png

Prototyping

Sketches

sketch11.png

Iteration

Some of the changes our team made involved the functionality and placement of our credibility rating. We originally researched Instagram’s misinformation flags and imagined we could expand the breadth of what those flags covered. However, our contextual inquiry revealed that one major problem we had to address was getting users to trust the flags presented to them. That is why we iterated away from our initial idea and pursued the concept of a credibility icon and rating system instead. Because these are features Instagram has never used before, we had some flexibility in what our design would display; even so, we wanted to maintain the look and feel of Instagram while incorporating our features.

Our Wireframe

wire.png

Digital prototype

official.png

User Testing

We conducted a total of 7 user tests; some of the data we collected are compiled below. Two users wanted more clarity that the credibility number is out of 10, and one user wanted more clarity on how the credibility number is calculated overall. We might want to move the credibility number to either the first or last position in the account header, and we could also add more than one source of information to the information flags. All 7 users liked the colors: 6 mentioned that the color coding was beneficial, and 1 additional user said that the green stood out. Overall, the colors helped users quickly determine what was correct or incorrect.

Based on our user testing results, we decided to add information to our FAQ page for more clarity: a brief explanation of the scale (out of 10) and of how credibility icons are granted (to users with a credibility score of 8 or higher). We also added more than one fact checker to our information flag pop-ups to give users multiple sources of information, so they can easily find more than one article if they want to do their own research. Additionally, we moved the credibility number to before the number of posts on an account’s profile page. One of our users noted that its original location felt strange, since they were used to seeing three numbers in the header and the rating felt buried in the middle; moving the credibility number to the first position improves discoverability.

Final Design Solution

Our final design solution is a credibility rating system that includes a green credibility check mark, a credibility number on an account’s profile, a credible posts tab on an account’s profile, and new color-coded misinformation flags on individual posts. Alongside the color coding, we also added appropriate icons for users who may have difficulty distinguishing colors. Normal social media users can opt into this system, while accounts that want the “News Source” label on their profile are enrolled automatically. We made this choice to protect the privacy of those who do not want their posts screened, while still holding news sources accountable for the information they post.

ucsdnews.png

Credibility Check Mark

The green credibility check mark shown next to the username quickly indicates that the account is credible, similar to the existing blue verified checkmark. We hope this will be a recognizable signifier that lets users identify credible accounts while scrolling through their home feed. These checkmarks are only granted to accounts with a credibility number of 8 or higher.
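As a minimal sketch of this rule (the function name and the `None` convention for opted-out accounts are our own assumptions; the threshold of 8 comes from our design):

```python
CHECKMARK_THRESHOLD = 8  # from our design: ratings run from 0 to 10

def has_credibility_checkmark(score):
    """Return True if an account qualifies for the green credibility check mark.

    `score` is the account's credibility number, or None for accounts
    that have not opted into the rating system (no checkmark then).
    """
    return score is not None and score >= CHECKMARK_THRESHOLD
```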

check.png

Credibility Rating

We also placed the credibility rating next to the number of posts on an account’s profile. It is on a scale of one to ten, with ten meaning that all of the account’s posts are true. The number sits at the top of the user’s profile so it is easy for others to see.

ucsdnewsofficial.png

Credible Posts Tab

In the credible posts tab, we display all of the account’s posts with a green, yellow, or red overlay: green means the post is correct, yellow means it is partially correct, and red means it is wrong. At the top of the tab, we display the number of correct posts out of the total number of posts. Clicking the information icon opens a statistics page with more detail on the number of green, yellow, and red posts, followed by a Frequently Asked Questions section that clearly describes what our features are and how they work.
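The summary shown at the top of the tab and on the statistics page could be computed roughly like this (the `Post` record and its `label` field are our own illustrative assumptions, not Instagram data structures):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    # Hypothetical post record; "label" is the fact-check verdict.
    label: str  # "green" (correct), "yellow" (partially correct), "red" (wrong)

def tab_summary(posts):
    """Return the counts shown on the credible posts tab and its statistics page."""
    counts = Counter(p.label for p in posts)
    return {
        "correct": counts["green"],
        "partial": counts["yellow"],
        "wrong": counts["red"],
        "total": len(posts),
    }

# An account with two correct posts, one partially correct, one wrong:
posts = [Post("green"), Post("green"), Post("yellow"), Post("red")]
summary = tab_summary(posts)  # the tab header would read "2 / 4 correct"
```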

newsofficial.png

Misinformation Flags

We also expanded the misinformation flags under posts, which tell the user whether the information is true or false using the same color coding as before. These flags lead to pop-ups that describe which organizations did the fact checking, what their conclusion was, and a link to the site so the user can decide whether to do more research themselves.

tree.png
correctinfo.png

Conclusion

Originally, our idea was to focus solely on the misinformation flags. However, we decided to create a system where users have a clear overview of all flagged posts. We wanted users to save time rather than fact-check everything themselves, so users can click on a flag to see the original source. We then wanted to separate accounts that are credible from ones that are not: some of us envisioned a rating, while others came up with a verification check mark. We decided to use both within the same system to give users more signifiers and let them adapt to the new features however they see fit. We also wanted to be transparent about how ratings are given, so we aimed to be as clear as possible, giving users relevant information and showing them where it comes from. For example, we did not simply give each account a credibility rating; we explain what the rating is, how it is granted, how it is calculated, and provide statistics so users can see the numbers for themselves. Because these are new features, we also included a FAQ that opens when people click on the ratings or check marks so they know what they mean. This addresses our design problem because we now have clearer signifiers for users to spot posts that may contain misinformation, ratings and check marks that show the credibility of an account, and pop-ups that let users get more information quickly and efficiently.

Possible Future Work

The features we implemented are only some basic designs. One of our ideas was to add filters to the search bar, so users could filter their searches by rating: choosing a number from 1 to 10 in a drop-down menu, e.g. choosing 7 would show only accounts with a credibility score of at least 7. We also wanted to let users filter by verification so they only see posts from verified credible accounts. We do not yet have an actual algorithm that assigns an account a rating. With more time, we would need to find a formula that combines the green, yellow, and red posts into a composite credibility score. Ideally, false posts would lower the score more than correct posts raise it: because of the dangers of spreading misinformation, an incorrect post should weigh heavily against an account’s rating. We could also add more FAQ entries, since we want users to clearly understand the new features, and we want the specific formula to appear in the FAQ so users know where the rating number comes from.
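One possible shape for such a formula, purely as an illustration (the weights here are invented for the sketch, not design decisions), weights a red post several times more heavily than a green post, so one false post drags the score down more than one true post lifts it. The same sketch includes the search filter idea, with the account record structure also assumed:

```python
def credibility_score(green, yellow, red,
                      w_green=1.0, w_yellow=0.5, w_red=3.0):
    """Composite 0-10 credibility score from green/yellow/red post counts.

    Red (false) posts are weighted w_red = 3x a green (true) post, so a
    single incorrect post is very punishing. All weights are illustrative.
    Returns None for accounts with no fact-checked posts yet.
    """
    total = green + yellow + red
    if total == 0:
        return None
    earned = w_green * green + w_yellow * yellow  # credit for accurate posts
    penalty = w_red * red                         # heavier penalty for false posts
    raw = (earned - penalty) / (w_green * total)  # normalize by best possible score
    return round(max(0.0, min(1.0, raw)) * 10, 1)

def filter_by_rating(accounts, minimum):
    """Search-filter sketch: keep accounts rated at least `minimum` (1-10).

    Each account is a hypothetical dict with a "posts" tuple of
    (green, yellow, red) counts.
    """
    return [a for a in accounts
            if (s := credibility_score(*a["posts"])) is not None and s >= minimum]

# An account with 9 true posts and 1 false post:
credibility_score(9, 0, 1)  # (9*1.0 - 1*3.0) / 10 = 0.6 -> rating 6.0
```

Note how the asymmetric weighting plays out: one false post among ten costs this account four full points, matching the goal that an incorrect post should be very punishing to the rating.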
