
    Yinqi’s Blog 1

    The Impact of Social Media on Happiness: A Blessing or a Curse?

    Introduction: A Love-Hate Relationship

    Let’s face it—most of us check Instagram, Snapchat, or TikTok before we even get out of bed. Social media has become a constant part of daily life, especially for students. But while it helps us stay connected, inspired, and entertained, it also raises an important question: is it actually good for our well-being?

    This blog post explores how social media affects user happiness, mental health, and self-image, particularly for young adults. It looks at how platforms are designed, their business models, and the types of content they promote—all of which play a huge role in shaping how we feel.

    The Design of Platforms: Endless Scrolling, Endless Comparison

    Many social media apps are designed to keep users hooked. Features like infinite scrolling and push notifications encourage constant engagement. According to Alter (2018), platforms use persuasive design similar to slot machines—sometimes you see something great, sometimes you don’t, and that randomness keeps you coming back for more.
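    To see why that randomness is so sticky, here is a tiny Python sketch of a variable-ratio reward schedule. The reward probability and session length are made up purely for illustration; real feeds are far more complex.

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

def scroll_feed(posts: int, reward_chance: float = 0.15) -> list:
    """Simulate a variable-ratio schedule: any given post *might*
    be rewarding, but there is no fixed pattern you can learn."""
    return [random.random() < reward_chance for _ in range(posts)]

session = scroll_feed(30)
hits = [i for i, rewarding in enumerate(session) if rewarding]
# The gaps between rewarding posts are irregular, so there is never
# a natural point where "the next one won't be worth it" -- which is
# exactly why you keep swiping.
print(f"{len(hits)} rewarding posts out of 30, at positions {hits}")
```

    This is the same principle as the slot machine: because rewards arrive unpredictably, there is no obvious stopping cue.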

    This might seem harmless, but it can lead to unhealthy habits. A study by Twenge et al. (2018) found that teenagers who spent more time on social media were more likely to report feeling depressed or lonely. It’s not just about how much time we spend, but also how we spend it.

    When platforms prioritise likes, comments, and shares, they often promote content that’s flashy, idealised, or even fake. This can lead to constant self-comparison. As Chou and Edge (2012) point out, seeing others’ carefully curated lives can make users feel like they’re missing out, even when they’re not.

    Image suggestion:
    A side-by-side of someone scrolling happily on Instagram and another person looking anxious while doing the same.
    Caption: Same scroll, different effects.

    Business Models: Your Attention = Their Profit

    The truth is, most social media platforms don’t make money from users—they make it from advertisers. And the more time you spend on the app, the more ads they can show you. This leads platforms to develop algorithms that prioritise content that keeps you engaged, not necessarily content that makes you feel good.

    This system can be harmful. The algorithm often promotes content that is extreme, emotional, or controversial—because that’s what gets the most attention. According to Haidt and Allen (2022), this contributes to polarisation and anxiety, especially among young users.

    It’s also worth mentioning that social media tends to amplify unrealistic beauty standards. Influencers often use filters or editing apps, yet their posts still get promoted. This can negatively impact users’ self-esteem and body image, especially among teenagers and young women (Fardouly et al., 2015).

    Image suggestion:
    An ad targeting a user based on their recent search, appearing between personal content.
    Caption: When your feed knows you better than your best friend.

    Content & Culture: Toxic Positivity and Cancel Culture

    Beyond the design and business model, the culture of social media can also affect mental well-being. For example, the pressure to be “happy” or “perfect” all the time can be overwhelming. Posts often show only the good moments, which creates a sense of toxic positivity.

    At the same time, social media can be brutal. Cancel culture and online shaming have made people afraid to speak their minds. According to Ng (2020), fear of being “called out” can lead to anxiety, especially for people who already feel marginalised.

    However, it’s not all bad. Platforms can also be places for support. Communities like #MentalHealthMatters or #BlackGirlMagic offer safe spaces for users to share struggles and feel seen.

    So… Is Social Media Bad for Us?

    It’s complicated. Social media can absolutely harm our well-being—especially when it leads to overuse, comparison, or anxiety. But it can also help. It allows us to connect with friends, express creativity, and find communities we wouldn’t otherwise access.

    What matters is how we use it. Setting screen time limits, curating your feed to follow positive accounts, and taking breaks can all help. As Orben (2020) argues, small changes in user habits and platform policies can significantly improve the overall impact.

    Image suggestion:
    A screenshot of a screen time notification: “You’ve spent 4 hours on TikTok today.”
    Caption: Time flies when your mental health doesn’t matter.

    Conclusion: Scroll Smarter

    Social media isn’t going anywhere. But we can make choices about how we interact with it. Once we become more aware of how platforms are designed, and why, they stop being invisible forces in our lives. We regain some control.

    Whether it boosts or breaks our well-being depends on how we engage, how much we consume, and what kind of content we let into our minds. It’s not about quitting altogether, but scrolling smarter.


    Alter, A. (2018). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin.

    Chou, H. T. G., & Edge, N. (2012). “They are happier and having better lives than I am”: The impact of using Facebook on perceptions of others’ lives. Cyberpsychology, Behavior, and Social Networking, 15(2), 117–121.

    Fardouly, J., Diedrichs, P. C., Vartanian, L. R., & Halliwell, E. (2015). Social comparisons on social media: The impact of Facebook on young women’s body image concerns and mood. Body Image, 13, 38–45.

    Haidt, J., & Allen, N. (2022). Social media and mental health: A review. Psychological Inquiry, 33(4), 265–282.

    Ng, E. (2020). No grand pronouncements here…: Reflections on cancel culture and digital media participation. Television & New Media, 21(6), 621–627.

    Orben, A. (2020). Teenagers, screens and social media: A narrative review of reviews and key studies. Social Psychiatry and Psychiatric Epidemiology, 55(4), 407–414.

    Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2018). Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time. Clinical Psychological Science, 6(1), 3–17.

    Blog 2


    How Ads Know Everything About You (and Why It Matters)

    Introduction

    Have you ever searched for “cheap headphones” and then suddenly seen headphone ads all over Instagram, YouTube, and even while reading the news? That’s not a coincidence—it’s data-driven advertising. These ads follow you around online because platforms collect your data, analyse it, and use it to sell you stuff. In this post, I’ll look at how this kind of advertising works, how it affects us, and why we should care.

    So, How Do They Know What I Like?

    Let’s start with how data-driven ads actually work. Platforms like Facebook, Google, and TikTok are constantly collecting little pieces of information about what you do online—what you search, what you watch, what you like, and even how long you pause on a video. Creepy, right?

    All of this data builds a digital version of “you,” which helps advertisers figure out what to show you. Facebook can track over a thousand data points about a person (Turow, 2017). That’s how it knows you’ve been looking at holiday flights and suddenly hits you with Ryanair deals.
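    As a rough illustration of how scattered signals turn into a targeting profile, here is a toy sketch. The event types, weights, and topics are all invented for the example; they are not Facebook’s actual system.

```python
from collections import Counter

# Hypothetical tracked events -- each is one small signal a
# platform might collect as you browse (names are illustrative).
events = [
    {"type": "search", "topic": "holiday flights"},
    {"type": "watch", "topic": "holiday flights", "seconds": 42},
    {"type": "like", "topic": "budget travel"},
    {"type": "search", "topic": "holiday flights"},
    {"type": "pause", "topic": "headphones", "seconds": 3},
]

def build_profile(events):
    """Aggregate raw signals into interest scores an ad system
    could match against advertiser targeting criteria."""
    weights = {"search": 3, "watch": 2, "like": 2, "pause": 1}
    scores = Counter()
    for e in events:
        scores[e["topic"]] += weights.get(e["type"], 1)
    return scores

profile = build_profile(events)
top_interest, _ = profile.most_common(1)[0]
print(profile)       # interest scores per topic
print(top_interest)  # the topic the ads will chase
```

    With even this handful of signals, “holiday flights” dominates the profile, which is exactly when the airline deals start appearing.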

    It’s Not All Bad: Ads Can Be Helpful

    Let’s be honest—sometimes these ads are actually useful. If you’re looking for a laptop and you get shown one that’s cheaper or has better specs, that’s helpful. Or maybe you’re into sustainable products, and a new eco-friendly brand pops up on your feed. Nice!

    Data-driven ads also help small businesses find the exact people who might love their stuff. Instead of throwing money at random billboards, they can speak directly to people who care. Even mental health campaigns have started using this to reach people who might need support (Edelman, 2022). So yes, it can be a win-win sometimes.

    But Then It Gets a Bit… Shady

    Now for the darker side. To make these personalised ads happen, platforms have to track you all the time. Most of us don’t even realise how much data we’re giving away—or who it’s being sold to. It’s like being watched, 24/7.

    Worse, some ads are designed to take advantage of your emotions. Say you’re a teen girl watching content about dieting—suddenly, weight loss supplements show up everywhere. That’s not just annoying. It’s manipulative. These platforms know exactly what to say to make you click.

    The Scariest Example: Cambridge Analytica

    If you’ve heard of Cambridge Analytica, you’ll know things can get really serious. The scandal broke in 2018: the company had used Facebook data, taken without people’s permission, to target voters in the Brexit referendum and the 2016 US election.

    They didn’t just guess what people wanted to hear—they used personality data to send ads that would emotionally push people in a certain direction (Cadwalladr, 2017). This wasn’t selling shoes. It was shaping democracy using targeted fear and manipulation.

    Data Can Be Biased, Too

    Here’s another problem: these advertising systems learn from the past. That means if past behaviour is biased, future ads will be biased too.

    For example, research found that job ads for high-paying roles were shown more to men, while lower-paid or cleaning jobs were shown more to women (Ali et al., 2019). That’s messed up. It means even online, people can be shut out of opportunities because of who they are.

    Some users in poorer areas also see fewer financial or housing ads—just because of their postcode. This is called “digital redlining,” and it’s a quiet but serious form of discrimination.

    Are People Doing Anything About This?

    The good news is, governments and tech companies are finally doing something. In Europe, the GDPR law gives people more rights over their data. The UK’s Online Safety Bill is also trying to limit harmful targeting, especially for young people.

    Platforms like Instagram let you check why you’re seeing certain ads, and you can turn off tracking (though it’s often hidden deep in the settings). Some experts say we need algorithm audits—basically, checks to make sure these systems aren’t being unfair or sketchy (Pasquale, 2015).

    What Can You Do?

    You don’t have to be a tech genius to protect yourself. Here are a few simple things:
    • Turn off personalised ads in your settings
    • Clear your browser cookies often
    • Use ad blockers or browsers like DuckDuckGo
    • Always check “Why am I seeing this ad?”

    And most importantly—be curious. If something feels targeted or manipulative, it probably is. Ask questions. Understand the system. Because if we don’t, we’ll just keep getting pulled along by it.

    Conclusion

    So, is data-driven advertising all bad? Not really. It can help us find useful stuff, support small businesses, and even promote mental health. But it also collects a scary amount of data, invades our privacy, and sometimes crosses the line into manipulation.

    What we need is more transparency, better rules, and smarter users. The future of advertising doesn’t just affect what we buy—it shapes how we think and live. So let’s start paying attention.

    • Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian.
    • Cadwalladr, C. (2017). The great British Brexit robbery: How our democracy was hijacked. The Guardian.
    • Edelman, B. (2022). How targeted ads can support mental health (if done right). Harvard Business Review.
    • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
    • Turow, J. (2017). The aisles have eyes: How retailers track your shopping, strip your privacy, and define your power. Yale University Press.

    Blog 3

    How Digital Media Empowers Social Movements

    Prompt

    Imagine you are an activist for a social movement. Describe how you would use digital media to promote your cause, backing up your choices with information from the required readings, the suggested readings, and high-quality sources you find on your own.

    Introduction

    In today’s hyper-connected world, social movements are no longer confined to the streets or flyers pinned on noticeboards. As an activist advocating for climate justice, I believe digital media is one of the most powerful tools to drive meaningful change. Through social platforms, videos, blogs, and hashtag challenges, one voice can quickly grow into a global outcry. This blog explores how I would use digital media to raise awareness, mobilise support, and push for policy reform.

    Using Social Media to Mobilise

    Social media platforms—especially Instagram, TikTok, and X (formerly Twitter)—have played a key role in recent social movements. As Treré and Mattoni (2016) argue, digital platforms are not just tools for spreading messages; they also generate emotional engagement and digital participation.

    Take TikTok, for example. Short videos showing how extreme weather is affecting local communities can go viral when paired with music and captions that evoke empathy. I would create a set of unified templates for volunteers to share “before and after” climate impact stories, using hashtags like #ClimateActionNow. Hashtags help organise content, build momentum, and create online visibility (Papacharissi, 2015).

    Telling Stories Through Visuals

    People respond more deeply to personal stories than to statistics. As Castells (2015) explains in Networks of Outrage and Hope, visual storytelling helps trigger identification and solidarity. I would feature individuals from climate-affected areas telling their stories through Instagram posts and short videos. These posts would be emotionally resonant, honest, and geographically grounded to make the issues feel real and close to home.

    During COP26, for example, many activists posted 30-second “Why I’m Marching” clips that helped bridge the online-offline divide and drive turnout.

    Creating Digital Campaigns and Petitions

    To turn attention into action, I’d use online petitions and digital challenges. Platforms like Change.org are effective for targeting specific policy goals. For example, I might launch a petition demanding that local councils commit to net-zero emissions by 2030. Links could be shared via Instagram bios, X profiles, or pinned comments under TikTok videos.

    I’d also launch a “Green Check-In Challenge,” encouraging people to share everyday eco-friendly habits—like riding a bike, using a reusable bottle, or eating vegetarian meals—tagging posts with #GreenEveryDay. These kinds of user-generated content campaigns increase visibility and engagement, particularly when boosted by algorithms (Highfield, 2017).

    Navigating Challenges: Algorithms and Burnout

    Using digital media isn’t without its difficulties. On the one hand, social media algorithms favour emotionally intense content, which can lead to polarisation. On the other hand, constant posting and engagement can cause digital burnout for activists.

    As Poell and van Dijck (2018) point out, the “logic of engagement” on social platforms encourages simplification and performance, which may undermine the depth of a movement’s message. As an organiser, I’d need to find a balance between reach and accuracy, and build a volunteer network to rotate social media duties and prevent fatigue.

    Measuring Impact and Building Community

    It’s important to measure what’s working. Metrics like engagement rates, petition signatures, and offline event attendance can help refine the strategy. For example, if personal stories get more interaction than infographics, I’d prioritise those in future campaigns.
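    A minimal sketch of how that comparison might be made, with invented numbers for two content formats:

```python
# Hypothetical weekly figures for two content formats.
posts = [
    {"format": "personal story", "reach": 4200, "interactions": 610},
    {"format": "personal story", "reach": 3800, "interactions": 540},
    {"format": "infographic",    "reach": 5100, "interactions": 230},
    {"format": "infographic",    "reach": 4600, "interactions": 190},
]

def engagement_rate(format_name: str) -> float:
    """Total interactions divided by total reach for one format."""
    subset = [p for p in posts if p["format"] == format_name]
    return (sum(p["interactions"] for p in subset)
            / sum(p["reach"] for p in subset))

story_rate = engagement_rate("personal story")
info_rate = engagement_rate("infographic")
best = "personal story" if story_rate > info_rate else "infographic"
print(f"stories: {story_rate:.1%}, infographics: {info_rate:.1%} "
      f"-> prioritise {best}")
```

    The same comparison works for any pair of metrics, such as petition signatures per post or event sign-ups per share.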

    Digital platforms are also spaces to build community. I would create a private Facebook group or Discord server for volunteers to plan, chat, and coordinate actions. These digital “backstages” allow activists to foster solidarity and stay motivated beyond the performative aspects of public-facing content.

    Conclusion

    In summary, digital media is more than a megaphone—it’s a bridge that links people, emotions, and action. In the case of climate justice, tools like short videos, visual storytelling, online petitions, and algorithm-aware content strategies allow activists to educate, inspire, and mobilise. As a digital-age activist, I am not just a poster—I am a storyteller, a connector, and a movement builder.


    Castells, M. (2015). Networks of outrage and hope: Social movements in the Internet age. John Wiley & Sons.

    Highfield, T. (2017). Social media and everyday politics. John Wiley & Sons.

    Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. Oxford University Press.

    Poell, T., & Van Dijck, J. (2018). Social media and new protest movements. In The SAGE handbook of social media (pp. 546–561).

    Treré, E., & Mattoni, A. (2016). Media ecologies and protest movements: Main perspectives and key lessons. Information, Communication & Society, 19(3), 290–306.

    Is Social Media Making Us Sick?

    Last month, a friend told me she deleted Instagram after realizing she’d spent four hours scrolling through strangers’ vacation photos instead of studying for her finals. “I just wanted a quick break,” she said, “but somehow the afternoon disappeared.” The debate about social media’s impact often presents it as balanced, with benefits and drawbacks. Yes, social media offers real benefits. It connects distant friends, helps activists organize, and provides communities for isolated people. During the pandemic, these platforms became lifelines for maintaining relationships.

    But here’s the thing: when you look closely at how these platforms actually work, their design, business models, and the behaviors they encourage, the harm outweighs the good. Social media platforms are carefully engineered systems designed to exploit our psychology for profit, and this comes at a real cost to our well-being.

    The Architecture of Addiction

    Let’s start with the most basic feature of modern social media: the infinite scroll. This isn’t just a convenient design choice. It’s a deliberate decision to remove natural stopping points that might prompt you to close the app. Research on platform design shows that features like pull-to-refresh, variable reward schedules for likes and comments, and push notifications all work together to create what psychologists recognize as addictive patterns.

    Think about slot machines for a moment. They’re designed with variable reward schedules because that’s what keeps people pulling the lever. You never know when you’ll hit the jackpot, so you keep trying. Social media works exactly the same way. You never know when you’ll get that dopamine hit from a popular post or an interesting video, so you keep scrolling.

    The platforms also use sophisticated algorithms to learn exactly what keeps you engaged. They track everything: what you click, how long you watch, what makes you comment or share. Then they use this data to serve up an endless stream of content perfectly calibrated to keep you on the platform. It’s not about showing you what’s good for you or even what you really want to see. It’s about showing you whatever will keep you scrolling.

    Following the Money

    To understand why platforms are designed this way, you need to follow the money. Facebook’s parent company Meta makes well over $100 billion a year in advertising revenue. They don’t make money when you have meaningful connections with friends. They make money when you stay on the platform looking at ads.

    This creates a fundamental misalignment between platform interests and user well-being. The longer you scroll, the more data they collect, the more ads they show, the more money they make. Your anxiety, your FOMO (fear of missing out), your compulsive checking – these aren’t unfortunate side effects. They’re the business model.

    The platforms have become incredibly sophisticated at what researchers call “surveillance capitalism.” They’re not just tracking what you do on their apps. They’re building detailed psychological profiles to predict and influence your behavior. This data isn’t just used to show you ads for products. It’s used to shape your emotions, exploit your insecurities, and keep you coming back.

    The Content Problem

    Beyond the addictive design and exploitative business model, there’s the issue of what these platforms actually show us. The algorithms don’t optimize for truth, kindness, or well-being. They optimize for engagement, and unfortunately, the content that drives the most engagement is often the most inflammatory, divisive, or anxiety-inducing.

    Studies consistently show that false information spreads faster and wider than true information on social media. Outrage drives more clicks than nuance. Fear keeps people scrolling longer than joy. The platforms know this, and their algorithms amplify it.

    This creates what researchers call “toxic content spirals.” The more you engage with negative content, the more the algorithm shows you. Before long, your feed becomes a stream of arguments, bad news, and content designed to make you angry or afraid. Even when you try to curate a positive feed, the algorithm fights against you, always pushing the content that will keep you engaged, regardless of how it makes you feel.
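    A toy simulation makes the spiral concrete. Assume negative posts simply get engaged with more often, and the ranker shifts the feed mix toward whatever earned engagement yesterday. Every number here is invented for illustration, not measured from any real platform.

```python
# Toy feedback loop: the feed's share of negative content grows
# because engaging with it teaches the ranker to show more of it.
negative_share = 0.20        # feed starts 20% negative
engagement_negative = 0.60   # chance you engage with a negative post
engagement_neutral = 0.30    # chance you engage with anything else
learning_rate = 0.5          # how strongly the ranker chases engagement

for day in range(10):
    # Expected engagement the ranker attributes to each content type
    pull_negative = negative_share * engagement_negative
    pull_neutral = (1 - negative_share) * engagement_neutral
    # Ranker nudges the mix toward whatever earned more engagement
    target = pull_negative / (pull_negative + pull_neutral)
    negative_share += learning_rate * (target - negative_share)

print(f"negative share after 10 days: {negative_share:.0%}")
```

    Even in this crude model, a feed that starts mostly neutral drifts heavily negative within days, because each day’s engagement feeds the next day’s ranking.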

    The Illusion of Control

    When faced with criticism about these harms, platforms offer “digital well-being” tools and tell users to practice better self-control. Instagram has time limits. TikTok offers break reminders. But this is like a casino offering pamphlets about responsible gambling while keeping the lights flashing and the drinks flowing.

    These tools represent what scholars call neoliberal responsibilization – making individuals responsible for managing problems created by systems designed to override their self-control. The message becomes: if social media is harming you, it’s your fault for not using our tools properly.

    But the tools themselves are often buried in settings menus, designed to be forgettable, and easy to override. They’re not serious attempts to protect users. They’re PR exercises designed to shift blame from the platforms to the people they’re harming.

    The Bottom Line

    When I think about my friend who lost an afternoon to Instagram, I don’t blame her for lacking self-control. I see someone caught in a trap designed by some of the smartest people in the world, backed by billions of dollars, and refined by constant experimentation on billions of users. Yes, social media can connect us. Yes, it can spread information and enable organizing. But these benefits could exist without the predatory design, the surveillance, the algorithmic manipulation, and the relentless push for engagement at any cost. The fact that platforms provide some value doesn’t excuse the harm they cause.

    The evidence is clear: social media platforms as they currently exist are net negative for human well-being. They’re designed to be addictive, funded by surveillance and manipulation, and filled with content that makes us anxious, angry, and isolated. Until we fundamentally restructure how these platforms work they will continue to extract far more from us than they give back.

    When Algorithms Know You Better Than You Know Yourself

    Last week I was scrolling through TikTok when I noticed something weird. Every third video was about journaling and mental health apps. It felt like the app had somehow figured out I was stressed about my upcoming exams. The thing is, I hadn’t searched for anything related to stress or mental health. I’d just been watching videos late at night, probably scrolling faster than usual, maybe rewatching certain clips. Somehow, TikTok knew I needed those wellness videos before I did.

    This is the strange new world we live in. Social media platforms collect so much information about us that they can predict what we want or need with scary accuracy. Sometimes this feels helpful. Other times it feels like we’re living in a science fiction movie where computers control our thoughts.

    Let me be clear about something upfront. Targeted advertising does have some benefits. If you’re a small business owner, you can find customers who actually want your products without spending a fortune. If you’re shopping for something specific, you might see ads for exactly what you need. Parents discover useful products for their kids. Travelers find good deals on hotels.

    But after looking at how these systems really work, I think the harm they cause is much bigger than any benefits. We’re trading our privacy and freedom for slightly more relevant ads, and that’s a terrible deal.

    The Surveillance Machine

    Most people have no idea how much data social media companies collect. It’s not just about what posts you like or what accounts you follow. These platforms watch everything you do, and I mean everything. Facebook collects thousands of different data points about each person using their platform. They track how long you look at each post, whether you start typing a comment and then delete it, how quickly you scroll past certain topics. They even track your mouse movements on the screen.

    TikTok goes even further with data collection. They monitor your typing rhythms, which can identify you personally. They collect voice data and facial recognition information. They know what other apps you have on your phone and which websites you visit outside their app.

    All this data gets fed into complex computer systems that create a profile of who you are. Not just your interests, but your personality, your weaknesses, your habits, and your likely future behavior. These profiles are so detailed that researchers have found they can predict major life events before they happen.

    Exploiting Psychological Vulnerabilities

    Here’s where things get really concerning. These platforms don’t just use data to show you ads for things you might want to buy. They use it to figure out when you’re most likely to make impulsive decisions. Facebook’s own research showed they could identify when teenagers feel insecure or worthless based on their activity patterns. Think about that for a second. A massive corporation knows when kids are feeling bad about themselves and can target them with specific advertisements during those vulnerable moments.

    TikTok has gotten really good at what I think of as emotional manipulation loops. They notice when someone watches videos about a particular insecurity, then show more content about that topic, which makes the person more anxious, and then serve ads for products that promise to fix the problem. Someone worried about their appearance gets more beauty content, becomes more self-conscious, then sees ads for cosmetic procedures. Studies have shown that platforms can detect depression, anxiety, and other mental health issues with frightening accuracy. Instead of using this information to help people, they use it to sell products.

    The Responsibility Trap

    What really bothers me is how these companies pretend they’re giving us control. They offer screen time limits and ad preferences settings, acting like we’re responsible for managing our own experience. But this is like a casino giving you a pamphlet about gambling addiction while they pump oxygen into the air to keep you awake and playing.

    The whole system is designed to override your self-control. Platforms use techniques from behavioral psychology to keep you hooked. Variable reward schedules make checking for likes as addictive as gambling. Infinite scroll removes natural stopping points. Push notifications create anxiety about missing out.

    Then when people develop problems with social media use, the companies say it’s the user’s fault for not having better habits. They’ve created a trap and then blame us for falling into it.

    Digital Autonomy Under Attack

    Going back to my TikTok experience, I keep wondering: did I actually want to learn about journaling, or did the algorithm decide I should want it? When platforms can predict and influence our behavior this accurately, how much of what we think we want is really our own choice?

    Experts who study surveillance capitalism warn that we’re losing our ability to make free choices. Our future behaviors become predictable and controllable. Companies don’t just sell us products anymore. They sell the ability to modify our behavior to whoever pays them.

    This isn’t just about seeing ads. It’s about losing control over our own minds. When algorithms know what we’ll want before we do, when they can trigger our insecurities and offer solutions, when they can make us believe our programmed desires are our own choices, we stop being fully human. We become predictable machines ourselves.

    The benefits of targeted advertising are real but small. The costs to our privacy, mental health, and freedom are enormous. We need to demand better from these platforms. We need laws that protect our data and our minds. Most importantly, we need to understand that when something is free, we’re the product being sold.

    How Hashtags Saved My Local Library

    My local library branch closed last year. The building where I spent countless afternoons as a teenager, discovering books that changed how I saw the world, now sits empty. The council said budget cuts made it necessary. What they didn’t say was that they’d barely consulted the community or considered how many people depended on that space for internet access, job applications, and human connection. This experience turned me into an unlikely activist. I’m now part of a growing movement to protect and expand public libraries as essential community infrastructure. Here’s how I’d use digital media to build this movement.

    Starting with Stories, Not Statistics

    While libraries face funding cuts across many countries, throwing numbers at people rarely inspires action. Instead, I’d begin by collecting and sharing personal stories through social media. The #MeToo movement succeeded partly because it made abstract issues concrete through individual experiences. Similarly, our campaign would invite people to share what libraries mean to them using #MyLibraryStory.

    These stories would highlight libraries as more than book repositories. They’re where elderly people combat isolation, where kids without home internet do homework, where job seekers print resumes, where new immigrants take language classes. By flooding social media with these diverse narratives, we’d reframe the conversation from “budget line items” to “community lifelines.”

    Building Networks, Not Just Audiences

    Traditional activism often relies on formal organizations with clear hierarchies. But research on connective action shows that digital movements can succeed through loose networks where participants personalize their engagement. Rather than creating one central “Save Libraries” organization, I’d encourage local groups to form their own campaigns while sharing resources and amplifying each other’s messages.

    We’d create shared Google Drives with template social media posts, fact sheets, and graphics that local groups could adapt. A simple website would map all participating groups, making it easy for people to find and join efforts in their area. This distributed approach makes the movement resilient. If one group faces burnout or opposition, others continue the work.

    From Awareness to Action

    Digital activism often faces criticism for encouraging “slacktivism,” where people feel they’ve contributed by merely liking or sharing posts. To avoid this trap, every piece of content would include specific actions supporters could take. This might include signing petitions, attending council meetings, volunteering at library events, or donating to library support organizations.

    The Women’s March demonstrated how Facebook events could coordinate massive offline action. We’d use similar tactics, creating Facebook events for library read-ins, community meetings, and peaceful protests. The digital tools serve as organizing infrastructure, not endpoints.

    We’d also use digital platforms for coordinating practical support. WhatsApp groups could organize carpools to council meetings. Slack channels could coordinate volunteer schedules. Google Forms could match people’s skills with movement needs. The goal is making real-world participation as frictionless as possible.
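    The skill-matching idea above could be as simple as a script run over a form export. Here is a minimal Python sketch of that matching step; the needs, names, and field layout are all hypothetical, not the campaign's real data.

    ```python
    # Minimal sketch: match volunteer skills (e.g. from a Google Forms
    # CSV export) to the movement's current needs.
    # All field names and entries below are hypothetical.

    NEEDS = {
        "graphic design": ["protest posters", "social media graphics"],
        "driving": ["carpool to council meetings"],
        "writing": ["press releases", "template posts"],
    }

    def match_volunteers(volunteers):
        """Return a mapping of each need to the volunteers who can help."""
        matches = {need: [] for need in NEEDS}
        for person in volunteers:
            for skill in person["skills"]:
                if skill in matches:
                    matches[skill].append(person["name"])
        return matches

    volunteers = [
        {"name": "Asha", "skills": ["graphic design", "writing"]},
        {"name": "Ben", "skills": ["driving"]},
    ]
    print(match_volunteers(volunteers))
    # {'graphic design': ['Asha'], 'driving': ['Ben'], 'writing': ['Asha']}
    ```

    In practice a shared spreadsheet does the same job; the point is that the matching logic is trivial, so the tooling should never be the bottleneck.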

    Creating Content That Spreads

    Getting people to share your message online is tricky. After months of trial and error, I learned what actually works. People share things that make them feel something. A dry post about library funding statistics? Nobody cares. But a photo of an 80-year-old woman teaching herself to use email at the library computer? That gets hundreds of shares.

    We started making simple videos with our phones. Nothing fancy. Just real people talking about real experiences. One video showed a teenager explaining how the library was the only quiet place to study because his house was too crowded. Another featured a single mom who used library computers to apply for jobs. These weren’t polished productions, but that made them feel genuine.

    Visual content reportedly earns far more engagement than text-only posts (one widely cited marketing figure puts it at 94% more views), so we focused on images and short clips. We’d photograph the empty shelves where books used to be. We’d film the locked doors of closed branches. Timing matters too. We learned to post when people were actually online. Early morning for parents. Lunch breaks for workers. Evenings for everyone else.
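    Those rough posting windows can even be encoded so anyone on the team knows when to queue a post. A tiny Python sketch, with hour ranges that are our informal guesses rather than platform analytics:

    ```python
    # Hypothetical posting windows per audience segment, based on the
    # rough patterns we noticed (hours in 24-hour local time).
    POSTING_WINDOWS = {
        "parents": range(6, 9),    # early morning
        "workers": range(12, 14),  # lunch break
        "general": range(18, 22),  # evenings
    }

    def audiences_online(hour):
        """Return which audience segments are likely online at a given hour."""
        return [who for who, window in POSTING_WINDOWS.items() if hour in window]

    print(audiences_online(7))   # ['parents']
    print(audiences_online(13))  # ['workers']
    print(audiences_online(19))  # ['general']
    ```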

    Measuring What Matters

    It’s tempting to get excited about viral posts and trending hashtags. But likes don’t reopen libraries. We needed to track what actually mattered: real change in our communities. We kept it simple. Every month, we’d count: How many people showed up to council meetings? How many signed our petitions? How many libraries stayed open? We used a basic spreadsheet that anyone could update. No fancy analytics tools needed.
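    The monthly tracking described above really can live in a plain CSV. A small Python sketch of the month-over-month comparison, using made-up numbers purely for illustration:

    ```python
    import csv
    import io

    # Minimal sketch of the monthly tracking sheet as CSV text.
    # Column names and figures are illustrative, not real campaign data.
    SHEET = """month,meeting_attendance,petition_signatures,branches_open
    2024-01,12,340,4
    2024-02,19,510,4
    2024-03,31,780,5
    """

    rows = list(csv.DictReader(io.StringIO(SHEET)))

    # Compare each month's petition signatures with the previous month's.
    for prev, cur in zip(rows, rows[1:]):
        change = int(cur["petition_signatures"]) - int(prev["petition_signatures"])
        print(f'{cur["month"]}: {change:+d} signatures vs previous month')
    # 2024-02: +170 signatures vs previous month
    # 2024-03: +270 signatures vs previous month
    ```

    Anything a spreadsheet formula can do, this can do; the simpler version wins because more volunteers can maintain it.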

    The Ice Bucket Challenge raised $115 million because it connected online activity to actual donations. We borrowed that idea. Every social media campaign linked to concrete asks: “Call your councillor today” or “Join us at Tuesday’s protest.” Then we’d follow up to see who actually did it.

    Local victories mattered most. When volunteers saved the mobile library service, we made sure everyone knew. These wins showed people their efforts weren’t wasted. Success bred more success. We also tracked our failures. Which tactics flopped? What messages fell flat? Learning from mistakes helped us improve.

    So, What Next?

    Building a movement online takes patience. There’s no magic formula. What worked for us might not work for you. But some basics apply everywhere. Start where you are. You don’t need thousands of followers or a fancy website. Our library campaign started with five people in a WhatsApp group. Small beginnings can lead to big changes.

    Focus on people, not platforms. Social media sites come and go. What lasts are the relationships you build. We made sure people exchanged phone numbers at events. We created email lists as backups. When one platform changed its algorithm, we weren’t starting from scratch.

    Keep it sustainable. Burnout kills movements faster than opposition does. We rotated responsibilities. Nobody had to post every day or attend every meeting. When someone needed a break, others stepped up. This isn’t a sprint. It’s a marathon.