We are online all day, every day, for every aspect of our lives and businesses. We want a safe, trustworthy space to share information without the fear of getting scammed or hacked. However, there isn’t a silver bullet or quick fix for the safety issue. How does a company address the real-life problem of internet safety and trust? LinkedIn Director of Product Management Shreyas Nangalia shares how safety PMs balance challenges, solutions, and customers to make secure products.
On classic challenges with online safety
The internet has matured, but the same old scams still exist. Shreyas explains how these challenges have stayed familiar yet grown more complex as more users depend on the internet for their daily tasks.
“These days we have internet access everywhere, and people use online products pretty much to the point that they’ve become part of their daily lives. As most of us who are online use these products, we constantly hear stories in the news about hackers, data breaches, stolen identities and passwords, or content that turned out to be misinformation. The work involved in trust and safety has gotten so much more important now. Even going back 10 years, you would hear in passing about how someone got scammed, whether it was an email link they clicked that promised them a huge chunk of money left by a Nigerian prince, or Western-Union-money-scam kinds of messages that people would receive and fall prey to. Those types of issues have destroyed people’s lives in a lot of different ways. Again, this was going back several years, when the internet was still in its infancy.
Fast forward to today and these problems are omnipresent; you hear about them so much more often now. That’s primarily because we have so many more users online, doing everything from buying groceries to spending time on social media. To pinpoint some of the classic problems: money scams and phishing continue to be problems that we, as an industry, are still coping with. We have made some good strides, but as is the case with such problems, it’s an ever-evolving space where the good and bad sides are constantly trying to outsmart each other. With millions of people coming online, member education and building the right products that proactively keep members safe have become ever more important. So I would say money scams, phishing, and inauthentic entities, whether fake profiles or fake content, are pretty high up there and still need to be managed and solved.”
On many solutions to tackling online safety problems
The solutions to online safety and security problems are evolving fast to keep up with scammers and bad actors. However, things can still be missed. This is where companies can tap their best resource: their customers. Shreyas explains how to do just that.
“When we think about the trust and safety area in general, there is no one solution, no one set of answers, that can solve all of these problems, especially because it’s such an adversarial space. You build something, you do something, and then you see bad actors and bad behavior evolving very, very quickly, just as you would expect. The way we generally tackle any problem I mentioned in my previous answer has multiple pieces, one of them being building technology that can solve these problems proactively and at scale. That by itself is a big area of focus and investment for any company.
Talking specifically about LinkedIn, where I work, we have pretty big research and development teams that are constantly looking at ways to build defenses that scale, and that proactively moderate content as it comes in or before it gets posted. We’ve built a whole bunch of filters that look for content that could be harassing or inflammatory in nature and, in some cases, potentially misinformation. As you would know, this is not a silver bullet; it’s a hard problem, and an ever-evolving one.
A big part of our strategy is also to listen to members, to listen to our users when they encounter some of this content online. As part of that, we make sure there is a very robust, simple, and discoverable way for members to report things to us and provide feedback when we do miss something. This applies to the second pillar, inauthentic profiles, as well. So the general structure we use is: try to build things that proactively prevent some of these issues, but in cases where we can’t, make sure members have a very easy way to tell us when things go wrong, and use that feedback to improve.
So from a technology perspective, that’s the general framework and mindset. Companies that are not as tech-focused, or that may not have teams building this in-house, take a similar approach: the technology probably isn’t built by them, but they buy it from vendors who help them in the same way.”
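The two pillars Shreyas describes, proactive filtering of content before it goes live plus an easy member-report path that feeds back into the system, can be sketched in a few lines. This is a minimal illustrative sketch, not LinkedIn’s actual system: the classifier, threshold, and function names here are all hypothetical stand-ins.

```python
# Sketch of a two-pillar moderation loop (all names/thresholds hypothetical):
# Pillar 1: score content proactively, before it is visible.
# Pillar 2: collect member reports on anything the filters miss.

HOLD_THRESHOLD = 0.9  # hypothetical score above which content is held for review

def risk_score(text: str) -> float:
    """Stand-in for a trained classifier; here, a toy keyword heuristic."""
    flagged_terms = {"wire money", "prize", "verify your password"}
    hits = sum(term in text.lower() for term in flagged_terms)
    # Any hit pushes the score high; multiple hits saturate at 1.0.
    return min(1.0, 0.5 * hits + 0.45 * (hits > 0))

def moderate(post: str) -> str:
    """Proactive pillar: evaluate content as it comes in."""
    if risk_score(post) >= HOLD_THRESHOLD:
        return "held_for_review"
    return "published"

# Reactive pillar: a simple, discoverable report path whose output
# becomes feedback (e.g. training data) for improving the filters.
member_reports: list[dict] = []

def report_content(post_id: str, reason: str) -> None:
    member_reports.append({"post_id": post_id, "reason": reason})
```

In a real pipeline the heuristic would be a trained model and the report queue would feed human review and retraining, but the shape of the loop is the same: filter first, listen to members second, improve the filter with what they tell you.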
On balancing safety without impacting major company metrics
Safety and security are the primary focus for trust and safety PMs, but they matter to all product managers and leads. However, making a product more secure can affect the metrics by which a company measures success. Shreyas shares where teams can collaborate to meet shared goals.
“Every security or trust and safety professional encounters this almost daily. When we build products and technology meant to prevent bad behavior and bad actors, given that no solution is perfect, you always end up with some false positives, some good behavior or good engagement getting impacted. It’s always a challenge to strike a healthy balance between what you’re trying to prevent and how it impacts good member behavior, which can affect a company’s top-line metrics or core engagement and revenue metrics.
One thing is that we deeply care about this trade-off. We focus on having very robust metrics and measurements in place, both for the bad behavior we are trying to stop and for how whatever we are building and ramping impacts it. Alongside that, we keep clearly aligned guardrails and shared goals with the teams that drive revenue and engagement metrics, to ensure that when those metrics do get impacted, the impact stays within what’s acceptable to the company. There is a lot of pre-work before you launch or ramp a product in this area: whether you’re building a model or a set of rules, you run it in a passive, offline manner to forecast what the potential impact could be. Then you work with the appropriate teams within your company to agree on the right trade-off.

There are times when you may not come to an agreement, in which case you have to assess what’s the right thing to do for our members. That’s at least the approach we use here at LinkedIn, where we collaborate closely with these teams, who also understand the work we’re trying to do, and we share goals and guardrails in our measurements and metrics. We keep a very close eye on how we can reduce friction for real people. Ideally, that number should be zero, but no system is perfect. When we do impact good users, we have to ensure the experiences we build are not overly stringent and the friction is not too high.”
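The pre-launch step Shreyas describes, running a candidate model or rule "in some passive offline manner" to forecast impact before ramping, is often called shadow-mode evaluation. Below is a hedged sketch under assumed names and thresholds (the 1% guardrail, field names, and functions are all illustrative, not LinkedIn’s actual values): tally what the rule *would* do on historical traffic, then check its false-positive friction on good members against a guardrail agreed with the engagement and revenue teams.

```python
# Shadow-mode evaluation sketch: apply a candidate rule to historical
# events without enforcing it, then compare catch rate on known-bad
# accounts against friction added for good members. All data, field
# names, and thresholds here are made up for illustration.

def shadow_evaluate(events, rule):
    """Tally what `rule` would do, without taking any enforcement action."""
    caught_bad = flagged_good = total_bad = total_good = 0
    for event in events:
        if event["is_bad"]:
            total_bad += 1
            caught_bad += rule(event)
        else:
            total_good += 1
            flagged_good += rule(event)
    return {
        "recall": caught_bad / max(total_bad, 1),
        "false_positive_rate": flagged_good / max(total_good, 1),
    }

# Hypothetical shared guardrail: at most 1% of good members may see friction.
FPR_GUARDRAIL = 0.01

def ok_to_ramp(metrics) -> bool:
    return metrics["false_positive_rate"] <= FPR_GUARDRAIL
```

A run like the one below shows why the guardrail matters: a rule can catch every bad actor and still be unshippable because it adds too much friction for real people, which is exactly the trade-off the teams then negotiate.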