A Video Conversation with Shrenik Jain, Founder of Beacon - Part III

12/23/16

Shrenik Jain

Click here for Part I | Part II

Transforming mental health care through an anonymous, scalable group therapy platform

Shrenik Jain is the founder of Beacon (formerly Grüp). Labeled “Most Disruptive Startup” by the American Psychiatric Association, Beacon is a mobile therapy software-as-a-service platform powered by natural language processing. Beacon seeks to fill the current gaps in mental health care by providing users with anonymous, persistent communities of like-minded individuals. The application gives users the opportunity to connect with others experiencing issues such as PTSD and depression without fear of stigma. Aside from his role at Beacon, Shrenik is also a trained first responder (EMT) who volunteers with the Montgomery County Volunteer Fire/Rescue Association.


Q. Can you walk us through how the natural language processing function in Beacon operates?

SHRENIK JAIN: With regards to our use of natural language processing, it’s a very interesting thing to consider. Now, the first thing is why we chose to adopt natural language processing at all. That’s because when we looked at the models for online support groups, we saw a lot of people resorting to online, anonymous social media like Reddit and Yik Yak, where they form these impromptu support groups for their issues just because care was so inaccessible.

We saw people on Reddit, you know, talking about their depression, talking about their relationship problems, and generally people were supportive. But we also saw on these platforms that occasionally there would be these trolls, as they say in internet jargon: people who would just maliciously come on and encourage negative behavior. So, someone would say, “I am feeling terrible. I think I’m going to end my life tonight.” And someone else would be like, “That’s a great idea. You should do it.” It was atrocious, and there was absolutely no way to moderate against that. Reddit’s solution would require at least five moderators to read through every single line of text on a 24/7 online forum.

We thought there had to be a better way, so what we looked into was natural language processing, which is a subset of machine learning and AI that deals with processing human language. For us, that meant developing an algorithm that could effectively deduce the sentiment behind what people were saying in a text message. This is a more complicated problem than you might initially think, because you can’t just search for keywords, which is what a lot of platforms do today. You can’t just search for “suicide” as a keyword, because sometimes people will say things like “I want to walk out into traffic.” That obviously shows suicidal ideation, but there is no keyword there.
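The blind spot Shrenik describes can be made concrete with a short sketch. This is hypothetical illustration code, not Beacon’s implementation: a literal keyword filter catches explicit phrases but misses the traffic example entirely.

```python
# Hypothetical keyword list; any real system would need far more coverage.
KEYWORDS = {"suicide", "kill myself", "end my life"}

def keyword_flag(message: str) -> bool:
    """Flag a message only if it contains a literal crisis keyword."""
    text = message.lower()
    return any(kw in text for kw in KEYWORDS)

print(keyword_flag("I think I'm going to end my life tonight"))  # True
print(keyword_flag("I want to walk out into traffic"))           # False: missed
```

The second message expresses the same ideation as the first, yet shares no keyword with it, which is exactly why Beacon moved to models that infer sentiment rather than match strings.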

To get around this, we had to design what they call an “ensemble” model: we combined a number of different state-of-the-art models and ran them on large volumes of data to develop an algorithm that can actually understand the underlying sentiment behind a message, even if there’s no keyword. It mimics the way people already talk.
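The “ensemble” idea can be sketched as a weighted combination of independent scorers. The scorer functions below are trivial stand-ins I’ve made up for illustration; the real ensemble members would be statistical models trained on large volumes of conversation data.

```python
from typing import Callable, List

# Stand-in scorers, each returning a risk score in [0, 1].
def scorer_a(msg: str) -> float:
    return 1.0 if "end my life" in msg.lower() else 0.0

def scorer_b(msg: str) -> float:
    return 1.0 if "walk out into traffic" in msg.lower() else 0.0

def ensemble_score(msg: str,
                   models: List[Callable[[str], float]],
                   weights: List[float]) -> float:
    """Weighted average of member scores; a moderator is alerted
    when the combined score crosses some threshold."""
    total = sum(w * m(msg) for m, w in zip(models, weights))
    return total / sum(weights)

models, weights = [scorer_a, scorer_b], [0.6, 0.4]
print(ensemble_score("I want to walk out into traffic", models, weights))  # 0.4
```

The design point is that no single member has to be right: a message one model misses can still be caught by another, and the weights can be re-fit as new data comes in.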

The second thing we had to do was understand context. For example, if someone says, “I want to kill myself,” and then someone else says, “that sounds cool to me,” you need to be able to flag that response. We had to develop algorithms that could parse whole blocks of conversation. What this means on a larger scale is that we developed a really robust set of tools that are not based on any keywords; they’re based on the data they’re fed. They’re continually improving, and they have the ability to attune themselves to specific populations.
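Context-aware flagging of the kind described above can be sketched like this. Again, the `risk_score` and `encourages` functions are hypothetical stand-ins for trained models; the point is that a reply is judged in light of the messages that came before it, not in isolation.

```python
def risk_score(msg: str) -> float:
    # Stand-in for a trained sentiment model.
    return 1.0 if "kill myself" in msg.lower() else 0.0

def encourages(msg: str) -> bool:
    # Stand-in for detecting an approving/encouraging reply.
    return any(p in msg.lower() for p in ("sounds cool", "you should do it"))

def flag_replies(conversation: list) -> list:
    """Return indices of replies that endorse an earlier high-risk message."""
    flagged = []
    for i, msg in enumerate(conversation):
        prior_risk = any(risk_score(m) >= 0.5 for m in conversation[:i])
        if prior_risk and encourages(msg):
            flagged.append(i)
    return flagged

convo = ["I want to kill myself", "that sounds cool to me"]
print(flag_replies(convo))  # [1]
```

“That sounds cool to me” is harmless on its own; it only becomes flaggable because of the message preceding it, which is why the system has to parse whole blocks of conversation.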

If we were, say, working with a group of veterans, it’s very important to realize that they have a unique way of speaking. “Caustic” would even be an understatement for their language. We need to be able to normalize that and understand that context and jargon, so we developed tools that could do that. The beautiful thing about these algorithms is that they get better the more they are run, and we developed a failsafe system where users still have the ability to manually flag something they think needs the attention of a moderator or therapist. So, if someone says “I want to go hang myself,” and by some chance that isn’t picked up by the algorithm, then someone else would just manually flag it when they saw it, and we would feed that data point right into the algorithm with an additional weight, so the algorithm would not make the same mistake twice. Once we finish our pilots, we’ll really have quite a robust technology tool, because it gets better the more it’s used.
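The failsafe feedback loop described here can be sketched as a training queue in which manually flagged messages carry extra weight. All names below are illustrative assumptions, not Beacon’s API; the weight value of 2.0 is likewise an arbitrary example.

```python
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: int      # 1 = needs moderator/therapist attention
    weight: float   # sample weight used when the model is retrained

training_data = []

def record_auto_detection(text: str, label: int) -> None:
    """Log a message the model scored on its own, at normal weight."""
    training_data.append(Example(text, label, weight=1.0))

def record_manual_flag(text: str) -> None:
    """A user-raised flag means the model missed something, so the example
    is fed back with extra weight so the same mistake isn't made twice."""
    training_data.append(Example(text, label=1, weight=2.0))

record_auto_detection("having a rough week", 0)
record_manual_flag("I want to go hang myself")
print(training_data[-1].weight)  # 2.0
```

Up-weighting corrections like this is a standard way to make a classifier converge faster on exactly the cases it previously got wrong, which matches the “gets better the more it’s used” claim.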

Q. What’s ahead for Beacon?

SHRENIK JAIN: The future is extremely exciting for us, because in the next three months we’ll be doing our pilots to validate our product. Then, we’re going to re-brand our name for each different vertical we target—whether it’s firefighters, police officers, veterans—but from there, because we’re developing a software platform, once it’s validated it’s tremendously easy to scale.

We will be looking to grow our team a little bit and raise a seed round. We’ve already applied for funding from TEDCO and we will be looking at government SBIR grants, but then we’ll just be pushing out our platform and scaling tremendously because there is really no dearth of places we can use our platform.

We’re going to start with police and fire departments and universities. Then, we’re going to go to healthcare systems and the federal government. Schools could also really use this platform, because kids are always on their phones, and most mental illnesses actually manifest between the ages of 10 and 15. You might have even seen that the Baltimore school system just applied for, I think, $2 million in federal grants to improve mental health care access in their schools.

Then, there is, of course, corrections officers and jails. There are people who are isolated—say, oil rig workers over long distances. Really, I don’t see a limit on the uses of what we’re building, and that’s super exciting to me. So, I expect in the next three months we will be validating our product in beta and then next year we will be looking to scale pretty quickly.

Connect with Shrenik on LinkedIn

Sponsored by:

ABOUT OFFIT KURMAN

Offit Kurman is one of the fastest-growing, full-service law firms in the Mid-Atlantic region. With over 120 attorneys offering a comprehensive range of services in virtually every legal category, the firm is well positioned to meet the needs of dynamic businesses and the people who own and operate them. Our eight offices serve individual and corporate clients in the Maryland, Delaware, New Jersey, and Northern Virginia markets, as well as the Washington DC, Baltimore, Philadelphia, and New York City metropolitan areas. At Offit Kurman, we are our clients’ most trusted legal advisors, professionals who help maximize and protect business value and personal wealth. In every interaction, we consistently maintain our clients’ confidence by remaining focused on furthering their objectives and achieving their goals in an efficient manner. Trust, knowledge, confidence—in a partner, that’s perfect.

You can connect with Offit Kurman via our Blog, Facebook, Twitter, Google+, YouTube, and LinkedIn pages. You can also sign up to receive Law Matters, Offit Kurman’s monthly newsletter covering a diverse selection of legal and corporate thought leadership content.

MARYLAND | PENNSYLVANIA | VIRGINIA | NEW JERSEY | NEW YORK | DELAWARE | WASHINGTON, DC

Edwin Warfield, CEO of citybizlist, conducts the CEO Interviews.

If you're interested in reaching CEOs, please contact edwin.warfield@citybuzz.co

Connect on LinkedIn
