A Student’s Perspective with Seth Rabinowitz
KIMBERLY NEVALA: Welcome, everyone. I'm Kimberly Nevala, and in this episode, we're pondering AI with Seth Rabinowitz. Seth is a student in the Master’s program for Data and Business Analytics at UNC Charlotte. Welcome to the show, Seth.
SETH RABINOWITZ: Thank you. Thank you for having me. It's a pleasure.
KIMBERLY NEVALA: The pleasure is all ours. Now you studied biology as an undergraduate and have now shifted your focus in your master's to data and business analytics. So what sparked your interest in the topic and caused you to change that focus?
SETH RABINOWITZ: Yeah, so I graduated with a biology degree undergrad, and I moved back to Charlotte. That's where I'm from. So I got a job as a salesman trying to figure out how I could work my way into the medical device sales industry. I quickly realized that I didn't love sales, but I did like the account management side of things at the freight forwarding company that I was working for.
And from there, I moved to a different company. Again, account management, freight forwarding, the whole thing, but there's so much information and data behind the scenes when it comes to transportation. And that part is what I fell in love with. I fell in love with the puzzle pieces and the mystery behind it, making everything come to the forefront for people who don't necessarily see the background information. Being that support character in a video game is pretty fun for me.
So I really-- that's where I fell in love with it, and I had the great opportunity to go to UNC Charlotte for data science and business analytics. The master's program lined up exactly with my intentions for my career and what I wanted to do, so it seemed like a great fit, and that's where I'm at now.
KIMBERLY NEVALA: That's awesome. And it's amazing that you fell into that analytic mindset so early. And by the way, we've had so many guests on the show that take the most circuitous paths to their current job, so I think this bodes very, very well for you moving forward.
Now in the Data Science and Business Analytics program over the last few years, which have been tumultuous and eventful by any standard, how has your perspective on AI changed, both during the course of your studies and as you've observed what's happening out in the wild today?
SETH RABINOWITZ: Yeah. It used to be I was very scared of it. And I think that it's because I didn't have all of the information. So a lot of people think that AI is this ultimate big, scary, going to take everything away from us, but my view has changed. I see it more now as a tool, similarly to a calculator or more so a computer. You can use a computer to hack into things and cause problems or you can use a computer to talk to people across the world and create a better worldview for yourself and gain an understanding. So it's all about how you use the tool.
So it's transitioned from scary to more neutral. I think now I'm more scared of the people who use it. I think it's very similar to a race car. A race car is something that can be very scary for the inexperienced, but when there's someone who really knows what they're doing driving the car, you can get a lot done. It's very productive. So that's how my view has changed. I see it more now as a tool and I'm less scared of it. Still wary, but less scared.
KIMBERLY NEVALA: And we'll talk about some of the reasons that you may still be wary, but I'm interested just anecdotally, when you look around at friends, or maybe it's your family, or just folks outside of the field, as it were, what are you observing about how they perceive AI and whether they're able to use it-- I don't want to say properly, but do they understand the ramifications and the boundaries around the tools that are in front of them today?
SETH RABINOWITZ: Yeah, I think that the people that I hang out with are very skeptical of it, similarly to how I was before I really understood it. I think that it's important for us as individuals who work with the data - within the data industry - to be a steward and teach the people so that they can be less scared of it and have a better understanding because I think people are just scared of what they don't understand.
When it comes to how it's being used, I think it needs to be seen as more of a purpose-built tool rather than this all-encompassing thing. I'm big on cars. I'm probably going to talk about cars a lot. It's probably going to be most of my analogies. But you wouldn't take a limo off-roading, right?
So there's certain things that these LLMs are purpose-built for and they excel at it, like sentiment analysis or code debugging. Certain things they're not good at, like telling you how many R's are in strawberry.
Like, there's a famous quote that AI is dumber than my cat. And I think the ramification is that we need to understand it's the little things that we can't let it take over for us. When I say that, I mean the thinking behind what we do. Very often you'll have people write emails and then put them into ChatGPT to have it spruce them up a little bit. I think once we blur that line into simply having AI write the email for us from scratch, that's when we start to lose the plot and we start to do a disservice to ourselves.
So anecdotally, I've heard stories of teachers. Some of my friends are teachers for lower grades. And the kids-- I mean, we were talking about it a little bit before this, but everything is getting shorter and shorter and shorter, and we're not able to have these long, drawn-out conversations. And I think AI is only perpetuating that within the school system. So you have children who are using LLMs like Grok, ChatGPT, Gemini, what have you as a replacement for their brain instead of critically thinking and then possibly using that to further what they already think. So I think that's what we need to focus on: making sure that we stick to the little things so that we build the fundamentals first before we start using AI, especially in schools.
KIMBERLY NEVALA: This is a great on-ramp to a discussion about how you and your peers use AI within your education today, and you've alluded there to potentially your own thinking about when am I going to use these tools and when am I not? So talk to us a little bit about how you see that.
SETH RABINOWITZ: Yeah. I have a friend who is vehemently against AI. I mean, she just won't use it. And she sent me an Instagram reel that compared using AI to a cigarette: if you're going to use it, you can't use it in the house. You have to take it outside. And I thought it was funny, but I think it had some underlying truth to it, in that you start to lose your soft skills when you rely too much on AI. So I want to use it as a tool, but not more so than I would my own brain.
So it's about being intentional in how you decide to use it. Thinking about the ramifications that using it could have on how you think and on the productivity you could have in the future. So if I don't know something, sometimes I'm not going to be able to rely on AI. If I'm in a meeting and someone asks me a question, and I let AI do all the thinking for me when I wrote a certain proposal, I'm not going to have the answers to the question; AI will. And then that trust in me to do my job is going to go downhill.
So I think there needs to be a fine line of, yes, we can use it to help with maybe creativity or debugging or looking at certain things that I might have missed when I wrote this proposal, but it should be an afterthought. It shouldn't be the first thing that you jump to. And I think when we lose sight of that, that's when it goes wrong.
KIMBERLY NEVALA: Yeah. And so are those, the things that you just mentioned, are those the types of usage that you see your peers going to AI for?
SETH RABINOWITZ: Yeah, unfortunately. I think that a lot of people are not intentional with it and don't ask, what could go wrong? And it starts with the very small things of just, hey, can you help me with a couple lines of code? I just don't want to write it. Instead of writing the code and then having it debug it for you, or look at certain syntax errors, or see if you could make it tighter. The thought isn't first coming from the human; it's coming from the AI. And that's where I see the problem.
I would like to say that it is not a lot of people within the data industry who view it like that, at least from a student's perspective. I think most of that thought comes from my friends and the people that aren't so involved with learning about AI and learning about data manipulation and how it can be used.
And I think that, like I said earlier, it's very important for us to educate the population on these certain topics because some people just don't find this interesting. It's important that we know the safety and the ramifications of what could happen. So I think it's important for us to be those educators as well.
KIMBERLY NEVALA: You and I were talking before, and I had asked a question if different cohorts of your peers actually rely on or use AI differently. And you had a really interesting observation.
SETH RABINOWITZ: Yeah, so I think that you see a huge divide when it comes to experience level.
The people that come fresh out of college into the master's program are completely native when it comes to LLMs, using AI, and implementing it into their systems. The part that's interesting is seeing the effectiveness of both sides.
So you have those people coming straight from college, and then you also have people that are VPs or directors in their industry who are coming back to school just to catch up a little bit on AI and figure out how they can implement it.
So it's interesting to see the fundamentals with the people who are experienced and have 20-plus years in the industry, and then those who have really less than three years working with data and seeing how they are intuitively implementing AI into certain situations.
We have this project now in one of my classes that's so interesting. It's like a free rein web app that we're allowed to create-- backend, frontend. And all we have to do is figure out how to leverage an LLM. That's it. The way that we do it is up to us. So me and my partner have decided to try and redefine how searches are made within a food-ordering service like Gopuff or something like that. So when you search "sweet" on Gopuff, you'll get keywords. Like, there's a barbecue shop near me, it would pull Sweet Lou's Barbecue. Shout-out Sweet Lou's, they're the greatest. Or Sweet Frog or something with the word "sweet." What we're trying to do is scrape Yelp reviews and Google reviews, throw that into a database, and then have the LLM leverage that database and those text reviews to give you a sentiment of what that person ordered, and if it was sweet, or if it was savory, was it enjoyable?
So I think the usage of AI and where it can be implemented is very interesting between those who know the fundamentals and those who grew up with AI and LLMs. It's a fun dynamic, too, because you'll have more experienced people going, that's just going to break. And it's up to us to be like, well, OK. Can we try it? So it's a fun balance.
KIMBERLY NEVALA: So you've got folks who are maybe more AI natives - I'll use that term somewhat loosely, and everyone hold the phone on submitting complaints because I admit that it's not a great term - who will sort of instinctively and naturally look to do that. And then folks maybe who that isn't the first place they go. But I also wonder if there's a difference in how folks and your fellow students think about gaining a level of confidence in your own judgment, in your own knowledge.
Most of us don't like to be wrong. We don't want to say something that looks abjectly stupid. But is there a difference that you notice in different groups when they're interacting with the system, the level of confidence in their own intuitions, in their own knowledge? And their ability to say, yeah, the system said this, but I really don't think that and here's why. Or the system gave me this answer but I don't believe that because of x, y, and z and to have some of those sometimes messy conversations?
SETH RABINOWITZ: Yeah, I think it boils down to knowledge. I think confidence comes from a good foundation.
I think you can also have confidence without having a good foundation, but you're just going to have to deal with the consequences and learn from that. So I think it's more person to person. But-- and this is going to be completely off-topic-- I think it's also the effect of social media.
I think that social media influences my generation to try and be perfect and have everything together at an early age, and it puts a lot of stress on people to make the right decision. And I think the right decision is just any decision forward, whether that's a, quote unquote, "failure" or a "success," because if you fail, it's just a learning opportunity. And you have to be critical of what did I do, where can I learn, how do I advance my knowledge so that I am more confident the next time this comes around?
I'm a snowboarder, so there's terrain that you just haven't explored yet, and you have to be confident in your skills to be able to explore certain things that you want to explore. So translating that to data is no different. You just have to have the fundamentals and mess up a couple times in order to build that confidence.
I think people are scared because this is such a smart industry. A lot of people are highly intelligent in this industry, and I think a lot of people are scared to look dumb. And I think that's honestly a hindrance because you don't know what you don't know. So sometimes you've got to look a little dumb, maybe. And, I mean, I'm fine with that. I fall on my face all the time, so it's not-- get up, dust it off and move on. Just keep going forward.
KIMBERLY NEVALA: Yeah, no, I think those are wise words that many of us who have been around for much, much longer have still yet to learn and hearken to, so I think that's great.
Now I want to take this from a slightly different view. How do you see the advent of AI as an educational tool? Some people might think it's a crutch; some people think it is the answer to all of the difficulties we have had in making education accessible, all of these things.
So as with anything, there's a broad spectrum of opinions there, but certainly it is here. As you have said, you and your peers do use it. Not just when you're being asked to build with it but also in conjunction with your learning. What impact do you think this has on teacher and student dynamics, and what do you think, for educators, is paramount? Where should their focus really be?
SETH RABINOWITZ: Yeah, I think we see a lot of AI slop. And I think it takes the personal aspect out of education. Going back to the whole email thing, if I were to email my professor and I used AI to write that email, I would lose certain soft skills. There are certain disservices you're doing to yourself because you want an easier, or quicker, or better answer.
And I think, going back to the whole confidence thing that you were saying, we have to build confidence in the fundamentals first and be intentional with our soft skills and how we learn them. I think the main piece is creating that personal connection, student to teacher. And when you use AI in those circumstances and those instances, it ruins that connection, and honestly, that mentorship that could happen because you and that person get along-- it's not you anymore, it's the AI.
So I think that what we need to focus on is those soft skills. Making sure we're intentional about how we use it within the classroom. I think it has its uses. I don't want to be the person that is just, AI's bad, AI's bad. It can be very, very helpful. I use it for studying all the time. It is a very harsh grader.
So there are ways to implement AI when you're studying, but it should be a tool to check what you're learning, not a tool to learn for you.
And it should be more supplementary than I think people are making it. People are relying on it too much. It's, what can I get when I'm on my own? It's 10:00 PM, I just saw this really cool article and I want to learn a little bit about it, and my professor's probably not going to answer an email. So I think that's where it can be used, in those sorts of instances.
KIMBERLY NEVALA: Yeah, absolutely. So it's quite clear, you see definitely the potential, and, as you've said, you also have some reservations about our ability to grow this effectively. And you've mentioned a few times, really becoming better stewards for ourselves and for folks at large.
So what do you think that we should be doing or actions we can be taking, either as individuals or collectively, to be better stewards of this tech? And what might that require of us?
SETH RABINOWITZ: Yeah, I think it's going to be uncomfortable for a lot of people. I think that it's going to happen in conversations more so than it will on a grand scale.
So the conversations that I have day to day with people that either don't understand it, or simply don't like it because they don't understand it, or really do understand it and still don't like it-- those are going to be difficult conversations to have, but I think those are the ones we need in order to make progress when it comes to literacy within the community outside of the data industry.
I think it's important that we know it's our role to do this because no one else will. You have to take it upon yourself. If you want someone to learn about your industry, you have to teach them, because people may be curious, but they're not going to dive into your industry as much as you will.
And there are certain nuances that you pick up when you're within industry that you're able to teach to other people. So I think it's just going to be these everyday interactions, trying to change certain mindsets, and that's-- the hard part is, it's going to be different for everybody. I might not like it as much as someone else does. So I think just having conversations-- and stuff like this as well, getting different perspectives is important.
KIMBERLY NEVALA: You've also said that everyday people need to be really comfortable doing more of their own research, by which I don't think you meant just asking the LLM, and also resisting dependency. When you said resisting dependency, what did that mean?
SETH RABINOWITZ: So when it comes to dependency, I think people are just going to rely solely on, oh, well, this thing knows everything, and it doesn't. Doing your own research is important, not only to check what it gives you, but also to be able to give it the right information to get the response that you are looking for. I think that's the line that switches for a lot of people.
I think another thing is that we shouldn't be so dependent on the idea that this thing knows everything. It doesn't. It's just probabilities and weights deciding what word to return next. It's like someone who is constantly trying to people-please, trying to say the right thing for you all the time. And when you get a lot of people listening to something that is just trying to tell them what they want to hear, you're not going to get a good result. It's not going to be what you actually want; it's going to be what it thinks you want.
KIMBERLY NEVALA: Yeah, that's right. Now, I know that you've taken a business leadership course as part of your course of studies and really took a lot away from that. So I'm interested in your perspective on the importance of ethics in business courses for data science and computer science majors, and whether we are integrating enough of those perspectives into these courses of study today.
SETH RABINOWITZ: Yeah. I cannot harp on it enough. I love talking about ethics and leadership, and I'm probably biased because I already love all of these things. So I think it should be implemented as a mandatory part of every program.
But I do see, just objectively, the good parts of learning about this. If you can understand a system-- and to make it technical, if you can understand the system behind an LLM-- you can manipulate it to do the function that you want it to do. And manipulate can be for good or for bad; it's just changing something within the LLM. If you can understand the business as a whole, and you can understand the goals and how everything moves within the company, you can better leverage what you need to do and your part in the business to further its mission.
So I think it's very important and very critical for these highly technically advanced people to understand the business side of it as well. Because that will only help them further their career and further their advancements for the mission of the business. Like I said, knowledge is power, so, I mean, the more you know.
KIMBERLY NEVALA: Knowledge is power, absolutely. Now I also found it really interesting that you've said your personal attitude towards data privacy has fundamentally changed, and perhaps in the opposite direction to your perspective on AI. So talk to me about how learning more about what's going on under the covers has changed your attitude towards data privacy in particular.
SETH RABINOWITZ: Yeah, I used to be super gung-ho about it, as in, you can just have my data; I think you know what's best for me. Having been behind the scenes a little bit and trying to learn about the ethics and the business acumen behind it, I think it's very important that we take that and completely flip it. [LAUGHTER]
I think we need very strict guidelines. I think data privacy is, above all else, the manipulation of the world around us. We are the data that we consume and the data that we put out. I mean, you hear about it, you have examples of it on your social media. You get ads for things that you talk about or things that you text between people. That's not coincidence. I mean, it could be, but it's probably not.
But companies use your data to manipulate - and that just means change, in my opinion, it doesn't mean good or bad - but to manipulate the business and leverage your information to keep you consuming what they're selling. And I think that can be a dangerous cycle. I think we start to lose the human aspect of what a company might be there for. It's to help the people with their everyday lives and not to buy, buy, buy, buy. I do think that companies are there to make money, but I think we need to put people first.
And that's why my view has changed. I don't want somebody who has all of my information to be able to manipulate me into buying. I know that's all marketing, but I think it's important that we have a sense of self within a company. We get all these numbers coming across our table and we lose sight of these are people. They're not just numbers. And in certain instances, it'd be hard to discern person from numbers. Like 16 million people bought a sock today. Just some data point. That's where the gray area is for me. It's like, yeah, they're people, but they just bought a sock. How can you use that information? But somehow, these companies, they find a way. They're really, really good at it.
And your information is their persona on you. They have fully built out profiles for each individual person. And this is where I have a huge problem with it. They start selling to individuals at different price points and trying to see what they can get away with, kind of. So you have this dynamic pricing for different people. And I was thinking about it earlier, too. It becomes less like supply and demand based on a market and a very personalized supply and demand of does this one person buy enough to where we can raise the price for them just a little bit more to where they keep buying?
It's great business strategy. I mean, you would make more money, but I don't think that it's the right business strategy or a morally good business strategy. And I think that that's the part that a lot of people are losing, the ethics behind what we're doing. So I think that's an important piece to look at.
KIMBERLY NEVALA: I could not agree with you more. And I heard this phrase recently that stuck with me. Someone said, it's not personalization; it's not that they're offering you personalization, it's that you are being personalized. And I think there are good arguments to be made that with, for instance, the snare you talked about before with dynamic pricing, we should not expect that it is going to lower prices for anybody. It's likely to raise them for everybody.
It will be interesting to see, if that continues to proceed at pace, whether we end up seeing some pushback and what that pushback actually looks like.
SETH RABINOWITZ: I hope so.
KIMBERLY NEVALA: Yeah. So all of that being said, Seth, are there questions or areas of interest you would like to see us, us as educators, us as industry experts, us as tech companies or coworkers, do more to address or advise folks like yourselves as you're coming up in the field and learning all about it?
SETH RABINOWITZ: Yeah. I think that there is a large push for AI and I think the fundamentals need to be harped on more. Understanding the math behind things needs to be harped on more. I think we're getting very loose with the idea that we're just going to be using AI and you don't really need to understand the math behind things anymore. We're just brushing it over. I think it's very important to teach the math. I think it's very important to teach the fundamentals.
And I think it's important to create the relationship between students and professors. Having someone you can trust to go to is, first of all, paramount in any part of your life. But I think for students especially, and within an industry where you don't want to sound silly or dumb, having that trust with that professor is a large portion of where, at least for me, I find confidence in going to that professor and saying, hey, I don't really know what I'm doing here, can I have a little bit of help? And I think that's lost with AI now. We're now using AI because we don't want to look silly.
So I think foundations need to be drilled, and I think the personal aspect of teaching also needs to be paramount. Not so much like personalizing programs for each individual kid but knowing them to a certain extent. And a little bit of tough love never hurt anybody.
KIMBERLY NEVALA: A little bit of tough love. Excellent. All right, well, any final words you'd like to leave with the audience?
SETH RABINOWITZ: It's an interesting time to be a student, for sure. Not only with the job market how it is, but also with projections for data roles growing as much as 30%, 34% over the next 10 years or so. That's the part that's giving me hope, honestly. But it is a very interesting time, having to balance this idea of very few jobs being hired for and growth opportunity being huge.
I think another thing that's very interesting for me is seeing the shift from fundamentals to AI usage. I came in at a time when AI was starting to be leveraged highly for every situation. And it was important for me, at least, to try and understand the fundamentals on my own or with the professor so that I could better use the building blocks that I have to further my career or further my studies.
But yeah, super interesting time as a student. Not a lot of times do you get the crest before the wave. You're just about to have it crash or you're going to ride it out and you just don't know where you stand a little bit. It can be nerve-wracking, but it's a little exciting as well.
KIMBERLY NEVALA: I'm going to guess that your skills as a snowboarder going down what sounds like very tough terrain will stand you in good stead in this regard, as will holding your nerve as this goes forward. I have really loved getting to know you and our discussions, and having conversations like this gives me great confidence that all will be well as we move forward. So thank you so much for your time and for sharing your thoughts today.
SETH RABINOWITZ: Yeah, of course. Thank you. I really appreciate it.
KIMBERLY NEVALA: Absolutely. And to continue learning from thinkers, doers, and advocates such as Seth, you can subscribe to Pondering AI now. You'll find us wherever you listen to podcasts, and also on YouTube.
