The ethics of AI and data usage in marketing

As data grows ever more significant and AI is increasingly used to optimize personalized advertising, concerns about how that data is acquired and used become more pressing. In this episode, Ananya Bhargava interviews Seth Rachlin, a social scientist, business leader, and entrepreneur currently active as a researcher and assistant teaching professor of social data science at Arizona State University.

Dr. Rachlin discusses how social media data mining shapes targeted marketing, the moral considerations of gathering personal data, and the broader consequences of AI integration in business.

Transcript

INTRO: [MUSIC PLAYING]

Ananya Bhargava: With the rise of the digital age, information has become the most valuable commodity. Data collection and AI integration have become the bread and butter of targeted marketing. And as data emerges as a prized asset and AI is used to optimize personalized advertising, questions surrounding data acquisition and usage grow more urgent.

I am joined here by Seth Rachlin, a social scientist, business leader, and entrepreneur currently active as a researcher and assistant teaching professor of social data science at Arizona State University. With a career spanning three decades in consulting, Dr. Rachlin brings a wealth of knowledge and insight into the evolving relationship between technology and society. In this episode, we’ll peel back the layers on how social media data mining shapes targeted marketing, the ethics surrounding personal information collection, and the broader implications of an AI-driven world.

Bhargava: So you’ve been in the industry for quite a bit of time. I’m curious to learn how you’ve seen things change in terms of development and technology, and how newer forms of technology have shaped society.

Seth Rachlin: The fundamental intent behind the use of data hasn’t changed. We want to use data to understand people and processes better, so that we can predict outcomes with more certainty and be more efficient in pursuing business objectives.

The fundamental changes are really on the how of that. So over– I mean, I’ve been doing this 30 plus years. Over 30 plus years, two major things have happened. The first is that we continue to increase both the quantity and, frankly, the quality of the data that we can work with in business. And we’ve also, in part because of the increase in computing power that’s been a constant throughout my career, we’ve increased what we can actually do with that data.

So from a methods perspective, the kinds of ways we manage data, the kinds of ways we perform analytics on data have evolved significantly. And that’s obviously where AI fits in. AI is kind of a step change in how we work with data and how we use data. And those are really the two major changes. It’s the variety, the volume, the quality of the data we have to work with, and our ability to actually do things with it.

So when I started my career in 1990-ish, if there had been this thing called social media and streaming data from social media, that would have been awesome. But we would not have had the computing power or technology to do anything with it. So the technology advancement and the data advancement have gone hand in hand, at least in my perspective.

Bhargava: That’s interesting. And since you mentioned that there are now so many possibilities with data, I’m curious to know if you have any thoughts on the right or wrong ways to collect and use data.

Rachlin: So the first is privacy. Privacy has been, once again, an issue that I’ve been focused on and concerned about throughout my career. Privacy is not a new concern. There were concerns about privacy during the early days of mass journalism in this country, where people felt that their privacy was violated by what was published in the newspaper. But the ability to violate people’s privacy and to treat everything about them as public and usable has, in my mind, never been more of an issue than it is now. So privacy is obviously one big issue.

Another big issue is manipulation. So I’ve always believed that it’s perfectly acceptable to use data to predict behavior. So if I know all these things about you, I may have a very good sense about what you’re going to buy when you go into the supermarket. I may have a very good sense of what kind of ad you would respond to. But there’s a line, and it’s a tough line to define between prediction and manipulation.

So when I think about it, I’ll just give you an example: this nicotine product called Zyn, Z-Y-N. A lot of how they’ve gone to market, a lot of how they’ve created a market for these nicotine-based products, to me, represents the use of data and information to manipulate people, as opposed to simply predicting their behavior. And I think that’s something that we have to be concerned about. So privacy and manipulation are the things that I worry about the most.

If we were having a political conversation, I might talk about disinformation and misinformation. But in the context of marketing, in the context of business, it’s more about manipulation: pushing people to do things that they might not ordinarily do, even things that are bad for them or socially undesirable.

So, for example, I mentioned Zyn; another good example might be sports betting. Sports betting has become a huge industry, and sports betting on its face is not necessarily a terrible thing. But if people are manipulated into placing bets that they can’t afford to make or can’t afford to lose, that is indeed a problem. And there’s a lot to suggest that sophisticated data targeting methodologies are being used to get sports betting messages in front of people who probably shouldn’t be betting their next meal on the outcome of an athletic contest.

And so those are the things that concern me.

Bhargava: Since you brought up the whole concept of prediction versus manipulation, I’m curious to know your thoughts on targeted marketing as a whole. Do you think there’s an ethical way to do targeted marketing?

Rachlin: So to me, targeted marketing in and of itself is a fairly morally neutral thing. If I use data to predict who is most vulnerable to a particular disease, a particular health malady, and then I market something that addresses that disease or health malady to them, that to me is a positive. And if I have only X dollars to get the message out about something that is positive for people, then targeting that message to the people who would benefit the most, or who need it most, well, that’s probably a good. If I use the data to target the people who are most likely to develop a gambling addiction, that is bad.

But this practice of targeted marketing, unless, as I said, it moves into this area of manipulation, doesn’t in and of itself have this moral or ethical context. It’s what we use this capability, this technology, for that really matters. I discovered targeted marketing when I started my career working for a company called Scholastic. Scholastic makes books and materials for schools, and I helped Scholastic get better at getting the right materials to the right kids at the right time. I thought that I was actually doing good. I didn’t think I was doing something that was morally questionable. So figuring out, well, these schools might benefit from this material, and these other schools might benefit from that material, so let’s target our marketing appropriately to the school in question.

That to me is a good thing.

Bhargava: But how would you navigate more of a gray area? If we’re talking about market segmentation, the most common types are obviously demographic or geographic, but it can be segmented further into behavioral and other aspects. So where do we draw the line between good targeted marketing and bad targeted marketing?

Rachlin: You have to look at the product or the service you’re offering. That’s obviously an elemental consideration. Books for kids feels pretty obvious. Sports betting feels like you’d better ask yourself some more questions. And I think that there’s a double-edged sword to demographics.

So on the one hand, demographics point to need. On the other hand, they point to vulnerability. And to me, when the demographics become a way of understanding vulnerability and when you are exploiting that vulnerability, then you begin to cross an ethical line.

Look, I’ve done a lot of work in the insurance world. And insurance, as a business, is all about vulnerability. It’s all about people who have risk. And the nature of risk is highly correlated with the broader nature of economic vulnerability in the world. If you live in a neighborhood with a high crime rate, you’re far more likely to have your car stolen than if you park it in a garage in a gated community. That’s just a given. And to exploit that fact, to market or price a product based on that fact, to me, crosses a line.

So these are hard decisions. But I reject the sort of broad brush that says it’s bad to market to people based on their demographics, or bad to market to people because of who they are, or bad to leverage data to do that, because I think that’s an easy out. And it’s also a fantasy. The notion that you wouldn’t do that, and that you could stop people from doing that, to me is silly. As I said, people have been doing this for a very, very long time. It makes a lot of sense. Marketing is incredibly expensive.

And so the old adage: a long time ago, someone said, I know that half of my advertising is wasted, I just don’t know which half it is. Well, we’re getting better at actually knowing which half it is. And to me, that’s actually a good thing.

Bhargava: I like that you mentioned that there’s no real way to stop this. So as a professor who teaches the ethics and policies of social data, how do you educate future professionals about the ethical implications of data collection and use?

Rachlin: I think you start by trying to understand how people approach this problem. And so, to me, I take a risk-based approach to it. And what is a risk-based approach? It means asking what the risks associated with the use of data are, and which risks are too great. There are things that are simply too risky. I’ll give you a perfect example.

There is this thing called social credit scoring. What social credit scoring is, basically, is that I mine all of your social data and determine whether you are a trustworthy, upstanding person or not. And based on that, I decide whether you can rent an apartment or get a job or do any of the other things you might want to do. Most people outside of China think that that’s a bad thing, that it is simply too risky and has too much potential for abuse.

Other things, with the right guardrails, I think, are OK. So you look at what those guardrails might be. You look at various protected classes. You know, what’s different about men versus women? What’s different at the level of race or ethnicity? What’s different at the level of age? There are things that are, I think, perfectly acceptable in most people’s minds when you’re dealing with adults that are totally off limits when it comes to children. And that’s where, you know, I mentioned the nicotine thing. That’s where that whole nicotine thing is just a huge, huge issue.

So I think you take a risk-based approach. You try to focus on what the risks are and which risks are completely unacceptable. And then, for the risks that are acceptable, how do you put the right guardrails around them?

Now, we have an advantage in that the European Union in particular has done a lot of thinking about these issues and has done a lot of regulating around these issues, both at the level of privacy and at the level of what is and isn’t acceptable for AI, as well as digital services more broadly. There’s a lot of thinking there. So, in the classroom, I’ll say, let’s look at what they’ve done in Europe. Let’s try to understand it. Let’s figure out what aspects of it are so specific to Europe that they would never work anywhere else, and let’s figure out what we can actually learn from it. So I tend to take a more policy-focused and risk-focused view of things, because my feeling is that these are not new issues; these are issues that people have been wrestling with for a long time.

You know, there are easily seven or eight major statutes relating to privacy. There’s a whole legal history of privacy cases that have been adjudicated in courts. And so, you know, the technology is always new and interesting and creating new problems. But when you look at them a little hard, you realize that they may in fact be versions of older problems.

Bhargava: Since you’ve mentioned the European Union and its regulation, I was reading quite a bit about Meta AI and how there’s an option to opt out of it in Europe, but no such thing in the US. So I’m very curious to know your thoughts on AI integration in social media and how that affects privacy. And also, how do you think the US should approach privacy laws in this sector?

Rachlin: The United States needs a federal US privacy law, one that is general and not specific. So right now in the US, you have laws about healthcare data. You have laws about financial data and what financial institutions can do. But you don’t have a general privacy law. Period, full stop. You have, I think at this point, four or five states that have passed general privacy laws, but you do not have a federal law. So, you know, step one, we actually need a privacy law.

The fundamental piece, and you mentioned the opt-out with Meta, the fundamental insight that I think Europe has that we don’t necessarily have, relates to who actually owns the data. In the European context, you own your data and you consent for other people to use it.

In the US context, other people own your data, and maybe you have the ability to tell them not to use it. Those are very, very different things. It’s opt-in versus opt-out. And opt-ins are obviously much more powerful because they force, number one, a level of consciousness and, number two, a level of responsibility. You know, AI is like a data vacuum cleaner, right?

I mean, I’m old enough to remember when storage and computing power were so expensive that you literally had to justify whether you needed a particular one- or two-character field in a data set. That’s the level you had to go to because of the limitations of compute and expense. Now, AI operates on the presumption that more data is better. That’s the fundamental mantra: the more data I can give it, the better it is. And that has to do with how models get trained, the number of parameters, and all of those types of things. Part of what’s enabled that is obviously the compute power to be able to deal with it.

What’s also enabled it, and I think this is equally important, is that money appears to be no object. What’s different in my mind about this wave of tech versus some of the other waves of tech that I’ve seen is just the level of money you need to actually play. I spent a lot of my career in startups, and when you’re a startup, you think: I’m going to disrupt or break what some big company does; I’m going to change the way people do X, Y, or Z. Here, what you have is that the startups have to be aligned with the big guys, with big tech, because of the sheer amount of money that’s involved.

So you have this mantra that more data is better. You know that expression, less is more? Well, in the AI world, the expression is more is more. Because of that, you have an entire dynamic around how all this stuff gets better. And I think one of the big viability questions for AI as a technology is, can we bend the cost curve of this thing? Because it’s simply too expensive right now.

And people are kind of living with that and going, well, it’s really expensive, but that’s okay, because ultimately it will get cheaper; this is how we prove it, this is how we show people what can happen, and this is how we develop the technology. Now, other aspects of the evolution of computing have indeed followed that model. When we went from large computers to the ones you can carry in your pocket, a lot of that was a significant bending of the technology cost curve to make it possible.

If you tried to create the computing power of an iPhone 20 years ago, it probably would have cost a whole lot of money. So the question is, does that cost curve get bent? Or does AI need to get smarter about the data it needs and the data it works with, and move beyond this more-is-more paradigm? But right now it’s a more-is-more paradigm.

Bhargava: The whole concept of data privacy and all of these issues is not necessarily a new thing. AI has existed for decades now. But I’ve seen an insane consumer-facing push for it only in the past four years or so. So I’m curious to know what you think is driving this push, and why now?

Rachlin: These things come in waves. There was a period when Siri came out originally, and then Alexa on Amazon, these things you could talk to, that could turn your lights on with voice controls. And everybody thought that was really, really cool. And then you get to kind of the natural limits of those things: well, they don’t really understand what you’re talking about.

So with large language models, with the incredible computing power that’s been thrown at creating these models, you’ve got an exponential improvement, not a linear improvement, in the ability of these models to actually interact and comprehend. And that generates a tremendous amount of excitement, because the way technology typically works is that people invent stuff. There’s one group of people who invent stuff, and a second group of people who go, okay, now that that’s invented, what do I actually do with it? So I think we’ve gotten to the point where there’s a new invention, much like Siri was a new invention and Alexa was a new invention.

But now we have the next invention, we have the new thing, right? And then you get this flurry of people who go, okay, now what are we going to do with it? How do we make money with this thing? And how do we change the world? Right?

When they invented the iPhone, there weren’t millions of apps; there were like six apps, right? You could talk on the phone, you could send a text message, you could listen to some music, and you could take a picture. And that was it, right? But everyone goes, oh my god, this is so cool. We’ve got a computer in your pocket. Now what does that enable us to do?

And so we’re at that point again, where we see something that we think is unbelievably cool. And then we sit there and go, besides, like, cheating on a paper for school, what can I actually do with this thing? That’s the stage we’re at. So you get a tremendous amount of excitement. And then what ends up happening is you get this kind of shakeout period where people go, all right, well, maybe we got too excited, because you’re trying to figure out what you can really do with it. And then you go through a more linear expansion of this capability and its ecosystem and all those types of things.

Bhargava: Definitely, a lot of business leaders are hopeful about AI’s potential in the future. So there’s definitely potential there, but a lot of employees are especially concerned that AI, instead of being a tool, is more of a replacement for human ingenuity and thought.

So I want to know your thoughts on how AI should actually be integrated into business processes, and how we can make sure it stays a tool. Should it stay a tool? Should it mimic more human ingenuity? What are your thoughts on that?

Rachlin: You know, for me to sit here and tell you that AI shouldn’t eliminate anybody’s job would, to me, be intellectually dishonest. And to suggest that there’s a way to build out and deploy AI in a way that doesn’t affect a single job would be ludicrous. There are clearly things that AI will be better at than humans, and better might simply mean faster.

Where I have concern is, number one, I think there are a lot of jobs that it should not eliminate. Those in particular are jobs that involve, and you mentioned this, creativity; I think creativity is an important piece of that. But I also think that we don’t want to automate away human intentionality in terms of the way the world works. There’s a lot of human intent in business, and a lot of purposeful, value-driven action in business.

And I personally think it would be a major mistake not to recognize that. So I do totally believe that, if I were a computer scientist right now, I’d be worried about how much coding people are going to do by hand going forward. Much as, frankly, I would have been worried if I were a typist in 1977 or ’78. Right?

Because there used to be people who sat and typed things for other people, but that kind of went away. No one does that anymore. So there will be job shifts as things become obvious uses of this technology. But it also creates opportunities for people to use the technology the right way. I mean, it’s all potentially exciting and it’s all potentially terrifying. And the difference between exciting and terrifying is whether it’s done well or done poorly.

You know, my personal passion for data has always been about deeper understanding. We as humans often stop before we consider all the information we need to consider when we’re trying to understand something, make a decision, or act. And so what excites me about AI, what’s always excited me about data, is creating more informed people, more informed decisions, more informed strategy, a more informed ability to act in good ways.

And so, what excites me about AI is not whether I can save 15 seconds on a customer service call because I’m talking to a bot. I mean, that’s important, I guess, if you’re in the business of saving money on customer service, but it doesn’t really excite me.

What excites me is the idea that where I used to be able to consider 10 or 15 sources of information, now I can consider 15,000 sources of information when I want to understand something and make a decision. That, to me, is empowering to humans as opposed to disempowering.

Bhargava: So AI uses quite a bit of natural resources to power itself. I’m just wondering what your thoughts are on that, and what regulations or investments could be made in the future to make AI and other new technologies more sustainable.

Rachlin: There’s simply no way that we as a world can deploy AI at the scale we continue to deploy it without considering the environmental aspects. I have friends who are starting businesses to try to figure out how to harvest excess daytime capacity in the solar grid, which there is, right? There’s excess capacity during the day and a deficit of capacity at night. But there have to be better answers to this.

You know, from my work in insurance over the years, I am deeply, deeply concerned about climate change. I think it’s real. I think it’s immediate. And the notion that we’re going to build the economy of the future on something that requires us to harvest this level of power to make it run, I mean, it’s almost as irresponsible as Bitcoin mining. That’s just another example of a technology that basically converts electricity into capital, which is something that, if we’re going to do, and we will do, we have to figure out where that power is going to come from.

So I think the short answer is that I hope the AI piece triggers further interest and further investment in sustainable sources of energy.

Bhargava: Looking to the future, how do you predict the advancements in data mining and AI will further evolve? And bringing it back to marketing, how do you think they can be further integrated into marketing processes and social media and things like that?

Rachlin: Yeah, I mean, I think a couple of things. Number one, I think that marketing content will hit higher and higher levels of personalization, personalization being basically that when you see the marketing message, it looks like it’s talking to you as opposed to a generic customer. So I think we’ll see significantly heightened personalization of messages.

I think there will be increased capabilities for mining online behaviors, social behaviors, those types of things, and then generating content that corresponds to them. So in essence, and it’s sort of a scary thing, right? In essence, what ends up happening, and this has just kind of been where marketing goes, is that everybody becomes a segment of one, and the uniqueness that we each represent as people becomes more and more discernible and actionable in the context of marketing. And that’s a good thing in some senses, because it means that the stuff you see message-wise actually makes sense and is relevant to you; it’s horrible when you have to sit through advertising that completely doesn’t speak to you.

On the other hand, it erodes sort of the common things that we share. If everybody sees different content, then that’s just one less thing that we have in common. You know, as a sociologist, as opposed to a marketing guy, that’s something that I worry about very deeply.

Consumers need to do two things. Number one, we absolutely have to raise people’s consciousness about data, what they share, and how they protect aspects of their identity, so that they don’t become subject to manipulation and violations of privacy of one kind or another. So there absolutely has to be way more education than already exists around the privacy risks and the other risks of big data, so to speak. And secondarily, I think consumers have to be far more sensitive to the authenticity of the content they’re exposed to. I know myself that, as machine-generated content, bot-generated content, has proliferated, my personal response has been to trust fewer and fewer sources, because I’m looking essentially for the gold standard of the content that I consume.

And I think that understanding the sources of content, understanding where content comes from, will help us be better consumers of content. People talk about this whole idea of watermarking content to say where it comes from, and I think that becomes a real thing, right?

I think that ultimately, as a society, we will get to a point where we demand that the true source of content be known, where AI gets better about where it’s getting its information from, and where we know when something is machine-generated versus authored by a human. All of those kinds of things become bigger and bigger issues for people as we go forward. The most important part is that we don’t know where a lot of this stuff is ultimately going to go.

So we can use our 30-plus-year digital revolution playbook to kind of say, what is this mostly like and what sort of issues has it raised? I think that’s all very, very useful. But I think that ultimately, we shouldn’t just assume that the technology will go where it goes and that we simply have to deal with it and cope with it.

I think we still have the opportunity to shape its direction and to shape the future we create with it. And I think that oftentimes people forget that and neglect that; they think, well, it is what it is, and technology is following its own path. But I think humans directing technology toward pro-social and pro-people ends is more important than ever.

That’s why I teach this stuff: I want to create enlightened people who can go out and shape the direction of these technologies in ways that are good for people.

OUTRO: This is the Marketing Edition of We Mean Business, sponsored by the Reynolds Center at Arizona State University.

Author

  • Ananya Bhargava is a Junior at Arizona State University, pursuing a bachelor’s in Digital and Integrated Marketing Communications with a certificate in Leadership in Business and a minor in Public Relations and Strategic Communication. She loves to l...
