NetElixir founder Udayan Bose and Rutgers professor Jim Samuel join host Jim Barrood to discuss AI's transformative impact on business and society. They explore NetElixir's evolution into an AI-first e-commerce agency, Rutgers' pioneering public informatics program, and their joint research on how generative AI can create economic value while enhancing, not replacing, human potential.

Udayan Bose: I run a company called NetElixir. We are a digital marketing firm. We have been around for a little over 21 years now, and we are trying to establish ourselves as America's first independently held, AI-first digital agency for e-commerce brands.

Jim Barrood: Awesome. I look forward to hearing more about that in just a minute.

Sure. In the meantime, Jim, tell us what you do.

Jim Samuel: I'm a professor of artificial intelligence strategy at Rutgers University. I'm also the co-founder of Exosphere, an artificial intelligence strategy company.

Jim Barrood: Fantastic. Okay, so let's circle back to Udayan. Tell us how NetElixir has grown over these past 20 some years.

Udayan Bose: Yeah, we focus almost entirely on the e-commerce space, mostly on mid-sized e-commerce businesses. Retail e-commerce is primarily the area we have been in, and that's what has driven our success over these years. We are one of the members of Google's agency leadership program, which makes us one of the top 25 independently held agencies for Google in North America.

It has been driven primarily by three core areas. The first is culture. I think the NetElixir culture has really helped us grow the company to this level, stay relevant for so long, and evolve over time. The second is innovation, and I think that links directly with the culture as well.

I think being able to constantly see around corners, innovate, and stay ahead of the game has been an important component. And the third component is our philosophy of failing faster. We have tried and failed, and then suddenly something has worked.

That has been our story, essentially: we have tried ten times, failed nine times, succeeded once, and moved forward. But it has really given us a lot of resilience, a lot of courage, and a lot of drive to keep moving forward. And that's probably one of the reasons why we are one of only about five digital agencies of our size which have been around for 21 years. There are literally almost none, in the US at least.

Jim Barrood: That is incredible. And one thing I wanted to highlight here is your engagement with academia, right? That's an important partnership, and that's why I wanted to highlight what's going on here. And Jim, talk to us. You are from academia, and you have obviously been doing some really good, important work.

So tell us about your journey through academia, and maybe even before, how you got to where you are now, and all the interesting things that are going on, particularly with AI at Rutgers.

Jim Samuel: This is going to be in bits and parts, and hopefully it will all come together. I'm currently the executive director of the Master of Public Informatics program at Rutgers University.

This is a visionary program. To the best of my knowledge there is no other public informatics program, or at least there has not been any until 2025, anywhere else. So we are innovating in this space, and the key concept here is artificial intelligence and public good. In the public informatics program, we are trying to bring together artificial intelligence and everything in the big data space with a mindset of: how can we use it for public good?

AI, as we know, is a powerful cluster of technologies. It's transformative; it's changing society even right now, before our eyes. When we are dealing with such a technology, I think academia has a responsibility to innovate such that these technologies are used for public good primarily, and private profit should be an outcome, not the spearhead or the primary use of such technologies.

So that's where the Master of Public Informatics program and our educational initiatives at Rutgers University and the Bloustein School come in. At Rutgers University, I do a fair amount of research on artificial intelligence, specifically on natural language processing, and I teach two classes.

One is a public informatics studio where we work with clients, organizations like the Stanford Research Institute and others. Our client for Fall '25 is Better Future Labs, where we're going to have a project for the students on artificial intelligence agents so we can study agentic AI.

And the way this class is structured, I function pretty much like the principal of a management consulting firm, and I serve as a bridge between the organization and the student teams who research and implement the project. So that's one class. The other course that I teach is a pioneering course in artificial intelligence strategy.

The world is divided into two parts: pre-ChatGPT AI professionals and post-ChatGPT AI professionals. I fall into the pre-ChatGPT group. I launched my first artificial intelligence strategy course before anyone had ever heard of ChatGPT, and artificial intelligence was not such a common word in those days.

I launched the first course on Coursera on AI strategy, to the best of my knowledge, and launched the first artificial intelligence strategy course at Rutgers University, again to the best of my knowledge. And that course is still live, so if anyone wants to take a look at the materials, it's freely available to audit.

The second bucket is the company, Exosphere. The entire focus is on human-enhanced AI. For a couple of decades, or at least a decade, there has been this push towards human-centered AI, which is great, and which was, I think, the right thing to do at that point in time.

But today we have to go beyond the passive principles of human-centered AI into the more aggressive and proactive principles that we associate with human-enhanced AI. And that's what we are implementing through the company Exosphere. We work on AI strategy; we look at how to mitigate AI risks and maximize human potential using artificial intelligence.

In terms of background, I don't know if this is the right time to speak about it, but I have a very mixed background. I started off my early career as an architect; my first company was actually an architectural firm, and I did urban planning. At heart, I'm an innovator and a very curious person.

So at some point I got a little bored, and I was fascinated by what was happening in the world globally. I jumped into international finance, worked with two large banks here in the US, and then transitioned into technology, because while working with the banks I was part of the technology finance teams and I saw how the world was changing, how technology was transforming the landscape.

Very often, not all the time, but very often, such technological transformations first happen in the financial services space. And I transitioned to artificial intelligence via a PhD at Baruch College, City University of New York. Fast forward: I joined a couple of universities, created a couple of programs and new courses in machine learning and artificial intelligence, and published some pretty well received research in this space, and that has brought me here to where I am today.

Jim Barrood: That's amazing. What a great career. So tell us, what else is going on at Rutgers? There's a lot of other AI research and other initiatives. Give us a landscape view, because I think a lot of people, including myself, aren't familiar with everything that's going on.

Jim Samuel: Yeah, Rutgers is a top R1 university, that is, a research-one university according to the Carnegie Classification. It's also one of the oldest universities in the country. The reason I mention these is that it's complex and it's large, and I'm not sure if even the president of Rutgers University has a clear view of everything that's happening in every part of Rutgers University. Having said that, I'm part of what's called the Rutgers Artificial Intelligence and Data Science Collaboratory, headed by Professor Stephen Burley. That's one of the initiatives that aims to bring together the different artificial intelligence initiatives at Rutgers University.

For example, there's a lot happening in the medical AI space; artificial intelligence is being used to support healthcare at multiple levels. There's a lot happening in computer science in terms of the development of new algorithms and breakthrough technologies, and individual faculty are leading different initiatives.

Apart from that, there are strong initiatives on the social impacts of artificial intelligence and this whole space of applied AI. We have another very good, very interesting initiative at Rutgers called Critical AI. I'm not very good at remembering all the names, but I believe the professor is Lauren Goodlad.

She's doing great work thinking about the philosophical side of AI, the social impacts of AI, and so on. Then there are other initiatives which focus on areas such as the use of artificial intelligence in agriculture, which I'm particularly interested in, and that's where computer vision technologies and drone technologies are being applied.

I think Rutgers is at the forefront. I know of at least a couple of projects which are well funded and are creating breakthrough technologies. So overall, Rutgers, I think, is at the forefront at multiple points in artificial intelligence. Now, the innovation horizon is very broad, so you have multiple universities around the world doing fascinating things, but Rutgers is one of those universities at the forefront of AI innovation, making a difference in multiple domains and disciplines.

Jim Barrood: Yeah, that's really amazing, how much is going on. So Udayan, what about NetElixir?

I know you mentioned the AI work that's going on, and you've obviously been involved with education for a long time too, which is wonderful at so many levels, including being a great role model and facilitating hackathons and things. Talk to us about how your company is, let's say, leveraging AI, but also helping enhance education and educating young people.

Udayan Bose: So I'll start with our first brush with AI, Jim, and I don't know whether it can be classified as classic AI, but back in 2017 we started our own machine learning lab, or what we call the Data Product Innovation Lab, at our India office. Since we work with a large number of retail e-commerce clients, we were seeing one very distinct trend across all of these customers: a small percentage of online shoppers were driving a big chunk of the revenues. Since we are doing marketing, understandably, the return on investment can fluctuate quite dramatically if you are acquiring someone who is likely to spend maybe five times more than the average customer.

So we tried to use a combination of predictive analytics and machine learning to identify who these higher-value customers are likely to be. And that got us to build our own technology platform, called LXRInsights, which is powered by machine learning and predictive analytics.

So we are able, for example, to do a real-time segmentation of the customers. We are able to get into micro-segments as well: which of these customers are likely to buy again in the next two months, which are likely to churn in the next six months, what the estimated lifetime value is going to be, and so on.

And then we convert this into audience signals and feed it to platforms like Google, Meta, and the email marketing platforms, so that they're able to generate lookalikes. So what we are trying to do, effectively, is take a higher-quality audience input and try to clone your high-value customers, literally.
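The micro-segmentation flow described above can be sketched in miniature. This is a simplified illustration, not NetElixir's actual model: the field names, the churn cutoff, and the median-based "five times the typical customer" rule are assumptions standing in for trained predictive models.

```python
# Toy illustration of value-based micro-segmentation. A production system
# would use trained ML models to predict lifetime value (LTV) and churn
# probability; here those numbers are simply given as inputs.
from statistics import median

def segment_customers(customers, high_value_multiple=5.0, churn_cutoff=0.7):
    """Bucket customers into micro-segments for downstream activation."""
    typical_value = median(c["predicted_ltv"] for c in customers)
    segments = {"high_value": [], "at_risk": [], "standard": []}
    for c in customers:
        if c["predicted_ltv"] >= high_value_multiple * typical_value:
            segments["high_value"].append(c["id"])  # seeds for lookalike audiences
        elif c["churn_probability"] >= churn_cutoff:
            segments["at_risk"].append(c["id"])     # retention campaigns
        else:
            segments["standard"].append(c["id"])
    return segments

customers = [
    {"id": "c1", "predicted_ltv": 2500.0, "churn_probability": 0.1},
    {"id": "c2", "predicted_ltv": 120.0,  "churn_probability": 0.8},
    {"id": "c3", "predicted_ltv": 150.0,  "churn_probability": 0.3},
    {"id": "c4", "predicted_ltv": 90.0,   "churn_probability": 0.2},
]
segments = segment_customers(customers)
```

The `high_value` bucket is what would be exported as an audience signal to ad platforms so they can find lookalikes of those customers.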

That's what I think we were trying to do. That was our brush with AI in, as Jim mentioned, the pre-ChatGPT era. Now, when ChatGPT was launched, we were lucky enough to jump in pretty quickly. Looking back, it seems to have been a great move, but at that point in time, honestly speaking, we just jumped into the unknown.

We got into it in January 2023, when ChatGPT was just about a month old. And effectively we decided to evaluate all of the outcomes we were producing for our clients across different channels. For example, if you are doing Google advertising, what are the outcomes that you can expect?

And then we used this working-backwards principle to identify the workflows which were leading to those outcomes. Effectively, we then challenged the team to ask a simple question: are these the most optimal workflows? Now, that was difficult, primarily because to do that you had to really check your ego at the door, literally, right?

Because these are folks who have been doing this for a living for almost a decade or more. You are effectively asking them whether the steps they take to get to a certain outcome are even right. But interestingly, the team was able to find a more optimal workflow, or workflows, where we reduced the number of steps.

So that was, I think, the first part: identifying the outcomes, then identifying the workflows leading to those outcomes, and finding a better workflow. And then we went after a pretty audacious idea: can we really rewire our entire organization and build a marketing operating system powered by gen AI?

For that, we had to build our own RAG layer, mainly to maintain the privacy part and to utilize the internal customer data, and we were able to bring our secret sauce into some of the analysis we have been running for many years now, and combine all of these parts.

So there was the LLM coming in, there was the RAG layer that we had just built, and then there was obviously our own secret sauce that we brought in to rewire the entire workflow. Currently, roughly 50% of the tasks are still done manually, but about 50% are now automated, which started realizing some savings.
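The LLM-plus-RAG architecture Udayan describes can be sketched in miniature. This is a toy illustration, not NetElixir's system: the keyword-overlap scorer stands in for a real vector store, and the prompt assembly and document contents are invented for the example.

```python
# Toy sketch of the retrieval-augmented generation (RAG) pattern:
# keep private data in-house, retrieve the relevant pieces, and only
# then hand a grounded prompt to the LLM.

def score(query, doc):
    """Keyword-overlap relevance score (stand-in for vector similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, documents, k=2):
    """Return the k most relevant internal documents for the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt; in production this is sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"

internal_docs = [
    "High value customers reorder within two months on average",
    "Churn risk rises sharply after six months of inactivity",
    "Brand keywords outperform generic keywords on ROI",
]
prompt = build_prompt("which customers have high churn risk", internal_docs)
```

The point of the pattern is the privacy property Udayan mentions: the proprietary data stays in the retrieval layer, and the model only ever sees the few snippets relevant to the current task.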

At the time Jim and I talked, that percentage was probably close to 30 or 32%; it has increased since. What we feel, or I think more appropriately what the team feels, and I'll get to why I keep saying "the team feels," is that we can probably get that number close to 60 or 65%, Jim.

So when Jim and I discussed this, he was very interested, and he gave the NetElixir team an opportunity to participate in this project as an industry collaborator.

Jim Barrood: That's amazing. And I think that flows into the paper you did with Jim, which talks about savings, about how enterprises can save using gen AI.

So let's segue into that. Talk about the paper you're about to release and what the findings were.

Udayan Bose: Yeah, I think Jim can probably do a better job; I'll talk about what our contributions were. I think we challenged the entire notion that ROI can be measured only in a certain way, in terms of efficiency.

Our entire thesis was that efficiency is one part, but the possibility of creating new jobs was actually much more exciting than focusing on efficiency alone. So we created a system whereby we were saving time through the application of gen AI.

And with this time saved, as I mentioned, 50% of the operations having already been automated, let us say 50% time savings, we actually started two new departments which did not even exist earlier. One was AI-powered experimentation, and the second was purely a process engineering department.

Now, I think the point we are trying to make, and that's something which really appealed to Jim and the group of researchers, is that it's important to look at AI from a holistic perspective. When you really do that, you can usher in what has been called the age of abundance, but that requires effort, because you really need to identify and create new jobs.

The advantage is that the overall economic lift, or the value you're able to drive, far surpasses the efficiency increments alone. For example, using LXRInsights, we were able to identify new customers that would never have been identified, and we were able to activate long-tail keywords which had never generated revenue for our clients.

Those are the things which take time, and that's where the experimentation and the research component came in. And getting that time wouldn't have happened had we not implemented the marketing operating system. So we were trying to write about this holistic approach to gen AI and its contribution to ROI.

But I'll pass it on to Jim, because he's basically the architect, writer, and creator of this research project.

Jim Samuel: Thanks, Udayan. Yeah, it's an interesting paper. It actually came out of a real-world question. There are different phenomena that we are seeing in terms of AI implementation.

One is that probably a good 30 to 50% of companies are trying to implement AI and are simply failing for fundamental reasons: they have excellent computer scientists who know how to code, but they really don't have the in-depth overview of the AI space to understand the boundaries of what's possible with AI and what's not.

That's one of the most common reasons I've seen. There are places where people have burned $5 million or $10 million; in a few hours I could have reviewed their plans and told them, don't do this, because it's not going to work for reasons A, B, C. Instead, they spent nine months, burned the money, looked at the results and said, it's not working, and to this day they have not figured out why it's not working.

They just know it's not working, and now they're burning money elsewhere. Those projects fail. Then there's another category where they're spending the money and they're succeeding: they're succeeding in running the AI, and the AI is performing as they want it to. But that does not mean it's creating value.

That does not automatically mean there's economic value in it. For example, a team has invested $20 million into an AI project, they've developed a system, it's functioning, the praise comes in, everything seems to be going well. But no one knows how much money this project cost the company, and no one really has a way to figure out how much it is saving the company, or what the long-term advantage horizon is. In other words, the economic value creation with AI was a question mark. So, with my background in finance and now in AI, I connected the two and asked: what's the return on investment on artificial intelligence?

And I looked around, and nobody had a clear model. Everyone was running this based on heuristics, and those who were presenting metrics were presenting very shallow calculations. We pointed some of these out in our paper ourselves; there were some very fascinating presentations, I think Klarna and a couple of other companies, and again, I'm not very good with names, I would have to read them off the paper, but there were cases like:

human-only expense, $35 million; substitution with generative-AI-based chatbots and associated AI agents, $1.2 million or $1.52 million, let's say; so savings of roughly $33 million. That sounds fascinating, but that's not the end of the story. That's the surface-level presentation that a company which wants to market its AI agents presents to the world.
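As a back-of-the-envelope illustration, the headline savings number Jim questions, and a risk-adjusted version of it, might look like this. The dollar figures are the hypothetical ones quoted above, and the flat 10% risk loading is an arbitrary assumption for illustration, not a figure from the paper.

```python
# Surface-level savings math from the chatbot example, then a
# risk-adjusted view that prices in the new risks AI introduces.

human_only_cost = 35_000_000      # human-only expense
ai_substitution_cost = 1_520_000  # gen-AI chatbots and agents

# The headline number companies like to present.
naive_savings = human_only_cost - ai_substitution_cost

# Price in the new risks AI brings (hallucination liability, brand
# damage, rework, compliance exposure, ...): here a flat 10% loading
# on the displaced human spend, purely as a placeholder assumption.
risk_cost = human_only_cost // 10
risk_adjusted_savings = naive_savings - risk_cost

print(f"naive savings: ${naive_savings:,}")                  # $33,480,000
print(f"risk-adjusted savings: ${risk_adjusted_savings:,}")  # $29,980,000
```

Even this crude adjustment shows why the surface-level presentation overstates the economic value: the risk term has to come from somewhere, and a real model would estimate it rather than assume it.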

We need to be able to go deeper than that. What that tells us is that there's a lot of potential, but we still need to dig deeper into the risks involved, because there's a price that has to be factored in for the risk of using AI. The use of AI does bring previously nonexistent risks into business scenarios, and that needs to be priced in, and so on.

So this paper is about, first, the concept of economic value creation with generative artificial intelligence. And secondly, we looked at the ground level of what's happening with chatbots: what the different pricing models out there are, per user, per token processed, and other kinds of models.

And we looked at use cases. We found that a lot of companies failed. Some companies were playing it safe while still using AI; they don't want to miss out, FOMO, fear of missing out, or whatever the acronym running around these days is. And then there's a relatively small group of companies who are actually leveraging AI and generating real economic value, whether out of luck or by design. I have not been able to investigate all these companies, but they're successful, and I think they'll continue to be successful in that space.

One thing that impressed me in my conversations with Udayan about NetElixir, and this was an early conversation, I don't know if you even remember it, was when you were talking about how NetElixir had an opportunity to use generative AI successfully, and in some of your units you could have displaced labor.

In other words, instead of paying a few people's salaries, you could have said, we'll now let these people go, because generative AI is able to do that work. Instead, you repurposed them. Exactly. And that is where our philosophy at Exosphere comes in. For me, that was a great example, so I've kept that story in my mind, because there are two important principles at work in what Udayan and NetElixir have done.

Number one is the fact that they leveraged generative AI successfully, which is not an easy thing to do. It requires both the correct tactics and the correct strategy, and alignment with the corporate vision. So they did that; in other words, value was created by the AI. That's the efficiency and effectiveness part of it.

Secondly, with the value that was created, they didn't try to purely focus on the profits and fire the employees. My understanding of that situation, from what Udayan said, is that they looked at it and said, now let's look at what more can be done. How can the company grow? How can the vision grow? So they took those employees and put them into a new space.

In other words, AI has not only supported the efficiency and the effectiveness of the company; AI is now helping them grow their vision, because these employees are now AI-supported employees who will look ahead, innovate, and discover new ways to create value. So I think, ultimately, that's what this paper is about.

We just want to understand the economic value creation potential of generative AI. We also want to understand the nuts and bolts, from a financial modeling perspective, of what ROI looks like. But the ultimate goal of this research, of our company Exosphere, and, I believe, of what NetElixir has done, is human-enhanced AI.

AI is doing all this great stuff; it's creating this value; we figure out how to create positive ROI. And the ultimate goal of that positive ROI should be human-enhanced AI: not firing people, not harming the clients or any human entity in any way, but using AI correctly to elevate humans at every level: employees, stakeholders, investors, customers, clients, everyone.

Jim Barrood: That is really just a great role model, or test case, right? Because we all hear the doom and gloom about what AI will do to the economy and society; we don't hear about the jobs that will be generated by a tech transformation. This has been a really good conversation. We usually include some tips for entrepreneurs, and I know you've probably mentioned some, but tell us just one tip for how entrepreneurs or managers looking to leverage gen AI can use it to help their business or their organization.

One tip, Udayan.

Udayan Bose: So whatever we do at NetElixir is very participative, Jim. We knew all through the process that we wanted to ensure psychological safety for our employees, or team members as we call them. To do that, we had to come up with one denominator which everyone agreed upon and everyone understood the value of, and that was outcomes, or outputs, right?

So there is an outcome, and the outcome here is delivering certain results to the customer, so that everyone is on the same page. Then we challenged them: how exactly are you delivering those outcomes? So we built out the workflows, and this is all being done by the team, right?

And then all we challenged them with was: can you think, is there a smarter or more optimized workflow that exists? So it's understanding the workflows, questioning the workflows, and all of these conversations. The funny part is, Jim, there is no talk about AI, no talk about displacing jobs. I think the approach companies are taking today is wrong, primarily because it's basically a shock-and-awe sort of thing: how can I scare the people?

We prefer to take a more inclusive approach, letting the employees essentially lead it. What is the common denominator? It is the outcome, because everyone is committed to delivering value for the customers. Then create the workflows, and then identify exactly where automation can really help: these are the parts of my job that I don't really want to do, or where AI can really help me do better. So it is almost reverse engineering, working backwards from the outcome, and that is what I would recommend to all the entrepreneurs. Yeah.

Jim Barrood: Great, thanks. Jim?

Jim Samuel: I'm going to split that one thing into three parts. It's one thing, but there are three parts to it. The first is a common thread; I'm being repetitive here, and I'm also repeating what Udayan just said, but in simple words: develop a mindset. This is especially important for leaders, senior managers, and everyone in a position of responsibility, especially over other people and hiring.

Use AI to help humans excel. Don't think of it the other way around, in terms of using humans to somehow make an AI project or a company's AI successful; use AI to make humans successful. The second thing is that education is critical at this point in time. AI is a sophisticated cluster of technologies which is rapidly evolving even as we speak.

Until 2022, I kept track of most of what was happening in AI; now I'm ignorant of most of what's happening in AI, in spite of my best efforts. There's just too much happening, so I'm trying to stick to my area of expertise and keep moving in that area.

The education piece is critical. Every person, especially leaders, needs to understand one thing, and it takes time to develop this understanding: what is possible with AI, and what is it that AI cannot do, which only your human talent can do, and do exceptionally well? Understanding this will provide a competitive advantage to the companies whose leaders have a clear grasp of what AI can do and, equally, what AI cannot do.

And finally, everyone needs to innovate right now. This is the time to engage with AI if you have not already done so; there's no point in waiting. I believe that every company, every organization that delays engaging with AI education and exploring every way in which AI can be used for their company and their stakeholders is at a huge disadvantage.

So the key mantra is: start now, start innovating with AI, and innovate with boundaries so that the risks are manageable. But the innovation process should start right now, as organizations and as individuals.

Jim Barrood: That's great. Thank you for both of those comments. We usually also do lightning questions.

So Jim, I'm going to focus on you for these, because they involve academia. If you weren't in academia, what would you be doing?

Jim Samuel: If I were not in academia, and I'm partly in academia and partly in professional services in AI, but if I were not a full-time professor, I think I would like to become a farmer, to go into agriculture.

Jim Barrood: I love that. Okay. What's one myth about academia you'd like to bust?

Jim Samuel: I think academia has often been seen as a space where knowledge is imparted. While that is not entirely incorrect, I think it misses the bigger picture. Academic institutions and academia are transformative spaces. They're supposed to transform the students,

the candidates who come into their programs, who come into their spaces. It's supposed to be a transformative experience, more than knowledge. In other words, knowledge changes; knowledge is a moving target. Nobody can acquire all the knowledge in the world except AI; we can store a lot of information in computers and in AI. But as human beings, really, I think the greatest concept that should be identified with academic institutions and academia is the transformation of the student into a better person, and into a more capable person in the domain they're interested in.

Jim Barrood: One last one. If you could redesign higher education from scratch, what's one thing you would do differently?

Jim Samuel: Focus on the future. I think academia is, and for good reasons, I can see how it evolved and why it is the way it is now, too heavily tied to the past. Most of the good institutions take a long time to change;

it's not easy to change what's happening in academic institutions, and they're very often looking behind. Everyone needs to look ahead. I think that's the big change I would make.

Jim Barrood: Got it. I like that. Lastly, we'd like to conclude with a poem or a quote or a saying that's meaningful to you.

Udayan, what do you have?

Udayan Bose: Oh, wow.

The problem is often more important than the solution. So spend at least five times as much time identifying the problem you're trying to solve as you spend creating the solution. I have seen there is a tremendous amount of value in that. Yeah.

Jim Barrood: Great.

Jim Samuel: There is this sentence which says, "For the wisdom of this world is foolishness with God." While I don't consider myself an exceptionally religious person, this has always intrigued me, because right now we are in the age of AI, and we are trying to say that we have created a machine that is super wise.

And then we can't figure out why even a monkey would be smarter than the AI at certain points; the AI just fumbles completely. This sentence, I believe, is from the book of Corinthians, somewhere in the New Testament. And what it tells me is that even if we can take all the wisdom and put it into a machine,

I think at the human level there is a dimension to the human identity which will always transcend the machine. And that's why I like this sentence so much. Whenever I look at AI, and I've seen the development of AI and I understand the nuts and bolts of it, I see all the glorification of AI. Right now the buzzword is agentic AI.

And every time we change the semantics surrounding the technology, there's a lot of anticipation, as if some kind of miracle saviors are on the way to help humanity and solve all the business problems and so on. But really it is not that, and then the sentence comes to mind: the wisdom of this world is foolishness with God, and there is a dimension to the human identity which AI is never going to match.

And it could wind up being a philosophical argument; there's this whole area of debate in AI around embodied AI and so on. But I strongly believe that the human is supposed to be on top of the machine, no matter how smart we make the machine. The current push is to create some form of artificial general intelligence.

I have my own thoughts about that. I think what will eventually be presented to us as AGI is a form of pseudo-AGI, and I have my reasons for attaching the word pseudo. All these things are like a magician's trick: when a magician shows a rabbit jumping out of a hat, it's a trick.

Today, when a lot of companies are presenting a probabilistic engine as a form of intelligence, and arguing that someday they're going to make this intelligence super powerful, more powerful, by the original definition, than collective human intelligence, it's going to be like the magician's trick. It's going to be a pseudo-AGI, but to the uninformed mind, the uneducated mind, it's going to appear as though a rabbit was really pulled out of the hat.

And there is some really fascinating, almost unbelievable stuff happening out there. The key, and what I want to emphasize at the end of my commentary, which was supposed to be much shorter than this, Jim, is that education is critical.

We need to understand AI technologies better. We cannot just take what is thrown at us by companies, or even universities or governments. Each individual is responsible for studying the nuts and bolts of AI, and when we study it, we'll realize it's great, but it's still a machine.

Jim Barrood: That was a great way to end.

Thank you so much, Jim. Thank you, Udayan.

