Ivan Ostojic: Looking forward with AI

There are good, defined use cases for generative AI, as well as over-ambitious use cases that can bring us to the brink of the trough of disillusionment.


Ivan Ostojic is Chief Business Officer at Infobip, the omnichannel marketing platform focused on communications, conversations, chat and messaging. When we spoke, he was driving to an airport in Croatia, but he was still able to address the “trough of disillusionment” that might be just around the corner for some — not all — uses of generative AI. (Interview edited for length and clarity.)

Q: It seems every manager has been given a mandate to bring genAI into their technology and processes. But I know you think they’re trying to implement some over-complicated solutions. Is that what is going to tip us into the trough of disillusionment?

A: Yes and no. There will be some disillusionment for people who thought this was a panacea, where you don’t need any data structure because this is an intelligent technology. There is a lot of marketing hype creating that perception. But when you see what serious enterprises are doing, they never went big on implementing this technology because they were worried about the hallucinations and so forth.

We are seeing very cautious adoption, where people are mainly pursuing use cases that are more internal — like knowledge summaries, quick ways for employees to find answers. Most of them are cautious about exposing this technology to customers because of cases that got into the press, like the Air Canada case.

Q: There are internal problems as well. A surprising number of businesses have had sensitive corporate data exposed on ChatGPT.

A: You’re right. People were using it to make themselves more productive without following proper corporate guidance. With the general ChatGPT, without guardrails, the data is exchangeable. However, there are now implementations, in particular working with Microsoft, that actually secure the data and limit its spread to other systems.

Q: What about these over-complicated implementations?

A: I think some people thought this technology could completely replace human agents. They’re going overboard just because of the hype. For example, if you want to book an appointment, it’s much easier to make two or three clicks, seeing a calendar and the dates, than to type a prompt for ChatGPT. There is a risk of misunderstanding human semantics. People are trying to force it onto every use case even when other types of solutions are more appropriate.

We had this issue with translation of our own website. Somebody thought we could translate our website using generative AI, but if it makes a mistake in one percent of cases, that mistake can be tragic. I still need to send the results to a translation agency to check everything, and having them check the original against the translation on all the pages doesn’t cost much less than having them do the translation themselves.

Q: You have senior executives telling managers they have to use this and then the managers trying to create use cases. That’s surely the wrong way around.

A: A hundred percent. What we’re doing is choosing KPIs; for example, I might want to improve the speed of content writing for our team twofold and I think generative AI can help in specific ways (helping with the outline, with some rewording). I start with what I want as a business outcome, and then actually see how this technology can help me rather than vice versa. Of course, there can be some incubation areas where you try out ideas.

Q: What is Infobip doing in this area?

A: We are building an infrastructure for applications of generative AI. At the heart of this is multi-bot, multi-large-language-model technology. We have intent-based bots, rule-based bots and generative AI-based bots, or assistants. We are training different assistants for different use cases: for example, an FAQ assistant, a general knowledge assistant, a customer service assistant and so forth.

We also have something we call the “orchestrator.” If you call a call center with a technical question, it can send you to a technical person; if you want to buy something, it will send you to a salesperson. The orchestrator understands your intent and routes you to the right place. It also does sentiment analysis (it can tell if you’re getting angry and move the conversation to a human); it does translations; and it does quality control to remove hallucinations.
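To make that routing idea concrete, here is a minimal sketch of how an intent-based orchestrator along the lines Ostojic describes might be wired together. Everything in it is a hypothetical illustration rather than Infobip’s actual product or API: the Orchestrator class, the classify_intent and detect_sentiment helpers, and the intent labels are assumptions made for the example.

```python
# Illustrative sketch of an orchestrator that routes a customer message to one of
# several specialized assistants and escalates to a human when sentiment turns
# negative. All names here are hypothetical, not Infobip's actual API.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Reply:
    handled_by: str
    text: str


def classify_intent(message: str) -> str:
    """Hypothetical intent classifier; in practice this could be an intent model or an LLM."""
    lowered = message.lower()
    if any(word in lowered for word in ("buy", "price", "purchase")):
        return "sales"
    if any(word in lowered for word in ("error", "broken", "not working")):
        return "technical"
    return "faq"


def detect_sentiment(message: str) -> str:
    """Hypothetical sentiment check; returns 'negative' or 'neutral'."""
    upset_words = ("angry", "terrible", "useless")
    return "negative" if any(word in message.lower() for word in upset_words) else "neutral"


class Orchestrator:
    def __init__(self) -> None:
        # Each assistant is just a callable here; real ones could be rule-based,
        # intent-based or generative AI-based bots.
        self.assistants: Dict[str, Callable[[str], str]] = {
            "faq": lambda m: f"FAQ assistant answering: {m}",
            "sales": lambda m: f"Sales assistant answering: {m}",
            "technical": lambda m: f"Technical assistant answering: {m}",
        }

    def handle(self, message: str) -> Reply:
        # Escalate to a human agent if the customer sounds upset.
        if detect_sentiment(message) == "negative":
            return Reply("human_agent", "Transferring you to a human agent.")
        intent = classify_intent(message)
        return Reply(intent, self.assistants[intent](message))


if __name__ == "__main__":
    bot = Orchestrator()
    print(bot.handle("My device is broken and not working"))
    print(bot.handle("I want to buy the premium plan"))
```

The point of the sketch is the separation of concerns: intent detection and sentiment checks sit in front of the individual assistants, so the routing layer, not the bots themselves, decides when a conversation should go to a machine and when it should go to a person.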

Q: Some people believe that AI is going to deliver a fully automated enterprise. What you’re saying is, if anyone thinks they can do that today or tomorrow, then they are bound to be disillusioned?

A: The technology at the moment is not ready to do that, although it might go in that direction. A fully automated enterprise? We are far away from that.



Dig deeper: A blueprint for the new automation mindset



About the author

Kim Davis
Staff
Kim Davis is currently editor at large at MarTech. Born in London, but a New Yorker for almost three decades, Kim started covering enterprise software ten years ago. His experience encompasses SaaS for the enterprise, digital ad data-driven urban planning, and applications of SaaS, digital technology, and data in the marketing space. He first wrote about marketing technology as editor of Haymarket’s The Hub, a dedicated marketing tech website, which subsequently became a channel on the established direct marketing brand DMN. Kim joined DMN proper in 2016, as a senior editor, becoming Executive Editor, then Editor-in-Chief, a position he held until January 2020. Shortly thereafter he joined Third Door Media as Editorial Director at MarTech.

Kim was Associate Editor at a New York Times hyper-local news site, The Local: East Village, and has previously worked as an editor of an academic publication, and as a music journalist. He has written hundreds of New York restaurant reviews for a personal blog, and has been an occasional guest contributor to Eater.
