The Insight Interviews

248 Tatyana Mamut - Build, Learn, and Iterate

Written by Rewire Inc. | Jan 16, 2025 2:06:38 PM

Tatyana Mamut is the Founder & CEO of Wayfound.ai, helping companies unlock ROI with AI agents. A transformative Silicon Valley leader since 2001, she has driven innovation through major tech shifts, shaping products at AWS, Salesforce, Nextdoor, Pendo, and IDEO. An award-winning visionary with several technology and design patents, Tatyana also serves on company boards and advises startups and investment funds. She holds a PhD in economic anthropology from UC Berkeley and a BA in economics from Amherst College. A Ukrainian refugee, she now lives in San Francisco with her spouse and daughters.

 

In this episode, Jason and Tatyana discuss:

  • Influence of human culture and behavior on economic realities and AI evolution
  • Generative AI systems as evolving non-carbon substrates and their impact on institutions
  • Role of responsible leadership in shaping AI-driven technological transformation
  • Wayfound’s no-code tooling enabling companies to build and manage AI agents
  • Importance of adopting generative AI strategies to meet investor and board expectations

Key Takeaways:

  • Human culture and behavior significantly shape economic realities and AI's transformative potential, emphasizing the need for leadership that embraces this paradigm.
  • The emergence of AI breakthroughs highlights the importance of building responsible platforms and experiences that redefine humanity's evolution and societal structures.
  • With billions of AI agents acting as workforce and customers, leaders must rethink traditional business models and design strategies for agent-to-agent interactions.
  • Companies like Wayfound are bridging the AI-driven future by enabling businesses to create and manage AI agents while aligning them with organizational values.
  • A solid generative AI strategy is essential for maintaining competitiveness and unlocking tangible ROI, driving value beyond operational efficiency in a rapidly evolving landscape.

 

“ROI isn’t just about short-term operational metrics—it’s about the impact on your stock price if you don’t act. The real cost of not having a solid AI strategy and starting this journey now is a potential hit to your company’s overall value.”

 - Tatyana Mamut

Tatyana Mamut - Build, Learn, and Iterate

Hello and welcome everybody to this episode of The Insight Interviews. I've got a special guest today. Her name is Tatyana Mamut. And who the heck is Tatyana? Well, let me tell you. Tatyana is the Founder and CEO of Wayfound. Wayfound helps companies harness the power of AI technologies and apply them practically to see real returns quickly. Something I'm always interested in, when I hire a partner or a vendor or anything, is what the ROI is going to be. And so, we're going to talk about that. Well, here's what else. Tatyana has done all kinds of product executive type of activities in places that you've probably heard of, like, I don't know, AWS, Salesforce, etc. She also has a PhD in Economic Anthropology from UC Berkeley, which is something I also want to ask you about, Tatyana. She's also a refugee from Ukraine, and we are very honored to have her here today. Welcome to the show, Tatyana.

Thank you.

We open our show with the same question, and we've done this over 250 times. And so, I'm gonna ask you this question to get us facing in a maybe a different direction than we were prerecording, which is, as you and I engage one another today, Tatyana, who or what strikes you that you're particularly grateful for?

Oh my gosh, I am grateful for every aspect of my life. Honestly, even the things that at the time were traumatic, and I mean truly traumatic, turned out to be incredible learning opportunities that then came back to help me later in life. The thing I am most grateful for right now is the opportunity to not be in Ukraine. And I was born in Kyiv, and so my life would have been very, very different had I not immigrated to the United States with my family when I was young. Every day I just think about how amazingly grateful we should all be that we live in the United States. No matter what we think. This is one of the reasons I did anthropology: traveling around the world and seeing what others have to deal with and actually living with them, living in, you know, yurts, living in small villages in Ethiopia, carrying water every day with the children, making food over an open fire, sleeping on the floor with hundreds of flies. All those things are reality for a lot of people in this world, and it's not even something we think about when we're in the United States. How lucky we are and how grateful we should all be for this amazing country.

Thank you for that. I've had the privilege of doing some traveling to what may commonly be referred to as third world countries, and yeah, the things that we take for granted, even today in the world that we live in, in 2024, with all the things that it's easy to say are not so great about the country that we live in, boy, are we lucky, and may we never take it for granted. So yeah, thank you for that. Thank you for that answer, and just the real-life piece of that answer. There's a lot of people that know you. You have over 20,000 followers on LinkedIn, and that's just from a business standpoint, but there may be a few of our listeners that don't know you. And so, would you mind just giving us a minute or two of your story?

Sure. So, as I mentioned, my story starts in Ukraine. I am the child of a very long line of engineers. So, my grandmother was the head engineer of several very large Soviet factories. She created fiberglass manufacturing in the Soviet Union that actually worked. She's still alive. She's 98, almost 99, and incredibly robust. My mother's an engineer, my father's an engineer, my brother's an engineer, my aunt's an engineer. So, I grew up around a very technically minded family, a very mathematically minded family, but I was always interested in the human side of what technology means for humanity. And so, I was the black sheep in my family, because I liked social sciences, not so much math or engineering, although I did do well in economics. Actually, my first degree is in economics, microeconomics, and econometrics, so I got some of the highest grades in econometrics and statistics. And so, what I've always been fascinated about is human evolution, and specifically how humans imagine the future, and what that means for what we do as a society, how we evolve, how we make decisions, and how that shows up in reality. So, what's going on in the human realm is so much more important than what's going on in the physics and mathematics realm, because physics and mathematics are tools of the human mind. If humans don't have the cultural framework to actually utilize those tools or value those disciplines, then it doesn't matter how much math you know. Nothing's going to happen with it. You're just an academic sitting away, creating math theorems, right? But it's the culture, it's the human aspect that needs to actually value those things, value scientific breakthroughs, value the physical world in order to do something about it. So, all of technological progress is really about human culture, not about scientific discovery, because the scientific discovery is an outcome of human culture, it's not the cause.

I'm pausing because I'm just bathing in what you just said. And now I know I have you on the show. That was so good. You probably saw me feverishly taking notes, because I'm thinking of, one, more questions that I want to ask you, and two, the title of the show; there's so many different directions that we could go. Very rarely do I see a person who embodies what I would simply and maybe naively call intelligent and smart. You come from an engineering family, a mathematics background, you've gone through all the economics, you know, you have degrees in things that I can hardly even pronounce, but yet, what you just said was the human side of all that, and that's the important part. So, very rare is it, at least that I see, somebody that is that smart, that intelligent, gone way down that path and said, okay, that's great, but it's only a tool for us to do things from a human standpoint. And so, I guess I just want to ask you to expand on that even more, because I feel like there's some depth there where you're going with that thought process.

Yeah, well, I've been on this journey my whole life. As I have gained more wisdom, I would say, because I started out as a hardcore neoclassical economist. Like, I studied and worked with a student of Jeffrey Sachs, and Joseph Stiglitz was, you know, one of my heroes at Amherst College. You know, he studied with the same professors I studied with. And what I realized in actually applying the econometrics and the economic model to human behavior is that they were desperately and horribly wrong. So, all of the work that I did in college at that time, the very early Russian economic transition, I went to college in 1992, so between 1992 and 1996, I was building models to explain what was going to happen to the Russian economy, because everybody thought it's going to open up and there's going to be Capitalism and Freedom. Milton Friedman, right? It was all that kind of stuff. Jeffrey Sachs was, you know, brainwashing us with stuff, as he is today. We can get into that another time. And then it turned out to be completely incorrect. So, the Russian economy, if people don't remember, collapsed in 1998 based on all of the models that I had developed, and all the policies that were based on those models that showed that everything was going to go well if we just freed humans, right? To do what they wanted to do as rational economic actors. It didn't turn out that way at all, in fact. And in fact, those policies opened the door to Vladimir Putin, because after the economic collapse of 1998, that is when Russians started to say, oh, we need an autocrat back in power. That was the door to what is happening right now in the world with Vladimir Putin. So, what I realized at that point was culture really matters. The mental models that people are using, how humans are interacting with the natural world, with the economic realities, with economic policies matters a lot more than the actual policy. Or than the scientific breakthroughs that we have. So, we can have a policy, or we can have a particular scientific breakthrough, but how we actually work with it in our society is all about human culture. It's all about humanity. It's all about how humans interpret what this means and what they want to do with it. And so, on my journey, that insight led to me wanting to kind of leave my first degree kind of in the background, build on it and study anthropology to study what are the different ways that human cultures and mental models, right? Actually make sense of the world, and what are possibilities for humanity moving forward. So, my dissertation was really about, it was about Russia. Again, it was a follow up of all the work I did in undergrad, but it was really taking an anthropological approach to the same questions. And there were a lot more answers, and it was a lot more correct, if you took an anthropological approach to see what was actually going to start to happen in that society and culture. But then, you know, you say, okay, here's my understanding of the importance of culture and humanity and how humans interpret economics and science in terms of shaping the world. Am I just going to talk about that or am I gonna do something?

Yeah. And?

So, what I don't like is that there are a lot of people who think that they're really smart and have great analysis, and they sit in an ivory tower, and they wag their fingers at other people, and they tell them what they should be doing. Well, you know what? If you know what other people should be doing, you go do it.

Yeah, right on.

And that is something I very strongly believe. Nobody has a right to tell other people what to do, whether you are a journalist, a professor, a politician or an investor, unless you yourself are doing it. Not you've done something 20 years ago, sat on a rocket ship that somebody else founded, and now you get to pontificate. And I'm looking at a lot of investors who may be watching the show, right? If you think you know what other people should be doing, you have a responsibility to go do it. And so that is something that I very much believe, is that those people who think they have insight about the world need to go start companies and organizations right now, right? And walk their talk. Otherwise, they need to shut up.

Oh, man. Tatyana, okay, so take me from that sense that you have. I mean, obviously you're convicted around that type of idea, so you wanted to do something, and then that led you to be a product executive of some of these companies, that then led you to be on some boards. Like, you were not only feeling the way that you just said that you felt, but it sounds like you were walking your own talk. Like, okay, I'm gonna go do those things. So, continue that story up until where we are now.

Yeah, that's right. So, I did very well in grad school. I got a bunch of very prestigious fellowships, went on the job market, did a lot of job talks, and then I kind of looked at myself and I said, am I really gonna be wagging my fingers and critiquing other people for what they're doing wrong for the rest of my life, and making my career based on writing papers, critiquing other people's work? No. I'm not going to do that. If I think I know how the world should work better than others, I should go prove that.


Okay, yeah.


Right? That's what I need to do. And so, I took a job at IDEO. I was very fortunate to start working at IDEO in 2007. I was the first actual PhD in anthropology that they hired, and I did a lot of design projects all over the world. You know, the iPhone came out in 2007, as well. So now all of a sudden, a lot of the IDEO clients wanted to do digital design projects. And I raised my hand, and I said, I don't know anything about mobile phones, but nobody else does either, so, I'm sure I can figure it out. So, I started to figure it out, and I handled the big projects and the relationships for companies like Visa and Genentech and Bridgewater Associates, so, you know, pretty intense clients, as well as the Bill and Melinda Gates Foundation. So, I got to travel all over the world creating new digital product designs for people. And then I went to Salesforce, and again, got in there and said, look, you know, we need to really think about product strategy. I was hired into this very weird role, but as soon as I got there, I started to see what was necessary in terms of product strategy, bringing all the different components of the platform together. That led me to be selected to lead the Lightning Experience redesign, which went very, very, very well for Salesforce, if you see how the stock price increased after 2016.

Yep.

Then I was told to go help build the IoT Cloud based on that success. Then the top executive who was leading IoT Cloud, who reported directly to Marc Benioff, went to AWS to report to Andy Jassy. He took me with him because, again, he knew that I could actually do things, right? I can actually build products pretty quickly. And then that sort of just kept going. I was asked to serve on some boards. I was recruited to be the Chief Product Officer at Nextdoor to kind of turn it around for the 2020 election at that time, but what happened in 2020, of course, was COVID and Black Lives Matter, then the election, so it was a lot of that stuff. And so, my experience then took me to this incredible moment in November of 2022, when I think all of us experienced ChatGPT. And I had been working with AI since Salesforce, since 2014, because we were doing, like, lead scoring and all that kind of stuff. And I do remember Marc Benioff coming back from a meeting, and he came into an executive meeting that I was in, and he was like, I was at Stanford last night, and these researchers got a computer system to recognize a cat. That's what he said. And he was like, now, computers can recognize cats without training them on what a cat is. And we were all like, you do know that we work in enterprise software, right? I'm not really sure how cat videos on YouTube are really gonna help us with this strategy, and it was 40 minutes of talking about this cat, right? But it was really like the transformer architecture that he was talking about at the time.

Sure, sure.

Now I understand what he was talking about. So, this incredible moment in 2022 where we could actually interact with a completely new type of system that was not deterministic, where nobody had programmed it what to say, but now was actually generating pretty creative responses.

Yeah.

And I said, look, I need to work on this. This is absolutely going to be the next major evolution in human history, and whatever I'm doing right now, I need to stop doing that, and I need to work on this full time and figure this out and build something. Build something myself to help take us and help take humanity into the next turn of our evolution by applying this scientific discovery, right? This discovery, this engineering discovery, the scientific discovery, could go a lot of different ways, and what is going to determine how it goes is what leaders start building which experiences that will create the institutions and the foundations for what is possible for the future. So, you can take this fundamental scientific breakthrough, and you can build, for example, what I'm building, Agent Building Platforms and Agent Management Platforms in lots of different ways that are possible.
Each one of those ways has a different implication for what our future is going to look like, both at work and in society.

Or, I want to talk about Wayfound and what you're doing now, because we're not even in the fourth quarter of 2024, and you're talking about what happened in late 2022. It wasn't that long ago, and here you are now. You founded an AI company. I want to talk about that. But what you just touched on, the human beings and the leaders that are working in this field that now we know as AI, boy, they're going to determine how we march forward in this, from a global economic standpoint, from a geopolitical standpoint and everything else. Not that I want to spend our whole conversation in, you know, the good, the bad and the ugly of AI, but the listeners of our podcast, I think they're just getting hit with a lot of different information about AI. You were literally in the epicenter of it.
You're in the middle of it. What is it, and again, I hesitate to ask this, because I don't want to open up a three-hour Pandora's box here, but what is it that you would like our listeners, who are leaders of organizations, to know about AI? As you fly over in the midst of this conversation, and it is going into their ears, what would you like them to know about AI at this point?

So, the first thing about this particular version of AI, which is generative AI and not predictive AI, is that these are not software systems that people program. These are human brains that evolve based on principles and the building blocks that they need in order to evolve. Okay? So, GPUs are one of the building blocks; transformer architecture and gradient descent, those are other building blocks that we use in order to create a system that can evolve by learning, right? Which means getting data in. Training data is not like a database. It is not a database. So, these are basically human brains that are being grown in a non-carbon-based substrate, okay?
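To make "a system that can evolve by learning" a little more concrete, here is a toy, hypothetical sketch of gradient descent, one of the building blocks named above. Nothing here is Wayfound's or any production system's code; the data, the single weight, and the learning rate are invented purely for illustration.

```python
# Toy illustration of learning by gradient descent (not any production system):
# instead of programming the rule y = 3x, we let a single weight adjust itself
# to fit training data, which is the basic mechanism behind "getting data in".

training_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # inputs x and targets y = 3x

weight = 0.0           # the model starts out knowing nothing
learning_rate = 0.02

for step in range(500):
    for x, target in training_data:
        prediction = weight * x
        error = prediction - target
        gradient = 2 * error * x            # derivative of squared error w.r.t. the weight
        weight -= learning_rate * gradient  # nudge the weight downhill

print(f"learned weight: {weight:.3f}")  # approaches 3.0 without ever being told the rule
```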

Okay.

That's what they are. And if you want to learn more, just listen to Jeffrey Hinton's talks. His lab was actually the one that did most of the foundational work on building these things, so they actually know what's going on there.

Okay.

And so this means a couple of things. So, what does this mean as a leader? One, if you are going to employ generative AI, you need to think about this technology in a completely different way. You need to think probabilistically. You need to think about what are the institutions and incentives that you're putting in place to get AI, especially if you're employing AI agents, to behave the way that you want them to behave. Again, they're kind of more like humans than they are like old-school software systems.

Sure.

You don't write if/then conditions, you don't point them at a database to retrieve data out of a database. You don't do those things anymore. So, you have to think differently about this technology. Second thing is the prediction, and Marc Benioff has made this prediction, that there are going to be a billion agents all over the world very soon. So, you have to start planning for that world. You have to start planning for not just your workforce to be AI agents, but for your customers to be AI agents.
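As a rough, hypothetical illustration of that shift from if/then conditions to probabilistic systems: in the sketch below, the deterministic version encodes behavior in rules, while the generative version states instructions and guardrails and then checks the model's output. The `call_llm` stub and the ticket-routing scenario are invented for the example and are not Wayfound's API.

```python
# Hypothetical sketch: deterministic rules versus instructing a generative agent.
# call_llm is a stand-in for any model call; it is not a real library function.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a generative model; returns a canned answer."""
    return "general_queue"

# Old style: behavior is fully specified by if/then conditions.
def route_ticket_deterministic(ticket: dict) -> str:
    if ticket["type"] == "billing":
        return "billing_queue"
    if ticket["priority"] == "high":
        return "escalation_queue"
    return "general_queue"

# Generative style: behavior is shaped by written instructions and incentives,
# and the probabilistic output is validated instead of being hard-coded.
def route_ticket_generative(ticket: dict) -> str:
    instructions = (
        "You are a support-routing agent. Route tickets to one of: "
        "billing_queue, escalation_queue, general_queue. "
        "Escalate only urgent, customer-impacting issues."
    )
    answer = call_llm(f"{instructions}\nTicket: {ticket}").strip()
    allowed = {"billing_queue", "escalation_queue", "general_queue"}
    return answer if answer in allowed else "general_queue"  # guardrail, not a rule

print(route_ticket_deterministic({"type": "billing", "priority": "low"}))
print(route_ticket_generative({"type": "billing", "priority": "low"}))
```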

What does that mean? I'm sorry to interrupt your thought pattern, but when you say your team members and your customers to be AI agents, break that down just a little bit more for me.


Yeah, so, I think a lot of people know that they employ, for example, copilots right now, they're going to be systems that, you know, if you go to the Wayfound AI website, you'll actually chat with a lead gen agent instead of an SDR. We will never hire SDRs or BDRs. So, sales, you know, kind of the inbound sales reps, we're never going to hire them, because our AI agent does all that work. As for the replacement of humans, we hear a lot about customer service agents being replaced by AI agents. That's going to continue, so more and more of the workforce is going to be AI agents. Now, most people only think about that from the internal side. But if more and more companies and humans are employing AI agents, guess who your customers are going to be? For example, if you build a procurement AI agent, now, all your vendors are going to be talking to an AI agent. So, what does that mean for your business? Your business is going to be dependent on the buying decisions of AI agents, more and more and more. Okay? Unless you can figure out a way to employ this new technology to really interact with humans in a better way and transform your fundamental customer and product experiences, right? So, the second thing you need to understand is that as you employ AI internally, your customers are doing the same things, and so you can just think about that feedback loop of what's going to happen to your business.

Thank you for that. Could you give an elementary school example of what you just said, the customer being AI? What's an example of what you just said about your business interacting with AI agents, making buying decisions? Maybe today's world or maybe tomorrow's world, what does that look like from a practical standpoint?

So, from a practical standpoint, let's just take the lead gen example of our agent. So right now, the agent on our website interacts, we believe, mostly with humans. Okay? In the very near future, actually, today, on our platform, you could build an AI agent that's a product research agent. So, anytime you want to buy a product, you can go and tell the product research agent, I want to buy a new piece of software that will help my revenue operations team do these things. AI agent, go out on the internet and search for the right product, right? And then give me a spreadsheet, a Google Sheet, of all the products that you found, what their price points are, their different features, and what do you recommend for our company? Now, our lead gen agent, even though we don't do revenue operations, we're an AI agent platform, actually one of our customers, Rev Cast, does use our lead gen agents. This is a good example. So now the Rev Cast lead gen agent will be talking to other AI agents who are doing customer research or product research. By the way, this is why they're called agents, is because they act on behalf of their principals.

Yeah.

And because they have agency or a great amount of freedom to take action in the world. That's the reason they're called agents. So, in the very near future, every lead gen situation, whether you're outbound or inbound, will most likely be talking to AI, not to humans.
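Here is a minimal, hypothetical sketch of that product research agent idea: an agent acting for a human principal that queries vendor-facing agents and hands back a spreadsheet-style comparison. The vendor agents are stubbed out, and none of this reflects Wayfound's or Rev Cast's actual implementation.

```python
# Hypothetical sketch of a "product research agent" acting for a human principal.
# It queries vendor-facing agents (stubbed here) and returns a comparison table,
# illustrating agent-to-agent buying rather than any specific platform's API.
import csv
import io

def ask_vendor_agent(vendor: str, question: str) -> dict:
    """Stand-in for chatting with a vendor's lead gen agent."""
    return {"vendor": vendor, "price": "contact sales", "notes": f"answered: {question}"}

def research_products(requirement: str, vendors: list[str]) -> str:
    """Compare vendors and return CSV the human principal can open as a spreadsheet."""
    rows = [ask_vendor_agent(v, requirement) for v in vendors]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["vendor", "price", "notes"])
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

print(research_products("software to help my revenue operations team", ["VendorA", "VendorB"]))
```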

Now, I get it. It is a little mind blowing. It's a lot mind blowing, but I can see how that would be helpful. At the end of the day, there are agents that have a ton of agency, if I just heard you right, but at the end of the day, they're working on behalf of, and report to, a human being.

Yes, the principal.

Okay, got it. So, yeah, you're teaching me new language here.

Well, so it's in economics, right? The principal-agent problem. If you sit on a board, you need to know about principal-agent problems, right? Because, you know, the executive team are the agents of the principals, who are the owners of the company. The investors, right? And you as the board, your job is to mediate between the principals and the agents.

Right on.

Anyway, that's how I think about board work.

Yeah, got it. So, that brings us to today, where, you know, you've recently founded this company Wayfound. What exactly is it that you all do and what are your plans?

We help companies traverse this chasm, between the world as it is today and the world as it's going to be two to five years from now, where AI agents are everywhere. And what they need to do is a couple of things. The first is, they need an easy way to build, learn and iterate on what does this technology mean for my business, because 95% of all generative AI projects fail on the first try. So, you are most likely going to fail.

You're gonna fail. That's part of the deal right now. Yep.

But what happens with generative AI is that success looks like an iterative loop of learning and rebuilding and iterating very quickly. So, you're literally training these little brains how to do what you want them to do, and it's more like parenting than it is like building a software system.

Sure, it's just like real life. It's way faster, though, and way better. The iteration piece of it.

Yes, exactly, yes. So, like your kid does something, you know, they're five, they do something silly, they don't know any better, you have to teach them, you have to train them. You have to give them both the knowledge and the rules that will help them get to the next level and learn. And then they're going to do something else that you're like, wow, that was really dumb. You put your finger on a hot stove, or you fell off the fence because you decided to climb the fence and jump onto the tree or whatever.

Sure.

So that is the mindset that you need to have. And so Wayfound helps people build AI agents on this technology, it gives them visibility into what they're doing, it helps them learn, because we have an AI agent manager that then assesses, monitors and gives recommendations to the human about how to make their AI agents better and then improve them until they start working well. Now, what usually happens once you start working with this technology is that you actually realize that you need a team of agents, not just one agent. You're not going to be building one agent, you're going to be building a team of agents, because that's the way to have them hallucinate less. You've probably heard about hallucinations, but breaking down their context or their content is the way to reduce hallucinations.

Yeah.

So, you're going to have a team of agents. Now you need an AI manager, because human minds just can't process how to supervise AI agents in real time, especially if there's more than a dozen of them.

Yep.

So, you need an AI manager that monitors them, that supervises them, that has the performance reviews on how they're working, the recommendations and how do they work better, right? And gives visibility to the human. So, our platform is essentially AI that manages AI and helps you get to you know the outcomes that you want, and helps your organization build trust in this new technology, because we kind of take you along this journey from failure and dashed expectations, you know, the trough of disillusionment. We help you make that trough both expected but also a lot easier to climb out of, because we have all of this, you know, no code tooling for your business users to be able to build agents, monitor them, improve them, and then have confidence in what they're doing.
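As a hedged sketch of the supervision pattern described here, the snippet below has a higher-order "manager" model review each agent's transcripts against company guidelines and report back to the human. The `call_llm` stub, the agent names, and the guideline text are all hypothetical; this is not Wayfound's actual platform code.

```python
# Hypothetical sketch of an "AI manager" supervising a team of agents: a
# higher-order model reviews each agent's recent transcripts and produces a
# report for the human principal. Not Wayfound's actual code or API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return "Simulated review: on-topic 9/10; one unsupported claim flagged."

def review_agent(agent_name: str, transcripts: list[str], guidelines: str) -> dict:
    """Ask the manager model to review one agent against the company's guidelines."""
    prompt = (
        f"You are a manager reviewing the AI agent '{agent_name}'.\n"
        f"Company guidelines: {guidelines}\n"
        "Transcripts:\n" + "\n---\n".join(transcripts) + "\n"
        "Summarize performance and suggest one concrete improvement."
    )
    return {"agent": agent_name, "review": call_llm(prompt)}

def manager_report(team: dict, guidelines: str) -> list:
    """One review per agent; the human reads the report and adjusts instructions."""
    return [review_agent(name, logs, guidelines) for name, logs in team.items()]

report = manager_report(
    {"lead_gen_agent": ["Hi, how can I help?"], "onboarding_agent": ["Welcome aboard!"]},
    guidelines="Stay factual, stay on topic, escalate pricing questions to a human.",
)
print(report)
```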

I'm gonna dive in deep with one question that could lead to 28 others, but we're gonna do one, and then we're gonna pull the lens way back again. Is an AI manager, using your words here, I don't know if this is a dumb question or not, Tatyana, but is an AI manager, AI itself, or is that a human being?

It's an AI. So, it's another model, it's a higher-order model that actually manages the AI agents, and provides reports to the human, who then monitors that and then tells the manager. So, the human is the principal. So, the human is like the board member giving direction to the management team, which are the AI managers that then tell the ICs, the individual contributors, what to do. Does that make sense?

Kind of. So, for those of you that are listening, not only will you have AI agents working for you, but you'll have managers who are also AI agents that manage your AI agents. How about that?

That's the only way that it's possible. Human brains are too slow, and our throughput is too limited to actually deal with the quantity and complexity: the quantity of content that's produced by AI agents, as well as the complexity of the actions that they're performing. And you know, in real time, like our human brains just can't keep up with it, right? We need an AI manager to do it. It's like a super brain, but then that super brain needs to be aligned with us. So, the way that we've built our manager is, right now, out of the box, it does performance reviews on the agents, it tells you which agents are working well, which ones aren't, and it suggests improvements. We just ran this experiment yesterday, actually. We took the leadership principles from Amazon versus sort of the leadership principles from IDEO, we plugged them into our manager agent, and we ran a performance review against the same AI agent, okay?

Okay.


And the outcomes were very different. So, the one that was aligned to the Amazon leadership principles really kind of focused on the leadership principles of customer obsession. Dive deep. You know, those types of things are particularly Amazonian, right?

Sure.

Particularly Amazonian. The manager that, you know, evaluated the same AI agent on the same data set based on the IDEO principles, which were about sensitivity, empathy, staying focused on the topic, and encouraging creativity, had a different spin on its assessment. So, that one really focused more on the interactions with the users and how the agent could be more sensitive, could ask more questions, could get the user's point of view more before answering, as well as staying focused on the topic, which is one of the IDEO kind of principles and brainstorming rules. You know, that it sometimes started to sway a little bit off topic, and needed to kind of do that. So, our technology and the manager will be able to listen to your instructions about how your organization in particular should be working and supervise all of the agents that it manages based on your corporate culture, based on your needs, not just a generic, out-of-the-box assessment.
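The Amazon-versus-IDEO experiment boils down to swapping the rubric the manager reviews against. A purely illustrative sketch, with paraphrased rubric strings and a stubbed model call, neither of which is Wayfound's actual manager:

```python
# Hypothetical sketch: the same agent transcript reviewed against two rubrics.
# call_llm is a stand-in for a real model call; the rubric strings are paraphrased.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return "Simulated review shaped by whichever rubric appears in the prompt."

AMAZON_STYLE = "Evaluate for customer obsession, diving deep, and delivering results."
IDEO_STYLE = "Evaluate for empathy, sensitivity, staying on topic, and encouraging creativity."

def review(transcript: str, rubric: str) -> str:
    prompt = f"Rubric: {rubric}\nTranscript: {transcript}\nWrite a short performance review."
    return call_llm(prompt)

transcript = "User: Can you help me get set up? Agent: Sure, here are the steps..."
for label, rubric in [("Amazon-style", AMAZON_STYLE), ("IDEO-style", IDEO_STYLE)]:
    print(label, "->", review(transcript, rubric))
```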

Yeah. So good, so good. Thank you for that. I'm going to ask you, what is a practical example, and due to confidentiality, you don't need to give us names of organizations, but what is a tangible example of an organization that has engaged Wayfound, and what have you done? What have been the results? Again, elementary school example of, okay, an organization engaged Wayfound, and this happened as a result of it.


So, I'll talk about one that I can talk about publicly. Acario is an AI startup, and they sell AI testing; they're an AI testing platform. In order for a customer to onboard, there's a lot of technical documentation for the onboarding process, and there's a lot of, like, Python scripts that need to be written in order to actually set up the system, and those Python scripts are particular to your organization. So, they couldn't just, like, write examples in the technical documentation. So they created an onboarding agent on Wayfound, and you can go to Acario.com and chat with our agent, that's why I mentioned it too, because anybody can try it out, and they have reduced the requests from new customers to have a human help onboard them, because the AI agent, if you just answer some questions about what kind of organization are you, what kind of computer systems do you use, what kinds of processes are you looking to employ in Acario, then the agent actually produces the right Python scripts for you, and tells you exactly how to implement them to get the onboarding done. So, this has reduced their requests for help with onboarding, as well as their customer service calls later. So, instead of solving customer service by solving customer service, think about why people are calling, and solve that problem.

Further back. Yeah, yeah.

In many cases, it's an onboarding problem. People start in an incorrect way; their setup is incorrect because they're trying to DIY it based on YouTube videos and whatever stuff they get on Reddit. And why not just build an onboarding agent? So that's a real-life example, and some real ROI that they've seen in just a few months of using a Wayfound agent.
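For readers who want to picture the pattern, here is a minimal, hypothetical sketch of an onboarding agent that collects a few answers about the customer's environment and emits a tailored setup script. It is a generic illustration only, with invented names like `ExampleCo`, not Acario's product or Wayfound's tooling.

```python
# Hypothetical sketch of an onboarding agent: collect a few answers about the
# customer's environment, then emit a tailored setup script. A generic
# illustration of the pattern, not Acario's or Wayfound's implementation.

SETUP_TEMPLATE = """\
# Auto-generated setup script (illustrative only)
import os

ORG_NAME = {org!r}
DATA_SOURCE = {source!r}

def configure():
    os.environ["ORG_NAME"] = ORG_NAME
    print(f"Configured {{ORG_NAME}} to ingest from {{DATA_SOURCE}}")

if __name__ == "__main__":
    configure()
"""

def onboarding_agent(answers: dict) -> str:
    """In a real agent an LLM would ask the questions and fill the template;
    here the answers are passed in directly to keep the sketch self-contained."""
    return SETUP_TEMPLATE.format(org=answers["org"], source=answers["data_source"])

script = onboarding_agent({"org": "ExampleCo", "data_source": "postgres://db.internal"})
print(script)
```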

Well, I'm convinced, just from the time that we spent together Tatyana, that not only is Wayfound needed and you're on the cutting edge because of one, your story, where you came from, the different education and experiences that you've had, but also kind of coming full circle to the beginning of our conversation, you have this profound passion around connecting the dots between technology and mathematics and engineering to actually the human piece of it, and making sure that all of that is there as a tool to help human beings continue to evolve and grow and develop and do better things, and I get that now.
So, as a result of that, I feel confident that people are going to want to connect with you, and I know that you've got a really big following on LinkedIn and other platforms. What's the best way for people to find you these days?

LinkedIn is definitely the best way. I post a lot of business-related content on LinkedIn. A lot of the things I'm seeing in AI, you'll get a lot of those perspectives, with research reports and all of that. We actually have commissioned a research report on the state of adoption around generative AI in companies. That is coming out this week. We just had the executive summary created last night, reviewed that, and so there's a lot of incredible insights that I will be sharing on LinkedIn very soon about how adoption is going with generative AI. And you know, TLDR, 100% of boards are asking the executive teams that we've interviewed for a generative AI strategy. And we saw this last week. There was the Goldman Sachs conference here in San Francisco, and what an attendee at the conference told me is that 50% of all the investor questions were about generative AI.

Wow.

So, investors and boards are pressuring companies to have answers to this. And remember that the ROI is not just in terms of what your short-term operational metrics are, but what is the impact on your stock price of not doing it?

Hmmmm.

That is actually the ROI most people should be looking at, not the cost cutting operational metrics. The cost of not having a good AI strategy and not starting this journey right now is a potential hit to the overall value of your company, regardless of what your cost metrics may be.

Cost savings, whatever. Yeah. So, there's an opportunity cost for not being involved. And so, if boards are asking about it, if investors are asking about it, boy, you better get on board, if you're not on board. And a great place to start if you haven't figured this out yet, listener, a great place to start is go to Tatyana's LinkedIn. Follow her, like 21,000 other people are. And by the way, I dipped into her LinkedIn earlier today, plenty of information there. And I need to tell you, Tatyana, what I really like about the way that you interact with LinkedIn is it's not very self-promotional, it's more, hey, here's information that you're going to want to know, and here's what I'm finding out along the way. And so, just a lot of value there. So, thank you for that, and thank you for being on the show today. This has been very informative for me personally, which happens from time to time on the show, and I appreciate that from a selfish standpoint, but I really think that our listeners are going to benefit from this, find value and seek you out. So, thank you for doing what you do. Thank you for your information today, and gosh, I hope our paths cross again soon, Tatyana.

Thank you so much.

Tatyana Mamut, so many insights. Just like after every episode, I have so many insights. But here's my insight today. AI is here. It's not going anywhere, and it's only gonna grow exponentially. And according to what Tatyana said at the very end there, boy, if you lead an organization and you are not thinking about, talking about and more importantly, doing something about AI in your organization, it really does sound like you'll be left behind. So, not just the cost savings aspect of it, but the cost to not being involved is very real and very there. So, that's not a scare tactic, I don't think, from her standpoint, or even, you know my insight right now, if you don't know where to start and it sounds confusing or sounds whatever, go check her out on LinkedIn. She's there, and her information is great, and it's just a great place to start. So that was my insight, is AI, it's here, and we need to use it and embrace it. But as we say at the end of every episode of The Insight Interviews, it doesn't really much matter what my insights are as the host, but what really matters, listeners, are what were your insights?

 

---

Thanks for reading. If you got any value at all from this episode, a little nugget all the way up to some big, huge insight, please do us a solid by subscribing, recommending, rating, and reviewing us on Apple Podcasts, Spotify, or Google. That stuff matters to us, and it allows us to continue interviewing more awesome people.


