About This Episode
In this thought-provoking episode of The Future of Work® Podcast, Frank Cottle sits down with Ram Srinivasan — Executive Management Consultant, AI strategist, and AI Adoption Leader at JLL. A graduate of MIT and author of The Conscious Machine, Ram brings a rare mix of deep technical expertise, financial acumen, and human-first philosophy.
Together, they unpack the truth about AI in the workplace — what it is, what it isn’t, and what it could become. From organizational transformation to generational collaboration, this episode explores why empathy, creativity, and authenticity matter more than ever in an AI-powered world. If you care about the future of work, this conversation is your roadmap to navigating it with intelligence and intention.
What You’ll Learn
- Why AI should amplify—not replace—human potential.
- The dangers of “averaging out” uniqueness in the workplace.
- What organizations get wrong about AI strategy.
- How generational wisdom and Gen Z innovation can coexist.
- Why adaptability and critical thinking are the most valuable skills of the future.
- The difference between knowledge, intelligence, and wisdom in an AI-driven world.
- Why mediocrity is no longer an option—and how to stand out.
About Ram Srinivasan
Ram Srinivasan is an Executive Management Consultant and AI Adoption Leader at JLL, where he guides Fortune 500 companies through digital transformation and AI integration to shape the future of work. An MIT alumnus with over 15 certifications in AI, machine learning, and data science, Ram combines deep technological expertise with financial and governance acumen as a Chartered Accountant and Chartered Secretary.
He’s the author of The Conscious Machine and the forthcoming The Exponential Human, both exploring how AI can amplify human potential. A recognized thought leader featured in Harvard Business Review and Business Insider, Ram champions human-centered innovation, advocating for technology that aligns with organizational values and enhances human flourishing.
Transcript
Ram Srinivasan [00:00:00] The third myth that I see typically is this idea that you need to be an AI expert of some kind, an engineer, a coder, an AI or Python software engineer, something like that. It’s not that you need to be an engineer. What you need is to be a director. And you, like a movie director, have multiple different skills at your disposal. You right now have 1,000 PhDs in your pocket, if I may paraphrase a Steve Jobs quote.
Frank Cottle [00:00:27] Welcome to the Future of Work podcast. Really excited to have you here. And jeez, wow, I can’t tell you how impressed I am with your background. I know people have heard your background in the bio, but managing director at JLL, best-selling author, 15 certificates behind your doctorate at MIT, my goodness. I think you’re probably the smartest person, or at least the best educated, that I’ve ever talked to. I’m really going to enjoy our conversation. You have a philosophy that transformation has to be built on human-centered principles, that technology should amplify human potential rather than replace it. How do you recommend companies maintain their focus, their business focus, during this transformation?
Ram Srinivasan [00:01:22] Awesome. Great question, Frank, and I like that we’re starting off with the difficult questions first, so thank you for that, and lovely to be here speaking with your listeners. When we think about technology, and in particular the current wave of technology, AI, it’s often painted as this zero-sum game where it’s human versus AI, and AI is taking more and more of what humans do, and there’s this kind of limbic response to our AI brethren, so to speak. Whereas what I think is going to happen is very different, which is that AI will amplify human potential, AI will amplify human creativity, AI will amplify the elements that we hold dear. But at the same time, the risk is that it amplifies things that we don’t want it to amplify, things like, you know, malpractice, bad actors, those kinds of things. And this is where it’s very important to carefully implement what we’re doing with technology, and this is why human-centric thinking is essential when it comes to implementing technology. Technology should be used to augment human potential and not to replace it. And so long as we are able to do that with technology, I think we are in for an abundant future. I view this as an inflection point for our civilization, our species. Over the next decade, we are likely to see a century’s worth of innovation, so long as we think through a human-centric lens.
Frank Cottle [00:03:02] Well, I’m going to throw an argument in there. I’m going to misquote somebody, but the essence of the quote is that every time we make a technological advance, and this maybe could have started in the Industrial Revolution, we amputate a certain part of our human capacity. And I’ll back that up with, again, I won’t quote a source because it’s just something I read the other day, so I hope it wasn’t misinformation, but on a global basis, our human IQ is actually going down. Let’s take the amputation idea. I would take the simplest example, the automobile. Great invention. Man, it gave us great mobility, all sorts of things, all sorts of breakthroughs. But we’ve reduced our capacity physically to walk. So there is an amputation of a capacity overall. Apply that concept, or think about it, relative to our excitement today over the next breakthrough, and my personal excitement about AI. What will we be amputating while we make this next generative growth? We may not be replacing jobs, but we may be replacing some part of our own capacity as these other things do the job for us. The data for decision-making, which people used to work through in their heads, is now being handled by machines a billion times over, much faster and better. But are we losing our own capacity, literally back to the fight-or-flight capabilities that we as a species have grown from?
Ram Srinivasan [00:04:57] Yeah, excellent question. The way I would frame this in the technology context, Frank, is something like this, right? Did spell check make us better spellers? Likely, no. And what’s the equivalent thereof with respect to AI? What is that human aspect, human element, human feature that we are giving up if we introduce AI? A couple of risks that I see come from what’s happening with modern technology. I’ll take software engineering, software coding, as an example. If you’re writing a piece of code on Google Colab or any platform right now that has AI-assisted coding, you start writing a piece of code and the AI auto-completes parts of it for you. Which is amazing, it’s a productivity boost for you. But at the same time, it’s pulling you down to the average of what the AI thinks you want.
Frank Cottle [00:05:57] Make that point, actually.
Ram Srinivasan [00:06:00] I think that we need to be very watchful of how we use this technology. If not, we all begin to resemble each other and become exact replicas of each other. We are the average, whereas the value and the spirit of humanity come from the uniqueness that each one of us brings to the table. I was in fact in a conversation earlier today on this subject, and my view is that this year and the next decade are going to be the decade of authenticity. And I don’t use that term with a glib marketing style or intent. We need to really think about what our experience is, our authentic experience, and figure out a way to convey it. The simplest example of this is to watch your LinkedIn feed. Everyone’s LinkedIn posts look like each other now. Everyone is AI-enabled and is writing using AI. And those of us who are familiar with the model outputs can actually see that this post was written by ChatGPT or Claude or one of those models. You will see repeated words like “realm” and “delve,” and sentences that all begin the same way, and so on and so forth.
Frank Cottle [00:07:14] Absolutely. At Allwork, one of the primary jobs of our two editors is to ensure that we are acting as true journalists, covering the who, what, when, where, why, and how, as opposed to just regurgitating the content from 500 consolidated articles. There are certain words, phrases, things that you can spot instantly that are, as you say, taking everybody to the same average. You know, look at Elon Musk and his Neuralink, and I know there are implants over in the Scandinavian countries that allow people to use their hand instead of their iPhone for buying things through vending machines, train tickets and things of that nature, a little chip that you can insert. As we advance AI, as we look at our physical augmentation, the chip insert being the simplest variant of that, I’m reminded of, and this is really going to date me, an old, old Star Trek episode about the Borg. You remember the Borg? How the Borg were a collective, and the way they kept connected was through their implants. One thought, one way of doing things, et cetera. And I won’t say I fear this with AI, but in a dystopian sense, that’s where AI could take us.
Ram Srinivasan [00:09:02] I would say we must fear it. And Frank, with respect, I’m a complete Trekkie and a nerd on that front.
Frank Cottle [00:09:09] OK.
Ram Srinivasan [00:09:11] The Borg’s strength was the fact that they were a collective intelligence, and they could act in unison across their entire civilization and species. Like a hive, like a beehive. But that also became their greatest weakness. And we see this everywhere in unnatural environments, like monocrop agriculture, for example. It is amazing because we are able to generate a tremendous amount of high-yield crops and so on and so forth. But over time, soil degrades, ecosystems crumble, and output erodes. So we need the equivalent of multi-crop agriculture with our intelligences. Yes, we can have ubiquitous intelligence, but we cannot have an absence of uniqueness within that. And this is where I think we need a combination of multiple different styles of thinking, thinking that’s outside of the box that AI is trained on. AI is kind of restricted to the training data that it has. It does not have the personal experience, Frank, that you have or I have or any one of your listeners has. That’s the unique element that they bring to the table. And if we amplify that, that’s where the value is.
Frank Cottle [00:10:28] Yeah, that’s very true. There’s nothing more valuable than a singularly unique thought that can be executed. Yes. Okay. And that’s becoming more and more rare, as you were referencing with LinkedIn profiles and LinkedIn posts and things of that nature, to where people aren’t necessarily striving for the original thought so much as to gain market share across the mass of thought in business. And I think that’s interesting. The weakness of the Borg was an intraspecies weakness, not an interspecies weakness. It’s not part of our problem as we know it today. It may be at some point in the future if we discover life on other worlds. But today, our problem would be doing this to ourselves, for the benefit of ourselves, without recognizing it, and then losing ourselves. And there’s no other species outside of ourselves to point out the strengths and the weaknesses. So it’s kind of interesting. When you apply this, in your view, what do you think are the most common misconceptions about AI and the implementation of AI strategies that companies have? We’re here to talk about business a lot and the future of work. What are the misconceptions that companies commonly have?
Ram Srinivasan [00:12:02] The three biggest misconceptions, Frank, that I see are, one, that AI is hype, it’s hyperbole, it’s hypothetical, it’s some distant future, it doesn’t exist yet. This is, I would say, 95% of what people think, and they kind of conflate stock market hype and hype on social media, by folks who probably don’t really know what they’re saying, with what the reality is. And many people will point to things like the stock market bubble and those kinds of things. My response to that is, yes, there was a stock market problem, yes, there is AI hype, but we are on the internet right now. Correct. So the question is...
Frank Cottle [00:12:52] using AI somehow and don’t even know it.
Ram Srinivasan [00:12:55] Exactly. So there is business application of AI. There are so many different forms of AI. AI is an umbrella term, and many aspects of it already exist: rule-based AI, your social media feed is AI, GPS is AI, flash trading and algorithmic trading are AI, weather predictions are AI, computer vision systems are AI, self-driving cars are AI. All of those things are AI. What we are experiencing right now as a wave is generative AI: things like large language models, large object models, things that create new content from training data. That’s the current wave. And the second myth is kind of aligned with that, where people confuse this with something very specific. It is a general purpose technology. GPT, in my view, stands for general purpose technology, where this technology can be applied generally to multiple different aspects of work. This is not like SaaS, not like Workday, Oracle, or ERP systems that an enterprise implements. This is more like Excel, Word, PowerPoint, which in your hands can create something very unique. Each one of us has access to those tools, and we produce very unique output from them that probably even the makers at Microsoft did not envision. So similarly, with these language models, these general purpose technologies that we have as generative AI, the output is in our hands. I would apply them very, very generally, as opposed to very specifically. And then the third myth that I see typically is this idea that you need to be an AI expert of some kind, an engineer, a coder, an AI or Python software engineer, something like that. It’s not that you need to be an engineer. What you need is to be a director. And you, like a movie director, have multiple different skills at your disposal. You right now have 1,000 PhDs in your pocket, if I may paraphrase a Steve Jobs quote. The question is, how do you use them? We spoke about IQ. These models now are at an average IQ higher than 120, in some cases even more than 130. That’s genius-level IQ, higher than 98% of the population.
Frank Cottle [00:15:14] That’d be higher than me, then.
Ram Srinivasan [00:15:18] This is where we are at. So the question is, how do we use this high IQ capacity that all of us have? And if you want to really touch the future, I would say you can touch it right now. Just use these models.
Frank Cottle [00:15:31] It’s interesting that you talk about being a director, and I’ll use a specific example. We just hired a young gentleman. We had an open position for a person who creates videos for our company. And the reason we hired him, after the interview and some analysis looking at his work product, was that he had a very good creative mind from a marketing point of view, as one would expect, and he had the core video skills, as one would expect, but he also had a full AI bot for text. He had a full AI bot under his command for graphics. He had a separate AI bot for the editing. He had a team. We hired him, but we actually hired five people. So he came totally augmented, if you will, with a technology package. In the old days, we could have said, well, he was a spreadsheet expert, so he came augmented with a spreadsheet, to your point. Today, he’s coming augmented with each of the skills necessary for him to choreograph and direct the creation of these videos, as opposed to just sitting at a desk hacking stuff out himself. And that was really why we chose him over other candidates. Do you see that as the way people will come in the future, where I don’t just apply for a job, I apply for a job with my skill set, and my skill set happens to include two or three, I’ll say, work bots that I bring with me?
Ram Srinivasan [00:17:32] Yes, it’s a good analogy and it works. The caveat that I would put out there, Frank, is that those bots will change, and how they change, what the elements of your...
Frank Cottle [00:17:45] Well, that’s where the creative comes from the individual.
Ram Srinivasan [00:17:48] Exactly.
Frank Cottle [00:17:48] He spends all his time in the creative and doesn’t have to, he can direct the other activities as supposed to have to manually do them.
Ram Srinivasan [00:17:59] Exactly. If we extend the analogy without breaking it, it’s like Quentin Tarantino’s first movie versus his last one. I’m sure the methods that he used to deliver the output are very different, whereas the vision that he has probably remains consistent, and that ability of storytelling probably remains constant. There’s a signature to a Tarantino movie. So we need to understand what our signature is. What’s that unique element that we bring? The tools that we use will keep changing and evolving. We can’t fear them. We have to embrace them. And if we do that, we can now be exponential. And this is part of how I think about the future mindset.
Frank Cottle [00:18:41] If we don’t fear them but embrace them, and you’ve seen a lot of this in your consulting and in your activities overall, what transformation strategies toward the use of this technology have you seen have the most success and the greatest failures? What are the examples of the successes and the failures that you’ve seen?
Ram Srinivasan [00:19:07] I would say two broad categories. Category one is kind of the shiny object syndrome. You have generative AI as a powerful tool, and the temptation is to say, what can I use this tool to solve? Whereas we need to think about it the other way around, where we start with the business problem first and then figure out the toolkit that’s necessary to solve that business problem. So, you know, the whole analogy of if you have a hammer, every problem looks like a nail. It’s something that we are seeing right now. I think it’s a momentary, if you may call it that, corporate executive obsession with some of these ideas, and it’s fading as the fluency and literacy with this topic go up. Decision making is getting better, and we’re getting the right kind of questions from our executive partners across organizations. So that’s one area where I see both success and failure. If you get too specific with technology first, before the business problem, failure. If you think business problem first and then technology, you’ll likely meet with success. The second area, I would say, is thinking exclusively about technology. This is not a technology transformation. If ever there was a point to say that this is a human-centric transformation, this is it: this is change management, this is mindset shift. Those people elements of the problem are magnified, counter-intuitively, because of AI. And if you’re able to think people first and mindset first, the implementation and adoption rates get much, much higher. In fact, there are surveys showing that resistance is so high that employees are sabotaging these projects. So that element is coming into play as well, because they just don’t see the value, or they fear being replaced by the technology. So bringing them along for the journey, people first, change management, human-centric thinking, those are elements that we should magnify as part of this transformative change.
Frank Cottle [00:21:19] Well, you know, it’s interesting as you think about it. And one of the things to work into all this is, and again, I’ll use our company as an example, we have five different generations of team members in our company, everything from Gen Z to aging baby boomers. So do you see that as a limitation, or do you see the experience of the baby boomer combined with the native digital environment of Gen Z as a strength? Or do you think, with baby boomers, we need to wait for them to retire before Gen Z can completely implement these things, because maybe people like me are holding things back that could have gone much faster? Where do you see the generational piece? Because it’s the future of work, and the future of work spans multiple generations now. Where do you see that playing against the technology?
Ram Srinivasan [00:22:22] Here’s my perspective, and I want to ensure I explain what I’m saying in a deep manner here, that I’m not just giving a platitudinal response. We need all of these generations to work together on this particular transformative shift, and why I say that is the following, right? Imagine right now you go to any one of the AI models and you ask a question. You go to deep research from Perplexity, Google, OpenAI, or any of these tools, and you ask a question. It gives you a very well thought through response with dozens of sources. It looks amazing. It looks better than what you may have written. And your temptation is to say, let me use this. Except if, Frank, you’re an expert and you look at that and go, hey, these three sources, are these real? These 27 are real, these other three are not. I know this because I’m an expert. So we are seeing that experts are actually able to use these models to upskill themselves faster than non-experts. Because if I’m not an expert, I need to fact-check each one of those links, and I’ll have spent as much time as I would have spent writing it myself. So if you have those decades of experience, you can use these models faster.
Frank Cottle [00:23:41] It’s interesting. We ran some tests at Allwork on some of our creative stuff, articles, topics, and things of that nature, and we said, okay, here’s what we want to do. We had two or three different generations of individuals querying for the answer. The queries were totally different for the same problem, and the answers were totally different for the same problem. Depending on who asked the question and how they asked it, they got totally different answers. We looked at that and said, well, how can we blend this? And we decided we really couldn’t; we had to take a perspective overall and blend our queries, not our answers.
Ram Srinivasan [00:24:34] Exactly. So suffice it to say that there is place for experience. In fact, probably that place has heightened now because how do you evaluate the output from these models? And the second, if I were to offer a perspective on the other side and the younger generations coming into the workplace, I always recommend you should have a Gen Z advisory board because they are early adopters of technologies sometimes that we are not even aware of. Like for example Uh, one of our younger cohort introduced me to an app called snack. It’s a AI dating app where you don’t go on the date, but your AI avatar goes on the data with somebody else’s AI avatar. Now, you know, this is interesting. It’s, it’s probably something that’s a passing fad, et cetera. But what’s interesting from my perspective, there is, could this be part of how teams get down selected in the future, like, do I want to work with Frank as part of Frank’s team? I can spend a year trying to figure out, or my AI can go on a hypothetical project with your AI and figure that out and tell me. So there are ways in which you could bring some of these ideas into the workplace and your Gen Z advisory board becomes that conduit. So the best case is where you have a Gen Z mindset in somebody who’s experienced and that experienced person somehow is able to mentor a younger person and bring them up to speed on their experience. to this. I think there’s reverse mentoring and mentoring both, and there’s place for everything.
Frank Cottle [00:26:03] You know, using your Snack example, in human resources, the largest human resources companies, as I’m sure you’re aware, are using AI in the sifting of resumes and the interviewing process now. And it’s gone from an AI system just kicking some things out to an AI system doing some analysis, doing background checks, doing a whole variety of things, and actually beginning to do interactive interviews. Yes. Well, there’s your Snack, if you will. But you have a human on one side, maybe, because we know that about 70% of all resumes are now written with AI, and then you’ve got the AI on the other side. They’re interacting back and forth. The jury is still out as to whether they’re actually getting better candidates or not. It’s faster, though. So that in itself is saving large companies a lot of effort, and a lot of anxiety for the potential employees, too. Because one of the problems when you apply for a job is waiting to hear back, waiting to get a reaction, waiting to get an interview. If all that can happen in 24 hours, that’s huge.
Ram Srinivasan [00:27:33] It is huge. It is a huge, amazing case study between Salesforce and Deco, where Deco implemented agent force to do exactly what you’re describing, Frank, I think it’s a step change in how we think about work process and work process design. It also raises some flags like the example that I often share is, if I write an email using AI, and then Frank, you summarize that email using AI. and then you use AI to write the email back to me which I use through my AI to summarize what’s the point of the email right my AI could have spoken with your AI and maybe the email is not You
Frank Cottle [00:28:14] Maybe email beautiful email
Ram Srinivasan [00:28:15] We need to really rethink the entire process at that point.
Frank Cottle [00:28:21] Seeing that, what emerging trends of adoption, just overall, do you see that organizations aren’t prepared for? Not the ones they are prepared for, because they’re using it, they’re flirting with it, they’re embedding it; I know we do in all of our processes and such. And again, I’ll use our own company: we had 25% growth last year, and we did it with four fewer employees than we had the year before. It was because we were able to improve the processes within our technologies, many of which were AI-supported. Yes. Okay. So what do you think organizations aren’t prepared for? What’s going to hit them and knock them down? What’s going to really surprise them that they don’t expect?
Ram Srinivasan [00:29:15] So here’s the thing, in my view, what organizations, many executives are not prepared for is intelligence abundance. We are not thinking of businesses, processes, and our work in a world where intelligence is ubiquitous and virtually free. This is the, we haven’t thought about how we work in that type of world. So if I were to ask one question of any executives or boards, that’s the question that I would ask. How would you design your work processes if intelligence were free?
Frank Cottle [00:29:47] Okay, let me stop you there. I wanna define the difference between intelligence and knowledge in that regard. Could I would say knowledge, the access to all of the historical knowledge on any subject of the data to support it is free. Yeah, but not the intelligence.
Ram Srinivasan [00:30:07] I would say if we were to differentiate it on that front, where you’re going, Frank, is what I would underscore as being wisdom as opposed to knowledge. Knowledge is simply repository. Intelligence is making connections between the elements of that repository. You’re using
Frank Cottle [00:30:23] You’re using reverse it though, then I would. Okay, I’ll buy that.
Ram Srinivasan [00:30:27] And the next stage beyond intelligence is, okay, I have the connections. How do I use this to benefit society, benefit my business, benefit my people, benefit bottom line, all of those good things? That in my view is wisdom. And that element, that wisdom element is where humans come in and our thinking should go. So where I’m going with this is if we say right now the temptation with the new technology that we have. previous generations of technology is you have 100 workers, you have output of 1000. If I can get output of thousand with two workers, wow, I have saved a lot of money. Whereas, the way we should be thinking is if I had 100 workers and I was generating 1000 output, now with those 100 workers I can generate 100,000 output. That’s where we should going with this. as opposed to cost reduction, we should be thinking upside scaling. And that’s the opportunity for all of us.
Frank Cottle [00:31:28] Isn’t there, ultimately, let’s say upside scaling is the goal, ultimately your upside scaling gets to such capacity that it diminishes the value of the scale itself because the market. respect the product with the same value profile. So at a point, you said information or knowledge is free. At a point, the output of all that scaling becomes free also. And how does it support those 100 people doing that 100,000 scaling?
Ram Srinivasan [00:32:11] Yes, so I would just underscore that right now what is being scaled is, if I may use the word, mediocre. It’s kind of the average. And mediocrity is dead so much that an AI model, machine learning model, any pattern recognition system will generate that type of output. Exceptional will always hold value. and this current revolution is moving the needle on that exception. So, if you can generate exceptional output at scale, now that generates a minus value.
Frank Cottle [00:32:45] So it’ll become quality over quantity will be important, which again diminishes the value of scale in my mind, because quality trumps, not to use that word, but quality wins over quantity in that regard. Big dilemma. It is, it is. Big changes coming in that regarding. We’re running a long time, so I’ll ask one last question of you. What skills do you believe in all of this change are gonna be the most valuable for professionals as we prepare ourselves for this transformation into the future of work and the changing workplace?
Ram Srinivasan [00:33:26] I think this is a top question. We get this question from all of our clients as well. The top three elements that I would say are the ability to learn and continuously change. So wave after wave of change is gonna come our way. Today it’s agent tech AI, tomorrow it might be something else. We’ll go on. Okay.
Frank Cottle [00:33:44] Okay, so we start with Darwinism. those that learn and become the most adaptable.
Ram Srinivasan [00:33:52] Yes, absolutely. Adaptability quotient is probably going to be number one. Second, I would say, is critical thinking. And I know this is probably difficult for us to kind of gage at this point of time, but it’s like what we discussed, right? Did spell check make us better spellers? What abilities are we losing? What elements are we giving away to AI? And probably at some point, if we go down the path of more and more intelligent models. are we giving away critical thinking? And that’s why I think that critical thinking becomes a super important skill.
Frank Cottle [00:34:28] How do you develop critical thinking in an environment where you’re giving away a big part of the thought process to the technology?
Ram Srinivasan [00:34:36] Yeah, and this is one example that I’ll share. It’s a personal example. My mother is a double-PhD, 40-year professor, and she’s also an avid user of AI at age 74. How she is giving assignments to students has changed. So instead of giving an assignment saying, here’s a question, students go and solve for it, write me an essay, and provide me with the essay, Every student can do that using chat GPT. Now what she’s saying to them is here’s the question, prompt your AI model to generate A, B and C grade responses and tell me how A,B and C great responses are different. Why does the response get B? Why does a response get C? We are now going into that phase where we need to educate at it in a different way, where we’re teaching critical thinking to people from the get-go. This means education systems need to change, how organizations train people need to change all of those things. And I think that is happening. We are seeing that happen across a number of universities and organizations.
Frank Cottle [00:35:46] You said three things, that was two.
Ram Srinivasan [00:35:49] Third one, from my perspective, is all of the human elements that we have spoken about. Empathy, the ability to make connections, the ability to reach into your personal experience, convey authentically and communication. I bucket all of that as the human element and I think that’s the third one. So ability to learn, be adaptable, critical thinking. the magnification of human elements, empathy, connection, personalization and personal experience and ability to be authentic.
Frank Cottle [00:36:21] Okay, I’m going to ask one last question, and this is judgmental on your part. Do you think our current education system is up to that task?
Ram Srinivasan [00:36:34] Absolutely, I am 100% confident that our education systems are up to the task. If you look at what’s happening, it is, you have large universities, but at the same time, right now, we have the ability to access knowledge at scale through all of these AI models. You have education institutions adapting. MIT, for example, offers a number of their courses through MIT OpenCourseWare for free. You have the ability to access multiple different tutors online. There’s so much knowledge out there. And I’ve continued to kind of educate myself over the last 15 years, every single day. I don’t think it’s go to university four years and come out and that’s it. It’s continuous education.
Frank Cottle [00:37:20] Well, you know, it’s funny, I fall into your mother’s category. I’m 75 and I have to spend, um, I’ll say two hours a day. Just reading and searching to keep up. Yes. If you do that, I’ve done that for 55 years of my career. Um, if you continuously do that you can keep up, if you don’t do that pretty soon, you find yourself retired.
Ram Srinivasan [00:37:46] Absolutely.
Frank Cottle [00:37:46] and behind and that sort of thing. So the key is continuously striving for knowledge, I think, as you go forward. Well, Ram, I’ve really enjoyed our conversation and the conversation we had the other day too. I think you and I go for days doing this and I really appreciate your time. It’s been very generous and I know all of the things that you’ve accomplished and are able to share with our listeners. So just thank you very much for my heart and good luck on the next book and we’ll make sure to put a link in for you because I know it’s going to be full of wonderful, wonderful knowledge to share.
Ram Srinivasan [00:38:23] Awesome. Thank you, Frank. And thank you for everyone who tuned in and listened in.
Frank Cottle [00:38:30] If it’s impacting the future of work, it’s in the Future of Work podcast by allwork.space.