Video: Quality Engineering Reimagined: Leading the Transformation of AI Based DevOps | Duration: 1:00:12 | Summary: Quality Engineering Reimagined: Leading the Transformation of AI Based DevOps | Chapters: Webinar Introduction (0:38), AI Transforming Testing (4:47), AI's Mega Disruption (7:27), Building with Customers (11:19), Managing Customer Expectations (19:20), Automation to Autonomy (23:11), Three-Step AI Transformation (33:03), Agentic System Integration (36:45), Testing Agentic Systems (37:59), Testing Autonomous Systems (43:31), Embracing AI Future (47:06), Future Skills Needed (49:09), AI Integration Challenges (50:36), Conclusion and Challenges (51:07)
Transcript for "Quality Engineering Reimagined: Leading the Transformation of AI Based DevOps": Hello, everyone, and welcome to our webinar today in partnership with OpenText and TCS. My name is Neil Perry, and I'm broadcast editor for Technology Magazine. Now today's session is called Quality Engineering Reimagined: Leading the Transformation of AI Based DevOps. Now we've all heard the big promises that AI has been making: faster processes, smarter systems, and efficiency right across the board. But here's the catch. There's always a catch. To unlock those benefits, businesses need to be moving fast with software delivery, digital transformation, and cloud adoption. That's where the real pressure kicks in. But just to give you a little bit of background, a little bit of context before we start: the global market size now for testing and QA services is huge and growing, of course. Estimates range from some $45 billion to $87 billion between 2022 and 2024. And the growth projections, as I mentioned, are also huge, expected to reach some $90 to $100 billion plus by 2030 to 2032. And regionally, North America leads the way with over 48% market share in automation testing. The North American market alone is projected to be valued at $40 billion by 2032. So developers and testers are being asked to deliver at speed, stay on top of security, and somehow juggle a whole range of disconnected tools. Now does all of that sound familiar to you? Well, in today's session, we are diving into all of that. We're gonna look at how the trends shaping enterprise tech are creating new challenges, and how AI can actually be part of the solution; most importantly, not just another buzzword. So these are the key subjects we're looking to explore today, so you can tick these off as we go if you so wish. First of all, AI and agents in quality engineering. The second one, going from automation to autonomy. And finally, validating agentic systems.
So an awful lot to do and not a lot of time to do it in over the next forty to forty-five minutes or so. Joining me today, I've got some real industry experience. I'm joined by Tal Levi Joseph, VP of product engineering at OpenText. Hello, Tal. And also joining us, Niranjan Seetharama, global sales head for quality engineering at Tata Consultancy Services, or TCS to their friends. Good to see you both. How are you? Very good, Neil. Amazing. Good to see you. Good to see you too, Tal. Thank you, Neil. We're excited to be here today. Good stuff. Well, you're very welcome. We've got a huge amount to get through, and I do have some pre-submitted audience questions as well, and there are a few spicy ones in there. I definitely wanna get to those at the end. So let's start with what I was mentioning during the introduction there about AI and agents in quality engineering. Now this is a subject that we could fill forty-five minutes with on its own, but we're gonna try and fit this into the next fifteen or so. And, Tal, if I could start with you, let's look into some insights into the market dynamics at play here and the disruptive impact of AI-driven product engineering. Where do we start on this? It is such a huge subject. Yeah. Disruption is, you know, an understatement. I think AI today is transforming software delivery and quality and testing specifically, in a particular way that really requires us to rethink, you know, how we're doing everything. And you'll hear us speaking about, like, testing reimagined. Even the broader sense of quality is changing. But if we think about how AI is changing our space specifically: we're now looking at the pace of delivery increasing. Some would say that within two years, 44 to 50% of code will be generated by AI. Yeah. So not just the pace of delivery is increasing, but tolerance for risk is decreasing. Right?
Because companies will need to govern and trust, you know, this AI code, and this is why quality and testing become an important and critical aspect of this. Now AI in general will also change the way we interact with and experience applications. You said it, Neil: going from automation. Back then, we were talking about manual testing to automated testing. Now we're talking about autonomous, agentic, but also insights and analytics. Right? The way we can look at analytics: going from descriptive analytics to predictive analytics to prescriptive analytics, and then autonomously being able to act on those analytics. So the behavior will change. The risk will change. The way we deliver software will change. I think the big thing we're seeing in the market is the need for compliance and trust. Right? How do we trust AI? And this is why testing, again, is very important. But also, how do we maintain compliance? I think the whole aspect of data and how we manage data is going to change, because with AI, the data becomes knowledge that becomes the context of everything we do. And as long as we have consistent, good-quality data that will be fed into, you know, the prompts, the agents, the more effective we can be. So, yeah, it used to be a very fragmented ecosystem in DevOps. And now, with data consolidation, we see companies looking to consolidate tools, practices, integrations. And, of course, there is the culture aspect, which we'll talk about, I believe. But the mindset there needs to change: how we as humans leverage AI in a smart and trusted way, but also what the function of the humans is. Right?
How do we, you know, take the creativity and everything we know as humans and actually inject it, combine it with AI, and make it a unique differentiator for a company? Yeah. I'm going to jump in. Yeah. Go ahead. Well, I'm gonna jump back to something you said right at the start of your answer there, about, you know, disruptive being an understatement. From your experience within the industry, have you ever known a time of such dynamic change? The fact that everyone's having to adapt to this technology, which is turbocharging absolutely every aspect of people's work. And you're trying to look ahead and have smart plans, intelligent plans for the future, when you're going at, you know, a hundred miles an hour, or, you know, considerably faster, going at the speed of AI. Have you ever known such disruption? Not in my time. No, definitely not. And I think, you know, the thing is that it's impacting organizations in a way that they need to leverage AI internally themselves. So they're under the pressure, you know, to start and work with AI, you know, in testing, in developing and generating code and everything. But also think about their customers. They need to also embed AI in their applications, you know, for their customers. So I think we're seeing a mega disruption that impacts the way companies and organizations themselves behave, but also the way that it will be reflected in their customers' experience. Right? And think about the whole competition. It's all going to change. Everybody will leverage AI. So what would be this unique differentiator, right, that will give us the competitive edge? So I think a lot of organizations are right now in this mega exploratory journey of how to leverage it in the best way, how to measure what success is. Right?
But also how to find these unique differentiators that will give them their competitive edge. But definitely, I think, the mega disruption that I've ever seen in the industry: bigger than Agile, bigger than DevOps, bigger than anything we've seen. And I don't think we know exactly yet what it means, because the way it's progressing is nothing like we've ever seen before. And, Niranjan, I'll bring you into the conversation in just one moment. There's one final point I wanted to pick up on there, the cultural side of things you were talking about, Tal. And from my perspective as a host of events within the tech space, one thing I've noticed in the last three, four years is the total shift of approach from leadership. As in, three, four, five years ago, people were proud of knowing what the next step was and what was happening in their longer-term plan. Now you sit on a stage, or in an environment such as this, with tech leaders, and there's this new air of humility that has to exist, because people are having to move and change so quickly. And frankly, anyone who says they have absolute certainty of what's gonna be happening in eighteen months' time, I'm quite suspect about some of those opinions, which is a strange situation to be in when you're sitting next to, you know, world-leading people who know their stuff, who have forgotten more about the space than I will ever know. Do you think that's a fair reflection, the fact that leadership has had to change? You've got to have that humility to say, we've got to move, we've got to be flexible, we've got to try things. Is that something you've seen within this space, within the market? Completely. And this is what I'm expecting from our leaders. But it's exactly that. We learn as we go, and we need to be humble enough to say that, you know, we don't know all of the answers. Right?
We can predict what will happen, but, you know, as I always say to my engineers, it's not about designing and building stuff for our customers. It's building with our customers. Right? And the more we stay with them, the more we understand what they're going through, what we need to go through in order to help them generate the value they need, and they don't know yet. Right? So I think, you know, it's exactly that. Like, look what OpenAI is doing, Gemini; you know, even the search engine is changing. Like, everything is changing on the fly. Agentic is one thing, but, you know, now everybody's talking about micro agentic agents instead of just agentic systems. It's all changing, and we need to go back to a lot of research, by the way, because we tend to think we know everything. Now it's going back again: working with the customers, working on research, technology, but also reality, understanding, and also value. It's all about value, and I think that's the important thing. Technology is an enabler, and it will become a bigger enabler. True. But always think about the business outcome that you can deliver to your customers. I think more than ever, this is important. And it's not just about the technology. It's about the business outcome you'll be able to deliver with AI. And that will give this competitive advantage and unique differentiator that I've discussed. But, yeah, on culture, we can talk a lot, because there is a lot of hesitancy also from employees. Okay? Now we're being replaced, and that's a whole other thing, right, that we need to address, with resiliency. Because if we don't adapt, we're left behind, right, if we don't adjust. So a lot of cultural shift aspects here. And that's an expression that you said there that I desperately want to write down, because I absolutely want to use it in the future.
I think it was: we're not building for our customers, we're building with our customers. I absolutely want to write that down and remember that it was you who said that to me, because I'm terrible at that. People say wonderful things to me these days, and I forget who told me what. Tal, thank you so much. Niranjan, apologies. I went off on a huge tangent there, but you can understand that. There is so much for us to discuss here. What I'd love to get your perspective on is the evolving customer expectations and service delivery challenges in this age of intelligent automation. From your perspective within the consultancy space, what's your take on that? Yeah. I'll pick up from what Tal left off. Right? Business outcomes. So you can have all the technology in the world, whether it is the earlier ones, automation, rule-based, etcetera. Whenever we talk to any customer today across the globe, whether I have a conversation in The US, UK, Europe, even in India, a lot of focus is on: what if I use AI? What do I get? What is the outcome? Where do I reach? What is the benefit for me? Okay, what if you achieve 100% automation? What do I achieve out of all of this? So this is the focus, specifically around the expectation from the individual who is working in a project, program, enterprise, organization, etcetera. That is the first level. Then from the service provider level, from a consultancy perspective, the main part is that we are looking at outcome-focused delivery: how can I embed agentic AI and generative AI across the service delivery life cycle? It is very easy to say, focus on business outcomes, but there is a lot of subjective, intangible stuff across the life cycle. One, I think Tal also mentioned this, is utilization of AI and GenAI within the SDLC, or product engineering, software engineering, in the overall life cycle.
But specifically, when applications or business applications consume the generative LLMs, or agents, or, if I go forward, MCPs and so on, when you build a knowledge fabric, how does the business actually get an outcome out of this tech stack which we build? That becomes quite crucial. So the word intelligence needs to be utilized by the people who have to comprehend the intelligence, and the search intelligence, which is coming from these models. So, again, I'll go back to Tal, where she said people who are doers today need to use all these tools around the ecosystem to orchestrate the whole quality engineering life cycle: not just write code, but also see what end outcome I'm giving to my stakeholder. So that's the way the industry is evolving. And today, from a quality engineering perspective, whenever I have a conversation with my customer, there is no conversation without AI, GenAI, agentic. Okay? In the last, I would say, twelve months, any kind of requirements, any kind of conversation, a coffee table conversation, even a ten-minute conversation, will not go without AI. It used to be automation before, but AI is at the forefront today, and specifically around the generative piece, where we can actually look at the built-in models within the industry, whether it is Gemini, OpenAI, Azure, whatever the hyperscalers provide. Plus, there are a lot of customers of TCS who are building their own, because there's a lot of knowledge within the customer landscape. So how do we bring in small language models, for example, knowledge fabrics, which can actually power the LLMs to give them realistic business outcomes? So that's the whole ecosystem which my customers are focusing on today across the technology segment, software segment, and services segment globally. Okay? So that's the high-level picture I would give.
But I would like to reflect on a couple of points which Tal brought in on the people mindset perspective. Across the industries, right, from those who are doing the coding to those who are consuming the business applications, there needs to be a shift in mindset: you now have an intelligent companion, whether it is for generating code or giving insights or providing certain guidance. It is not easy to work with someone always. We all know that. It is easy to work individually, even though we are slow. But now there is a very, very powerful assistant available, which has all the knowledge in the world, available in the palm of our hands, I would say, which will give us whatever power is required to do our job faster. Now that is where we need to change our profiles from doers to orchestrators, and how well you can shift, how you can get the bigger picture, is actually the challenge. I don't have a complete answer, but that's the challenge which the industry is facing today. And I absolutely respect the way you phrased that at the end there, about, as I was saying to Tal earlier, knowing that we don't have the complete answers on certain things at the moment. And, frankly, if anyone claims to, I'm deeply suspicious of them, because there are so many variables. And if I can just stay with one part of your answer there, about those evolving customer expectations and managing those expectations. I'm so pleased to have you on this particular session, because you are dealing with so many customers and managing those expectations. As you say, everyone is talking about AI. Everyone wants to have AI within their setup. How do you go about managing expectations and getting people to pick the right sort of solutions? Because it's easy to say, yeah, we're just gonna infuse AI into everything we do.
And you can chuck huge amounts of money at it. You can go off on your own development adventures and try and set things up yourself when there are great tools already out there, for example. How do you manage those expectations when it is such a dynamic marketplace at the moment, wherein you, as a consultant working in that space, are absolutely focused on the deliverables at the end? How do you go about managing those expectations? Yeah. This is a very, very important question and a very interesting question. And, being in the quality engineering world, this question has been there for the last ten years. Which tool should I use? Which solution should I use? But now, for customers, it is not about the solution. It's really not about the tech stack, because every other tech stack gives you that kind of maturity. But like I said, what do you want to achieve? So I'll put the customers into three kinds of categories. One is: I want to be the leader in my area of work, where I want to have the highest level of maturity in what I do, whether it is value for money, quality, cycle time, all of that, which I want to achieve with the best available technology. Of course, spending as little as I can. That is the first level of customers. The second level of customer says: I want to achieve my outcomes. I don't need to be the best in the industry. Okay? So for those kinds of customers, you will need to have best-in-class open source and large language models in combination. We kind of create an ecosystem of a hybrid setup, whether it is the likes of OpenText tools, open source tools, etcetera, coming in and playing in harmony to create the outcome for the customer. I'm referring to the outcome of the customer because that's the whole goal for a consulting organization like TCS. Okay. Now the third category of customers is: I want to solve my today's problem. Okay.
So for those, we consult them on a transformative path, showing them, guiding them on a big-picture view. Yes, don't look at today's problem. Look at problems whose solutions will serve you not just in your ecosystem, but at an enterprise level, at an organization level. Do a business case, do a proof of value, and then you drive the overall transformation conversation. So the whole thing is that the starting point for any customer could be the vision that they have, but it is the job of a consultant like ours to give them a holistic vision. What is their objective? Ensure that the customer is reaching their short-term, near-term, low-hanging objectives, but also give them a bigger picture in terms of where they want to land, or where they have to land, in this AI journey. Right? Because everybody is under pressure to adopt AI, but it is not a race. It is about what you want to achieve, at the right cost, with the right quality. That's a very good way to approach that. I'm going to have to move the conversation a little bit because we have gone off on a couple... well, I've gone on a couple of tangents. That's entirely not your fault. That's entirely down to the host. Where I wanna go next, Tal, if I could start with you, is the discussion of going from automation to autonomy. Again, a subject that could take up the next half an hour if we let it. So what I'd love to know, Tal, from your perspective, is the transformation of tools across the QE and DevOps landscape, particularly from a product innovation perspective. How has that transformation been for you, Tal? Yeah. So, Neil, if I may, I just wanna comment on something Niranjan was saying about humans, and I really liked it. People will become more and more the orchestrators, the architects, I would say. You know, set up the flows, and set up the control points, by the way, and I think it's part of it. So I really like it.
And I think the more people adjust to the mindset that they're now architecting, you know, the whole delivery, testing, and quality process, then that's the right mind shift that should happen. So, yeah, really liked it. So, you know, if we look now at the DevOps ecosystem, even the current one, but definitely a few years ago, it became very best-of-breed. You know, if I look at an average ecosystem of tools in a typical DevOps landscape, it could get to 36, 40 tools, some of them commercial, some of them homegrown, and it was good for a while. Right? Everybody was dealing with their own stuff. There was a little bit of integration, and it worked. Right? A lot of companies had a lot of challenges to scale, but that was how a typical ecosystem of DevOps looked. I think nowadays, it's all changing. Right? Customers and companies cannot sustain this fragmented ecosystem, for several reasons. But one of the biggest is the data. Because if you're trying to manage islands of silos of data, you will never be successful with AI. We talked about the knowledge, the context, and the context is in the data. And the more you feed AI with the data, right, the more efficient and effective it can be. It's the key when we're looking at an ecosystem of DevOps, because we see a lot of consolidation happening these days. It starts with the data, you know, moving into a data hub or data lake; there are many names for it. But we also see it in tools consolidation. And more and more companies are starting to talk about the notion of a platform.
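To make the consolidation idea above concrete, here is a minimal sketch of merging tool silos into one keyed hub. Everything in it (the record shapes, the source names "alm" and "ci", the field names) is an illustrative assumption, not the schema of any real product mentioned in this webinar.

```python
from dataclasses import dataclass

@dataclass
class HubRecord:
    source: str    # which tool the record came from
    kind: str      # "test", "defect", "build", ...
    key: str       # stable identifier inside the hub
    payload: dict  # normalized fields shared by all sources

def normalize(source: str, raw: dict) -> HubRecord:
    """Map a tool-specific record onto the hub's shared schema."""
    if source == "alm":
        return HubRecord(source, raw["type"], f"alm:{raw['id']}",
                         {"title": raw["name"], "status": raw["state"]})
    if source == "ci":
        return HubRecord(source, "build", f"ci:{raw['build_id']}",
                         {"title": raw["job"], "status": raw["result"]})
    raise ValueError(f"unknown source: {source}")

def consolidate(silos: dict) -> dict:
    """Merge all silos into one keyed hub; later sources win on key clashes."""
    hub = {}
    for source, records in silos.items():
        for raw in records:
            rec = normalize(source, raw)
            hub[rec.key] = rec
    return hub
```

The point of the sketch is the normalization step: once every artifact sits in one hub under one schema, it can all be fed to an AI model as a single context instead of fragmented islands.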
It can still be an integrated platform, but you want to maintain a single knowledge hub, and you see that changing from a data hub into a knowledge hub these days. So I think that's a big thing that is happening. I think the way we look to connect is also changing: the way you will interact with the product. Right now, it's all about assistive. Right? Like, assistants. You wanna move from clicks to actually chatting with the tool or with the product. So that's one thing that we see; the way we look at innovation of the product is also the user experience. Right? So we've introduced, for example, what we call aviator assistant. You're in the context of a certain feature or any asset or artifact; you can ask any question. You're in a certain release; you can ask what the risk of the release is based on the testing that I've already done. You're at the feature; you want to generate a test from the feature description, but also from all of the associated artifacts that are connected to this feature, like the tests that I already have, or even defects that were already found by developers as they started to build this, and things like that. So I think the experience is changing. But when we talk about automation to autonomous: I came from Mercury. Right? We back then invented the automation tools, right, with functional testing and load testing, and everything was done manually. And we came up with the automation. But I think now automation is the past. It's not about automating the manual script or looking at an application and writing a script. No. What you wanna do now is look at the feature description and generate either a manual, by the way, or an automated script.
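The test-from-context idea Tal describes can be sketched roughly as follows: gather the feature description together with its linked artifacts (existing tests, known defects) into one prompt and hand it to a generation model. `call_llm` here is a placeholder stub, not a real API, and the prompt wording is an illustrative assumption rather than how any actual assistant is implemented.

```python
def build_context(feature: dict, tests: list, defects: list) -> str:
    """Assemble the feature plus its associated artifacts into one prompt."""
    lines = [f"Feature: {feature['title']}",
             f"Description: {feature['description']}"]
    if tests:
        lines.append("Existing tests: " + "; ".join(tests))
    if defects:
        lines.append("Known defects: " + "; ".join(defects))
    lines.append("Task: propose additional test cases covering the gaps.")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call its model of choice here.
    return "1. Verify login lockout after 3 failed attempts"

def generate_tests(feature: dict, tests: list, defects: list) -> str:
    return call_llm(build_context(feature, tests, defects))
```

The interesting part is `build_context`: the quality of the generated test depends on pulling in the connected artifacts, not just the feature text on its own.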
And by the way, we can do it because now it's all about LLMs and GenAI. But we have, you know, introduced AI into functional testing in the way you recognize and identify objects on the screen. We don't identify objects on the screen based on the physical description. We identify them leveraging machine learning, like a human does. Right? So we leverage algorithms like YOLO to basically identify what a login button looks like and be able to click the login button, for example. But that gives us now the benefit of taking a manual script or a feature description and generating an automated script. So I think it's about starting from the planning and being able to generate assets, autonomous assets, just by getting the context and the data. So no human intervention. However, I do think that humans, again, back to what Niranjan was saying, define the architecture of the process. What is important for you when you generate the script? Right? What are, I would say, the gates that you wanna make sure of, the standards that you wanna make sure to follow? You need to feed the agents with these standards. And I think that's the big difference between automation and autonomous. It's about doing as much as we can autonomously, starting from a certain point, but being able to govern the product in the right and efficient way. Right? We'll talk about the agentic in a second, you know, about the agent-to-agent communication, but that gives us the ability to go even further with autonomous. Right? I can look at a certain release and ask: give me the current risk, the current gaps, the quality status; and basically generate the scripts in order to mitigate the risk, analyze the results, and give me another status report. Right?
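A rough sketch of the vision-based identification step above: a YOLO-style detector returns detections as (label, confidence, bounding box), and the test runner picks the best match for the control it wants and clicks its center. The detections here are hard-coded stand-ins for real model output, and the label names are illustrative assumptions.

```python
def find_click_point(detections, target_label, min_conf=0.5):
    """Return the (x, y) center of the highest-confidence box
    whose label matches target_label, or None if nothing matches."""
    best = None
    for label, conf, (x1, y1, x2, y2) in detections:
        if label == target_label and conf >= min_conf:
            if best is None or conf > best[0]:
                best = (conf, (x1, y1, x2, y2))
    if best is None:
        return None
    _, (x1, y1, x2, y2) = best
    # Click the center of the bounding box.
    return ((x1 + x2) // 2, (y1 + y2) // 2)
```

This is why the approach is robust to cosmetic changes: the script asks for "the login button" by appearance rather than by a brittle physical locator, so a moved or restyled button still resolves to a click point.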
And the sky is the limit to how far you can go with this. But you have to do it, and this, I would say again, very smartly. Otherwise, we can end up with what Niranjan was referring to before: yes, you have 90% automation, but what is the business outcome you're achieving? Right? You may be producing more overhead or introducing more noise to the process than anything. So I think when we go into this automation, to autonomous, to agentic in a way, then we have to carefully think how we're going to structure it in a way that will give us the business outcome, that we can measure it, but more importantly, how we can improve. So going into autonomous, what you don't wanna do is lose control over what you're doing. You wanna be able to have a feedback loop, whether it's from production, whether it's from other sources, to see and to track whether you're in the right direction. So a great thing, a lot of power, but you have to use this power very smartly when it comes to autonomous. Yes. It's all well and good having a lovely new solution in place, but if you've got no way of knowing if it's actually delivering the results that you put it in place for, then you may as well not have it there in the first place, I suppose. Niranjan, if I could bring you in now just to go through TCS's strategic approach to reshaping QE, whether that's through advancements in tech, workforce enablement, or process modernization. Could you give us a sort of flying visit through that strategy at the moment? Yeah. So I'll break this down. We have a three-step process today in what we are thinking. Of course, this is always an evolution, I'll put it that way; it's rolling milestones. But the three steps are, specifically, around what is today the automation piece, which is human-centered, which Tal mentioned, where I need to write a script and do that automation. That is the first level.
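The feedback loop Tal argues for can be reduced to a tiny decision rule, sketched below: track whether autonomously generated tests are adding signal (real failures found) or noise (flaky failures), and throttle autonomy when noise dominates. The metric and the threshold are illustrative assumptions, not recommendations from either speaker.

```python
def autonomy_decision(real_failures: int, flaky_failures: int,
                      max_noise_ratio: float = 0.3) -> str:
    """Decide whether autonomous generation should keep running
    based on the ratio of flaky (noise) to total failures observed."""
    total = real_failures + flaky_failures
    if total == 0:
        return "continue"  # no evidence either way yet
    noise = flaky_failures / total
    return "continue" if noise <= max_noise_ratio else "pause_and_review"
```

The design point is simply that autonomy gets a control signal: without some measured loop like this, "90% automation" can quietly mean 90% noise.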
The second one is assisted, or augmentation of the existing tools, where the human gets the information from an LLM and acts on it. The last part is what we are looking at as autonomous or agentic, where there are autonomous decision-making tasks which are done by agents across the SDLC. Like Tal was mentioning: what is my release quality index? If the quality index is low, can I actually run a few more tests and tell the developer agents to go and fix this? And all of this is fully agent-to-agent conversation. The human in the loop is basically monitoring, that's one aspect of it, but he would have defined the strategy. He would have defined what he wants from the ecosystem, and then there is an outcome which he is anticipating. Now that particular outcome could be an IT-led outcome, could be a business-led outcome, and at the end of this story, it could be a consumer-led outcome. So in all of these, the three-pronged approach is going to work. I could pick up a couple of things on what Tal said, which is the knowledge hub, data, and all of that. A lot of customers today are looking at creating knowledge fabrics, which is a repository of data. And you'd be surprised that a lot of customers do not have their data in one place. It is scattered, like Tal said. It is scattered across multiple portfolios, multiple projects, multiple application owners. To bring it together is the first major task, which most of our customers are doing. Now that itself is a huge task, and we would require platforms which are somewhat centralized in nature, where you can build the data, which can in turn build the insights, and then you can adopt all the quality engineering principles on top of the fabric. The second part of what TCS is doing is we are looking at moving the existing test automation engineers to the roles of SDETs. That is the first element.
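The quality-index flow described above can be sketched as one agent step: compute a release quality index from test results, and if it falls below a gate, emit actions for other agents (run more tests, notify a developer agent). The index formula, the weights, and the action strings are all illustrative assumptions, not anyone's actual scoring model.

```python
def quality_index(passed: int, failed: int, open_blockers: int) -> float:
    """Toy release quality index on a 0-100 scale:
    pass rate scaled to 100, minus 10 points per open blocker."""
    total = passed + failed
    pass_rate = passed / total if total else 0.0
    return max(0.0, pass_rate * 100 - 10 * open_blockers)

def release_agent_step(passed, failed, open_blockers, gate=80.0):
    """One agent decision: report if the gate is met,
    otherwise emit actions for the other agents."""
    qi = quality_index(passed, failed, open_blockers)
    if qi >= gate:
        return qi, ["report: release candidate meets gate"]
    return qi, ["run: targeted regression suite",
                "notify: developer agent about open blockers"]
```

In the fully agentic picture Niranjan describes, the human defines the gate and the strategy; the returned action list is what would flow on to the next agents in the conversation.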
And defining that role for each of our customers, completely customized for each customer. It cannot be one role where I'll just fit in a project manager, like how we used to have in yesteryears, where I'll fit the project manager to every project which I have. Here, the SDET role is defined for that particular customer: what I use, what platforms I use, what business type I'm in, and then we define it. And these SDETs are evolving into orchestrators. We also have a thought process around SGETs, which is a very, very new term, I would say Software Generative Engineering Testers, which we have coined, and I'm proud to tell that as well. Fundamentally, that would give you a picture of how I can use generative technologies in my quality engineering process to actually bring in the overall transformation for our end customers. So this overall covers the technology, workforce enablement, and process modernization which we also see, enabled by the digital tools, which are again powered by the AI or generative AI ecosystem of products. Now those products could be powered by the likes of OpenText and other companies. There are also a lot of open source tooling ecosystems available, which we can talk about, like MCP integrations and all of that. So that is where the customers are moving, and that's where a lot of investment from TCS is going, both from a thought leadership perspective as well as monetary investments into creating newer technologies in combination with our partners and so on. So that's the overall picture I would give. And you fitted an extraordinary amount into just a couple of minutes there. Thank you for that, Niranjan; you've rounded that up brilliantly. We are rapidly running out of time, and I definitely wanna get onto the subject of validating agentic systems over the next sort of five to ten minutes, and then we wanna try and get onto a few of these pre-submitted questions.
So if we could keep these next answers relatively punchy, I would be very grateful indeed, because that means I won't get in trouble with my producer, which is always a good thing. So let's start, Tal, if I can, with you: how are we going about building these agentic capabilities into the platforms, integrating the approaches, and what progress is being made? Tell me your take on that.

Yeah. So this is the agentic world, right, with agentic systems. When we look at our space, our domain, the vision with agentic in our platform is basically to have a team of digital workers. It's not even agents, it's actually digital workers. And you have to think about it: now you have a whole team that knows and understands your business, and it's tailored to your specific business processes and needs. They're well integrated through the platform into the SDLC and also the user flow, and now they autonomously take part in the software delivery process. They do complex, asynchronous tasks, they work together, and I think we've managed to break the barriers that sometimes existed between teams in the past. You no longer have those barriers. Each of the agents, or workers, now has an identity and a role they need to fulfill. They have certain knowledge and context they're given: what is the team's responsibility? For example, if they have to tag defects, what is the severity definition? What happened in the past with defects, and how did they evolve? Things like that, in those specific domains; they have to understand their knowledge and context. But they also have a set of rules they operate with, instructions, we call them: you act as a subject matter expert, and you have to do this, this, and that.
And the last thing is a set of tools, a set of capabilities: for example, read an entity, write to a field, fetch a similar entity, add a comment, all of that. And within that world, they now start to collaborate with each other. That's the beauty. But there is always a risk that comes with this, and this brings us to the whole notion of trusting agentic systems. How do we test them now? What happens with an agent? As QA, we always learned: we do this, we expect this result. Very deterministic. All of a sudden you're in what I call a nondeterministic world. How do you judge? So humans can definitely act as architects: they can state the gates and the control points between each of the phases. Before we move on, we have to check this, this, and that. That's one thing. But now we're also thinking: what if an LLM were to test an LLM? We call it LLM as a judge. So we are thinking about embedding those capabilities, testing nondeterministic systems using other LLMs to test the LLMs, or to test the agents. There is a lot happening in this space. And I think a lot of it will happen with services, because in addition to the technology and the product, there are a lot of best practices we will need to embed in order to trust the agents and the agentic systems. And by the way, we're not just talking about testing agents or agentic systems, but also AI benchmarks. So it's not just testing; we'll become familiar with a lot of new practices and new processes in order to validate those systems.
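To make the shape of such a "digital worker" concrete, here is a minimal sketch of an agent definition with an identity, a role, instructions, knowledge, and a registry of callable tools. Everything here (class and tool names, the entity store) is illustrative, not the actual OpenText platform API.

```python
# Minimal sketch of a "digital worker": an identity, a role, instructions,
# knowledge/context, and a registry of tools it may call.
# Illustrative only; this is not a real platform API.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class DigitalWorker:
    identity: str
    role: str
    instructions: str
    knowledge: Dict[str, str] = field(default_factory=dict)
    tools: Dict[str, Callable] = field(default_factory=dict)

    def use_tool(self, name: str, *args):
        # Agents may only use capabilities explicitly granted to them.
        if name not in self.tools:
            raise PermissionError(f"{self.identity} has no tool '{name}'")
        return self.tools[name](*args)

# Tools mirror the capabilities mentioned above: read an entity, write a field.
entities = {"DEF-42": {"summary": "login fails", "severity": ""}}

def read_entity(entity_id):
    return entities[entity_id]

def write_field(entity_id, fld, value):
    entities[entity_id][fld] = value
    return entities[entity_id]

triager = DigitalWorker(
    identity="defect-triager-01",
    role="Defect triage",
    instructions="Act as a subject matter expert; tag defects with severity.",
    knowledge={"severity definition": "High = blocks users"},
    tools={"read_entity": read_entity, "write_field": write_field},
)

defect = triager.use_tool("read_entity", "DEF-42")
triager.use_tool("write_field", "DEF-42", "severity", "high")
print(entities["DEF-42"]["severity"])  # high
```

The point of the sketch is the separation of concerns: identity and role define who the worker is, instructions and knowledge define how it should behave, and the tool registry bounds what it is allowed to do.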
This is fascinating for us, by the way. We have a lot of innovations coming in that space, and we will be happy to work with TCS, for example, to collaborate on how we help our customers get the best out of AI and trust AI in the best way possible. But definitely this is an area that will continue to expand: LLM as a judge, and coming up with practices for testing AI agents. And a lot of it is with the customer, to understand, going back to the KPIs: what is a successful goal that an agent has achieved? What does it mean? We have to go back to the definitions, to the business outcome, as we talked about before, and make sure we are very sharp on what we want to get and how we want to get it.

Yeah, and I love that perspective. As I said, we are rapidly running out of time. So, Duranjo, if I can come to you very quickly: this leads on really nicely from what Tal was saying. How do we navigate those shifts in testing methodologies to validate these autonomous or agentic systems? Your thoughts there. And, again, that's something we could talk about all day, but if you could round it up very briefly, I would be very grateful.

Yeah. I'll quickly give an example so that everybody is on the same page. I like to stay on this point about the deterministic and nondeterministic parts of IT, or of an application. That is very, very important. Just imagine: today you have Deliveroo in the UK, or so many other apps, where I just select what I want, I order the food, and I get my food. You know what you want, you get what you want; if you're happy you give five stars, and if you're not happy you don't. Now, fundamentally, imagine how demand comes into the market tomorrow: it starts with what you have from the watch.
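The "LLM as a judge" idea mentioned above can be sketched as a small harness: a nondeterministic agent response is graded by a second model against a rubric and a pass threshold, instead of an exact string comparison. All names here (`score_response`, `stub_judge`, the rubric prompt) are hypothetical, not an OpenText or TCS API; in production the judge function would call a real LLM, while here a deterministic stub stands in so the harness can be exercised.

```python
# Sketch of the "LLM as a judge" pattern: grade a nondeterministic
# response against a rubric with a pass threshold, rather than asserting
# an exact expected string. Names are illustrative, not a real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    score: float      # 0.0 .. 1.0
    passed: bool
    rationale: str

def score_response(task: str, response: str,
                   judge_fn: Callable[[str], float],
                   threshold: float = 0.7) -> Verdict:
    """Ask the judge to grade `response` for `task`; pass/fail by threshold."""
    prompt = (f"Task: {task}\nResponse: {response}\n"
              f"Grade relevance and correctness from 0 to 1.")
    score = judge_fn(prompt)
    return Verdict(score=score, passed=score >= threshold,
                   rationale=f"judge scored {score:.2f} vs threshold {threshold}")

# A real judge_fn would call an LLM; this stub applies a fixed keyword
# rubric so the example runs deterministically.
def stub_judge(prompt: str) -> float:
    return 0.9 if "login failure" in prompt.lower() else 0.2

verdict = score_response(
    task="Summarize defect DEF-101 in one sentence",
    response="Login failure after the 2.3 upgrade blocks all users.",
    judge_fn=stub_judge,
)
print(verdict.passed)  # True
```

The threshold is where the customer conversation Tal describes comes in: "what is a successful goal for this agent" becomes a concrete, agreed number rather than a feeling.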
Let's say the watch senses your hunger, the watch triggers the app, and the app's agent picks up the taste of what you like on that particular day. You get the food in a fully autonomous mode. Now, in that context, you don't know what you're eating today; based on what you're feeling and on your knowledge base, the app is ordering the food. Fundamentally, in that context, the nondeterminism is very difficult to test, and it requires effort in terms of tech stack, people, and orchestration. And this I relate back to my first comment on business outcome. For the same example: is the customer happy at the end? While I use all these services, whether it's an Apple Watch or a Samsung watch, or Deliveroo, or XYZ restaurants, all these services have to be rated, they have to work cohesively, and this has to be tested. If that is my business case: have I satisfied all my customers in the ecosystem? That is my end goal, and that is where I relate back to the deterministic and nondeterministic parts, where it is very, very crucial for us to validate the systems. So, fundamentally, one last point I want to relate to: the agentic parts are very nondeterministic in nature. For that, how do you actually measure it? Today we have a lot of metrics in the industry: BLEU score, BERTScore, groundedness, etcetera. We are measuring all of that today; all the tools are able to measure. But at the end my customer asks me, "So what does it mean for me?" I'll stop at that: we don't have an answer. How do you make a difference for the end consumer or a customer? That is very critical. So that's where I'll stop, and we can come back on this when we have the answer.

Absolutely. And again, it is linking it back to that customer at the end.
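To illustrate the kind of surface-level score those metrics give, and why it doesn't by itself answer "what does it mean for my customer", here is a toy unigram-precision sketch in the spirit of BLEU. Real evaluations use established libraries (for example sacrebleu or bert-score); this stripped-down version only counts overlapping words against a reference.

```python
# Toy BLEU-style score: unigram precision of a candidate text against a
# reference, with per-token counts clipped to the reference counts.
# Real BLEU also uses higher-order n-grams and a brevity penalty.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that also appear in the reference."""
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    if not cand:
        return 0.0
    matched = 0
    for token, count in Counter(cand).items():
        matched += min(count, ref_counts.get(token, 0))
    return matched / len(cand)

ref = "order placed for a margherita pizza"
good = "order placed for a margherita pizza"
poor = "your parcel has shipped"
print(unigram_precision(good, ref))  # 1.0
print(unigram_precision(poor, ref))  # 0.0
```

A score of 1.0 versus 0.0 tells you the texts overlap, but not whether the hungry customer at the end of the chain actually got the meal they wanted, which is exactly the gap described above.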
We are right out of time, but you've both been so generous with your time, so I'm going to try to squeeze in a couple of these presubmitted questions, and I'm going to be terribly bossy and ask both of you to try to answer them in about thirty seconds each. I'm allowed to be bossy as part of the role of being the host. So, let me go to my list of questions. Where are they? Here we go. This one is a really, really good one because, again, it returns to that cultural question that has kept reappearing during this conversation. Here it is: looking ahead, what skills or mindsets do you believe will be most critical for quality engineering teams to thrive in an AI-first, agent-driven tech landscape? Skills and mindsets. Tal, I'll let you go first. What are your thoughts?

Yeah. So it's an adjustable, open, flexible mindset: embrace it rather than anything else. Because it's like walking on a treadmill: if you don't walk, you fall, because it keeps on moving. But this time you have to run; it's not even walking, it's running. So really, don't be afraid. It's not a threat, except to the people who won't cope well with it. Just embrace it. Learn. Research. Learn from others who have done it, and be open to that. Business outcomes: be sharp on the business outcomes you want to get, define the KPIs, and start small. Start a POC where you can demonstrate real value that you can measure in all aspects. It can be productivity, it can be quality, it can be security. Just decide, and be very concrete on what you want to do, how you want to do it, and why in this project specifically. That is the only way you can be successful. So I think it's people, processes, and tools, but the business has to be very much aligned with that.

Thank you, Tal.
That was slightly more than thirty seconds, but you are so passionate about this subject that I will allow it on this occasion. Narayanjan, thirty seconds from you, if you would be so kind.

Yeah. Skills and mindsets: I'll give three things. First, technical skills: prompt engineering, orchestration of agents and autonomous ecosystems, and a deep understanding of AI models, how they work and how to interact with them, more than how the models are built, specifically around data structures, what data you have, and the context behind the data. That will be quite crucial from a technical perspective. From a mindset perspective, continuous learning and adaptability are very, very important, because what we knew as of May might not be valid today, and we are in October now. So it is very, very critical. Then outcome-driven thinking from each individual in the ecosystem; I wouldn't say just a developer or a tester, but each individual looking at outcome-driven thinking. The last part I would put as technology-and-human collaborative innovation. That means being collaborative with humans, which is obviously already happening in many organizations in a very effective way, but also: how can I integrate my skills with the technology landscape, the ecosystems, the tools, the SharePoints of the world, to power my day-to-day tasks? So I would put it that way. I'll leave it at that, Neil.

And I'm going to squeeze in one last question. I'm going to risk incurring the wrath of my producer with one final question, but I think this is really important. It's been phrased here... where is it? I've lost it on my list. Here it is.
What are some of the biggest challenges enterprises face when integrating AI-driven tools into their existing quality engineering and DevOps processes, and how can they be overcome? Now, we could fill an hour just on this question, so I'm going to ask both of you for one big challenge and how it could be overcome. Tal, I'll let you kick off. Your thoughts.

Sure, I'll be very quick.

I'll allow it.

Build the foundation in the data: end-to-end traceability, context, knowledge, data history. Make sure it's clean, so you can identify patterns. Then usability, and the culture that we talked about. That's it; that is what I would say. These are the biggest challenges we see with customers: changing the processes, the tools, and the culture.

Thank you very much. And, Duranjo, I'll give you the final word.

Yeah, fine. I echo what Tal said on those three points, but one more important point: how do I build a fabric of this data that is properly labeled, and that every other technology can interact with? If you have the data captured in the right ecosystem, whether it's external elements or the internal code logic, we should be able to give the agents the power to grab those setups and act. Which tools, which technologies? That keeps evolving; this is a very crowded space, there is no one answer, and everybody in the industry who is an expert knows this. So, fundamentally, tools will coexist, new tools will come up, but the basic foundation, like Tal said, remains as is.

100%. Well, we've overrun, but to be frank, I'll be honest with you: I don't care. It's been a wonderful conversation; we've covered so much ground. Thank you so much, you've both been so generous with your time. As I said, I'm afraid that is all the time we have for today. We will speak soon, no doubt, both of you.
The recording of this webinar will be available shortly too, so come back, watch it again, or share with anyone you think may find it useful. So for now, thanks again for watching, and bye for now.