“Four Hours Crushed into 90 Minutes”: Q&A That Turned Amit Jadhav’s Master Class Into a Live Lab
What the formal presentation argued in theory, the Q&A session demonstrated in practice — from a real-time SOP generator built without uploading a single confidential document, to a PRISM-to-SALT framework that mapped every professional function in the room to a specific AI tool. A senior QA professional’s unsolicited testimonial provided the most credible proof of concept in the entire 90-minute session.
The formal presentation had ended. The diagnostic survey had been analysed, the BKC restaurant analogy had re-explained deep learning to a room full of scientists who had previously encountered it only in abstraction, and the Devdas warning had landed where it was aimed. What followed — the Q&A and live demonstration block that extended Amit Jadhav’s AI master class at analytica Lab India x PharmaCore India 2026 toward its close — was, in the assessment of most people present, the session’s most practically consequential hour.
The Document That Writes Itself
The Q&A opened not with a question but with a live challenge. Jadhav called for audience volunteers to submit inputs — job title, process function, applicable regulatory standards — into a tool he had built specifically for pharmaceutical and laboratory professionals. The input fields were designed with deliberate restraint: no company names, no analyst identities, no serial numbers, no proprietary data of any kind. Only generic professional parameters. The tool, he had specified earlier in the session, does not sit on any external cloud server and does not feed any large language model training pipeline. Institutional data is never at risk because institutional data is never entered.
The demonstration target was a Standard Operating Procedure (SOP) — the document type that QA and QC professionals, in Jadhav’s pre-session survey, had cited most frequently as the primary time-sink of their working week. A volunteer from the audience, sitting in the front rows, began entering inputs: the process or activity the SOP would cover, the job titles of those performing it, the applicable regulatory and quality standards — Schedule M, GMP, ICH guidelines — and the brief contextual notes that give a procedural document its operational specificity.
The wi-fi at the Jio World Convention Centre was, by this point in the morning, under significant load. Jadhav acknowledged the connection delay with characteristic equanimity — “the net is slow, so allow me that time” — and moved to a second volunteer simultaneously, demonstrating that the tool’s architecture supports parallel document generation. The delay itself became a teaching moment: the tool was generating, in real time and without any proprietary data exposure, a fully structured SOP with placeholder fields precisely where organisation-specific information would need to be inserted. The final output, Jadhav explained, arrives as a downloadable text file. The only remaining step for the user is replacing the placeholders — using, as he dryly noted, “your word exception key which is Ctrl R, the favourite thing you are aware of since you were college students.”
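The final step Jadhav described — swapping placeholder fields for organisation-specific values in the downloaded text file — amounts to a simple find-and-replace pass. The sketch below is purely illustrative; the placeholder names and template text are invented for this example and are not taken from Jadhav’s tool.

```python
# Illustrative only: a generated SOP arrives as text with placeholder fields
# that the user fills in locally, so no proprietary data ever leaves the site.
# Placeholder names here are assumptions, not the tool's actual format.

SOP_TEMPLATE = """Standard Operating Procedure: {{PROCESS_NAME}}
Prepared by: {{JOB_TITLE}}
Applicable standards: {{STANDARDS}}
"""

def fill_placeholders(template: str, values: dict[str, str]) -> str:
    """Replace each {{KEY}} placeholder with its organisation-specific value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

filled = fill_placeholders(SOP_TEMPLATE, {
    "PROCESS_NAME": "HPLC column qualification",
    "JOB_TITLE": "QC Analyst",
    "STANDARDS": "Schedule M, GMP, ICH Q2(R2)",
})
print(filled)
```

The point of the design, as the session stressed, is that the sensitive values exist only in this last local step — the generator itself never sees them.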
The Peer Testimony
The session’s most analytically credible moment did not come from the speaker. It came from the audience.
A senior professional — introduced in the transcript as someone who had witnessed a demonstration the previous evening alongside a colleague referred to as “Dr. Preeti” — rose to address the room before Jadhav could continue to the next tool. The testimony was unsolicited, specific, and detailed enough to function as independent verification of what the morning’s tool demonstration was attempting to show:
“Yesterday’s experience was really wow,” the professional told the audience. “It was the climax of the whole one hour that we were talking with Amit yesterday. We had to put any query in four or five questions and it actually generates the whole document which we take about one day, maybe two, three days, four days to do that document in our regular work in a QA, QC or a compliance-related thing.”
The data security dimension of the testimony was equally specific. The professional’s direct observation — that the system is not cloud-based, does not ingest any LLM training data, and requires only generic non-identifying inputs — addressed precisely the dominant concern that Jadhav’s pre-session survey had flagged: regulatory compliance and institutional data protection. “You are not uploading any of your lab documents to the cloud of ChatGPT or Claude or whatever that is — the LLMs that are used — and the data is getting captured in some server sitting in the US or somewhere else. This is not on the cloud at all.”
The professional then described a non-conformance use case that illustrated the tool’s practical scope beyond SOP generation: entering a non-conformance event — without naming the auditor, the analyst, or the company — and receiving, in return, a complete root cause analysis structured on the 5 Why methodology, along with a full corrective and preventive action (CAPA) protocol. “It will give you a complete root cause, preventive and corrective action protocol that you need to follow. Just copy paste that into your own word file and then fill up the data that is specific to what you want to put into your system.”
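The structured output the testimony describes — a 5 Why root cause chain plus a CAPA protocol, generated from a generic, non-identifying event description — can be sketched as a simple data shape. The field names and sample content below are assumptions for illustration; the tool’s actual output format was not shown in detail.

```python
# Hypothetical sketch of a 5 Why + CAPA record of the kind described in the
# testimony. No identifying data (auditor, analyst, company) is ever entered.

from dataclasses import dataclass, field

@dataclass
class CapaRecord:
    non_conformance: str  # generic event description, no names
    five_whys: list[str] = field(default_factory=list)  # successive "why" answers
    corrective_actions: list[str] = field(default_factory=list)
    preventive_actions: list[str] = field(default_factory=list)

    @property
    def root_cause(self) -> str:
        """In the 5 Why method, the last answer in the chain is the root cause."""
        return self.five_whys[-1] if self.five_whys else "undetermined"

record = CapaRecord(
    non_conformance="Out-of-specification assay result on a stability sample",
    five_whys=[
        "The sample was analysed outside its holding time",
        "The holding time was not tracked on the worksheet",
        "The worksheet template has no holding-time field",
        "The template was never updated after the method change",
        "There is no periodic review step for worksheet templates",
    ],
    corrective_actions=["Re-test within holding time", "Revise worksheet template"],
    preventive_actions=["Add annual template review to the quality calendar"],
)
print(record.root_cause)
```

As in the SOP demonstration, the company-specific specifics are pasted in afterwards, locally — the “copy paste that into your own word file” step the professional described.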
For a room in which audit readiness and CAPA documentation represent some of the most time-intensive and anxiety-producing professional obligations, the implication was quantified by the peer presenter herself: what currently takes “one day, maybe two, three, four days” — reduced to 60 seconds of input and an automated generation cycle.
Jadhav received the testimony with the restraint of someone who had seen exactly this reaction before. “So now that I have spoken, you will give us free of cost, at least for me,” the peer presenter concluded, with the directness of someone who has spent years negotiating vendor contracts. The room laughed. The point was made.
The PRISM Framework and the SALT Architecture
Having demonstrated the documentation tool in principle, Jadhav turned to the structural framework he had built specifically for the analytica Lab India audience — a function-mapped, tool-assigned architecture covering every professional role represented in the room.
The first layer: the PRISM network — an acronym covering Protocol and Documentation, Research, Insights, Sharing and Communication, and Management and Decision Support. PRISM was Jadhav’s organising schema for the five fundamental professional functions common to all pharma and laboratory roles, regardless of whether the individual sits in QA/QC, R&D, biotech, pharma manufacturing, or laboratory management. Against each PRISM function, he had pre-mapped specific tool categories: documentation tools, research intelligence tools, data analytics tools, communication and presentation tools, and decision-support platforms. All either free or freemium. All audited by Jadhav personally for compliance suitability. None requiring ChatGPT, Microsoft Copilot, or any other tool already declared off-limits by institutional policy.
“All the tools have been audited by me,” he told the audience. “I don’t show any tools which I don’t use or work with, and all the tools are compliant for all the five functions.”
The second layer: the SALT framework — Simplify, Action, Localization, Trigger — Jadhav’s proprietary operational model for implementing AI adoption within existing professional workflows. SALT was mapped, for each of the five audience segments present, to specific task sets: repetitive tasks that AI can absorb, optimisation tasks that AI can accelerate, complex tasks that AI can support, and creative tasks — regulatory briefing documents, digital transformation strategy authoring — that AI can assist in drafting.
Against these task categories, Jadhav then introduced what he called the Amit Jadhav Quadrant — a matrix plotting tasks on the axes of human capability and AI adaptability, with each task positioned to show the optimal balance of human intelligence and artificial intelligence required to execute it. The quantified output: a 45 per cent time reduction across core professional functions, achievable with the free and freemium tool stack he had mapped, applied within the SALT framework.
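The quadrant logic described above — tasks plotted on axes of human capability and AI adaptability — can be rendered as a toy classifier. The thresholds and quadrant labels here are assumptions for illustration; the actual matrix and its task placements were not published.

```python
# Toy rendering of a human-capability vs AI-adaptability quadrant.
# Scores, thresholds, and labels are illustrative assumptions only.

def quadrant(human_capability: float, ai_adaptability: float) -> str:
    """Classify a task (scores in 0..1 on each axis) into one of four quadrants."""
    if ai_adaptability >= 0.5 and human_capability < 0.5:
        return "automate"   # repetitive: AI can absorb it
    if ai_adaptability >= 0.5 and human_capability >= 0.5:
        return "augment"    # AI accelerates skilled human work
    if ai_adaptability < 0.5 and human_capability >= 0.5:
        return "human-led"  # judgment-intensive, AI in support
    return "redesign"       # neither handles the task well as-is

print(quadrant(0.2, 0.9))  # e.g. routine documentation -> automate
print(quadrant(0.8, 0.3))  # e.g. a regulatory judgment call -> human-led
```

The 45 per cent figure quoted in the session would, in this framing, come from moving as many tasks as possible into the “automate” and “augment” quadrants while keeping judgment-intensive work human-led.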
“This will give you what we have created of AI, what we have created of human interventions, how to achieve 45 per cent goals,” he summarised. “And then at the end of it, we will be using all our free tools — mostly free or freemium — for each of the functions.”
The AI Agent: What Comes After the Tools
The session’s most forward-looking proposition came in the closing minutes, returning to the concept Jadhav had introduced in the formal presentation: agentic AI. Having shown the audience what AI tools can do when operated by a skilled human user, he now posed the logical extension: what happens when the tool operates itself?
“This is an actual AI agent,” he told the audience, introducing the concept with the directness of someone translating a technical paradigm into a professional reality. “It will continue to do the same work in the same way, 24 by 7, without asking a salary.” The implication for professional roles was stated without softening: “Now your work needs to be supported by creating an AI agent.” The qualifier, equally clearly stated, was that human authorisation remains the non-negotiable control point. “Nothing is possible unless you are going to authorize it. Without you, the business that you are in, the profession that you are in, your authorization is the key.”
The practical pathway — building an AI agent using free tools, layered on the PRISM-SALT architecture, configured to automate the repetitive and routine task bucket while escalating the complex, creative, and judgment-intensive tasks to human professionals — was the implicit destination of the entire 90-minute journey. The tools are the foundation. The frameworks are the structure. The agent is the outcome.
The Closing: The Phantom, the Balance, and the 50 Documents
Jadhav’s closing sequence moved through three distinct registers — personal, philosophical, and logistical — with the practiced ease of someone who has ended 450-plus sessions and knows exactly how much intellectual weight a closing passage can carry without losing the room.
The personal: a childhood memory of the Phantom, the comic strip vigilante, and the observation that “good actors are not always getting the best of the movies” — a sideways acknowledgement that professional excellence, in the AI era, requires strategic visibility as much as technical mastery.
The philosophical: a balance-frame that brought the session’s central tension to its resolution. “Unless you balance what your experience is right now, and unless you balance technology together with AI — with agile technology that thinks, that does, that creates, that duplicates you — you will never be able to get your dreams done possible.” The professional’s accumulated expertise is not the obstacle to AI adoption. It is the irreplaceable component that gives AI adoption its purpose and its boundary.
The logistical — and, for the audience, arguably the most valued: a confirmed commitment to share 50 pre-built professional documents — SOPs, audit readiness checklists, CAPA frameworks, root cause analysis templates, regulatory briefing formats — mapped to the five PRISM functions and calibrated for Indian pharmaceutical and laboratory compliance requirements, free of charge to every attendee of the session.
The Memento, the Acknowledgements, and the Handover
The session closed with Mr. Babandeep Singh — Project Director of analytica Lab India — presenting Jadhav with a memento on behalf of the organisers, to what the transcript indicates was the strongest applause of the morning. Jadhav acknowledged three individuals by name in his closing thanks: Dr. Amrit Kher, described as the advisor who recommended him for the session; Babandeep (Singh), the Project Director; and Leonora, the Messe München India session compere who had managed the floor, resolved the wi-fi issues, and — in Jadhav’s characteristically direct account — “grilled” him during the booking process while “convincing that this is a good opportunity.”
The compere, Leonora, formally closed the session and transitioned the audience to a 10-minute break before the next conference block — the Schedule M regulatory update panel — with an observation that neatly connected the two sessions: “You have the tools now. So let’s see how AI can be utilised in your work.”
The bridge was accurate. The tools had been demonstrated. The frameworks had been mapped. The 50 documents had been promised. The 45 per cent time reduction had been quantified. What the Schedule M panel would provide, immediately following, was the regulatory landscape within which all of it must operate.
It was, in that sense, near-perfect session sequencing: the capability first, the compliance context immediately after. Lab 5.0, by design rather than accident.
- Theegutla Naresh