Technological advances and biomedical breakthroughs have triggered a paradigm shift in precision medicine. Clinical research is at a turning point: scaling up processes efficiently and adapting clinical trial models to changing requirements will help transform the ecosystem. The availability of large-scale datasets, together with maturing artificial intelligence (AI) tools that can process them, can help fill persistent gaps left by traditional clinical research models.

For engineers and data scientists, this shift opens up a wide range of opportunities to rebuild the clinical trial stack with modern tools. From automating eligibility logic to predicting trial outcomes before the first patient is enrolled, AI is enabling smarter, faster, and more inclusive clinical research.

Applying AI to genomic data in targeted trials

The use of AI in genomics is one of the most promising applications in precision medicine. As sequencing techniques continue to advance and trials become more genetics-oriented, data teams face the challenge of handling, extracting, and interpreting this information at scale.

AI can be used to process complex datasets to identify the most effective therapeutic targets, to analyze sequencing data from screening tests in near real time, and, post-trial, to uncover genotype-response patterns, stratify outcomes by molecular profile, and generate hypotheses for follow-on studies. These insights not only accelerate discovery but also help refine inclusion criteria and dosing strategies in future trials, making research more iterative and responsive to the underlying biology.
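As a concrete illustration of the post-trial stratification step, here is a minimal sketch in Python using pandas. The dataset, column names, and single-variant framing are all hypothetical; a real analysis would adjust for covariates and multiple testing.

```python
import pandas as pd

# Hypothetical post-trial dataset: one row per participant, with a
# genotype call at the variant of interest and a binary response flag.
df = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "genotype": ["AA", "AG", "GG", "AG", "AA", "GG"],
    "responded": [1, 1, 0, 1, 0, 0],
})

# Stratify response rate by molecular profile: a first-pass way to
# surface genotype-response patterns worth testing in follow-on studies.
summary = (
    df.groupby("genotype")["responded"]
      .agg(n="count", response_rate="mean")
      .reset_index()
)
print(summary)
```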

Reimagining trial setup with AI-first workflows

Trial setup has long been one of the slowest, most resource-intensive phases of clinical research. It is often hampered by unstructured protocols, rigid tools, and fragmented review cycles. But in the past year, AI has started to shift the boundaries of what’s possible.

Many teams are now using large language models (LLMs) to assist with repetitive tasks such as extracting eligibility criteria from protocol text, drafting participant content and surveys, providing translations, or checking for institutional review board (IRB) readiness. These tools can dramatically reduce turnaround time and help teams focus human effort on clinical nuance rather than formatting or validation.
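To make the extraction task concrete, here is a hedged sketch using the official openai Python client. The model name, prompt, and output shape are illustrative assumptions, not a recommendation, and extracted criteria would still go through human review.

```python
import json
from openai import OpenAI  # assumes the official openai-python v1 client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """Extract the eligibility criteria from the protocol text below.
Return JSON with two arrays, "inclusion" and "exclusion", one criterion per entry.

Protocol text:
{protocol}"""

def extract_criteria(protocol_text: str) -> dict:
    # Model choice and JSON mode are illustrative; use whatever model
    # your team has validated for protocol language.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT.format(protocol=protocol_text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```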

At Sano, we’ve taken this one step further by building an internal agent that assists in generating the components of a study setup. For instance, when a sponsor wants to launch a similar study in a new region, our tools can replicate and adapt logic, language, and workflows in minutes instead of days.
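To illustrate the shape of that replication step, here is a toy sketch. It is not Sano’s actual agent, and every field name in it is hypothetical.

```python
from copy import deepcopy

def clone_study_for_region(study_config: dict, region: str, locale: str) -> dict:
    """Copy an existing study setup and retarget it to a new region.

    Real setups would also re-translate participant content and re-check
    local regulatory requirements; here we only swap the obvious fields.
    """
    new_config = deepcopy(study_config)
    new_config["region"] = region
    new_config["locale"] = locale
    # Flag region-specific workflow steps for human review rather than
    # adapting them silently.
    new_config["needs_review"] = [
        step["name"]
        for step in new_config.get("workflow", [])
        if step.get("region_specific")
    ]
    return new_config
```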

While this level of automation isn’t yet common across the industry, it demonstrates the direction trial setup tooling is heading: faster iteration, fewer handoffs, and AI integrated directly into the core workflow. It is important to note that these use cases do not entail AI handling any patient information or other sensitive data.

Engineering shifts in the age of AI-assisted trials

For engineering and data teams, AI has changed how efficiently we can set up and scale clinical trials. It has also reshaped the practice of software development itself. Tools like ChatGPT, Claude, and Cursor are now part of many engineers’ daily workflows, helping generate test coverage for legacy components, refactor messy logic, prototype new participant flows, and debug complex integrations with genomic data pipelines.

But as AI speeds up implementation, it also shifts the bottlenecks. Reviewing, validating, and understanding AI-generated code has become just as important as writing it. This is no longer optional: any code included in a pull request must be reviewed and fully understood by the engineer submitting it. 

Code review has become the key control point. Engineers now spend more time reasoning about what AI produces and shaping it into something robust. The more deeply an engineer understands the generated implementation, the more precisely they can iterate on it.

This shift also puts pressure on supporting infrastructure. Continuous integration (CI) pipelines that used to run nightly now need to validate small changes multiple times a day. The speed and cost of these pipelines become a real constraint, especially when AI accelerates how quickly new code is introduced. Optimizing CI feedback loops can directly improve team velocity and reduce friction in experimentation.
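One common way to keep those feedback loops fast is change-aware test selection. The sketch below is illustrative only: the path-to-suite mapping, file layout, and branch name are assumptions, and production systems often derive the mapping from coverage data instead.

```python
import subprocess

# Map source areas to the test suites that cover them. This mapping is
# hand-written and illustrative; in practice it might be derived from
# coverage data or the build graph.
TEST_MAP = {
    "eligibility/": ["tests/test_eligibility.py"],
    "genomics/": ["tests/test_pipelines.py", "tests/test_vcf_parsing.py"],
}

def tests_for_change(changed_files: list[str]) -> set[str]:
    """Pick the minimal test set for a changeset, falling back to everything."""
    selected: set[str] = set()
    for path in changed_files:
        suites = [t for prefix, tests in TEST_MAP.items()
                  if path.startswith(prefix) for t in tests]
        if not suites:
            return {"tests/"}  # change in an unmapped area: run the full suite
        selected.update(suites)
    return selected

changed = subprocess.run(
    ["git", "diff", "--name-only", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()
subprocess.run(["pytest", *sorted(tests_for_change(changed))], check=True)
```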

Reusable components are also more valuable than ever. A well-structured, composable system gives AI tools a strong foundation to build on, making it easier to generate functional code that fits your architecture. When these components are in place, AI can become a multiplier for product velocity without compromising maintainability. The more an engineer can make sense of what’s being generated, the better they can guide the next step, catch subtle bugs, and prevent the AI from compounding poor decisions. 
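What “composable” can mean in practice: below is a hypothetical sketch of a typed flow step that an AI tool could generate against. The interface and names are invented for illustration, not drawn from any particular codebase.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Step:
    """One reusable unit of a participant flow."""
    name: str
    render: Callable[[dict], str]      # produces participant-facing content
    validate: Callable[[dict], bool]   # checks the participant's answer

def consent_step() -> Step:
    return Step(
        name="consent",
        render=lambda ctx: f"Please review the consent form for {ctx['study']}.",
        validate=lambda answer: answer.get("consented") is True,
    )

# A flow is just an ordered list of steps. An AI tool asked to "add a
# withdrawal step" now has a narrow, well-typed target to generate against.
flow = [consent_step()]
```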

Creating the right environment for AI-native development

For teams looking to integrate AI across their clinical tooling or infrastructure stack, the biggest unlocks often come from culture, not code.

One of the most impactful changes is encouraging visibility and shared learning. At Sano, we made AI tooling accessible across the company early on and actively shared how different teams were using it, from rapid prototyping in engineering to copy refinement in operations. That culture of iteration and knowledge-sharing helped build momentum long before full-scale automation was in place.

Another critical investment is in fast, cost-efficient validation. As more code and content are generated by AI, the burden shifts to CI pipelines and quality assurance (QA) systems to catch errors, regressions, or model hallucinations. Optimizing those systems and aligning them to the structure of your product can turn experimentation into real production impact.
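As one example of a cheap guardrail a QA system can run before human review, here is a hedged sketch that flags extracted criteria with no textual support in the source protocol. The heuristic is deliberately crude; real systems would use stricter matching such as entity linking, embeddings, or reviewer sampling.

```python
def flag_unsupported_criteria(criteria: list[str], protocol_text: str) -> list[str]:
    """Flag extracted criteria whose key terms never appear in the source.

    Deliberately crude: a criterion passes if any of its longer words is
    present in the protocol. The goal is to catch outright inventions
    cheaply before the human review pass, not to prove faithfulness.
    """
    source = protocol_text.lower()
    flagged = []
    for criterion in criteria:
        key_terms = [w for w in criterion.lower().split() if len(w) > 4]
        if key_terms and not any(term in source for term in key_terms):
            flagged.append(criterion)
    return flagged
```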

Looking ahead: dynamic trials, conversational interfaces, and embedded intelligence

What comes next is less about task-by-task automation and more about holistic transformation of how trials are designed and run.

In the next one to two years, we’re likely to see:

  • Conversational agents screening participants through voice or chat
  • Dynamic trial flows that update based on incoming data or user behavior
  • Systems that pre-fill eligibility using electronic health records (EHRs) or genomics data, only prompting users when something is unclear (see the sketch after this list)
  • Embedded logic that adapts recruitment strategies based on cohort characteristics or enrollment velocity
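To make the EHR pre-fill idea concrete, here is a minimal sketch. The field names and flat-dict record are hypothetical; a real integration would map from FHIR resources and handle consent and audit logging.

```python
# Hypothetical field names; a real integration would map from FHIR
# resources rather than a flat dict, and log every pre-filled value.
REQUIRED_FIELDS = ["age", "diagnosis", "current_medications"]

def prefill_eligibility(ehr_record: dict) -> tuple[dict, list[str]]:
    """Pre-fill eligibility answers from an EHR record.

    Returns the values we could fill automatically, plus the fields the
    participant still needs to be asked about.
    """
    answers: dict = {}
    ask_user: list[str] = []
    for field in REQUIRED_FIELDS:
        value = ehr_record.get(field)
        if value is None:
            ask_user.append(field)  # missing or ambiguous: prompt the user
        else:
            answers[field] = value
    return answers, ask_user
```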

AI is accelerating and redefining trial setup. For engineers and data teams, the challenge is building tools that are not only fast, but also intelligent, traceable, and aligned with the real-world complexity of research. Smarter trials are about adaptability, transparency, and creating systems that improve with every study.
