Interview with Nicoleta J Economou, PhD, the founding director of Algorithm-Based Clinical Decision Support (ABCDS) Oversight
Emilia Chiscop-Head, PhD: You direct the Algorithm-Based Clinical Decision Support (ABCDS) Oversight, which has a crucial role in ensuring that algorithmic healthcare applications deployed at Duke Health, including those that use machine learning (ML) or artificial intelligence (AI), are safe, accurate, fair and equitable. How and when was ABCDS Oversight founded?
Nicoleta Economou, PhD: The ABCDS Oversight Committee was first formed in January 2021, and its power lies in its people! Our committee comprises subject matter experts who understand the regulatory environment and have experience in developing and implementing algorithmic healthcare tools in clinical or administrative workflows, as well as in monitoring and evaluating them. This is truly a One Duke initiative, co-chaired by Michael Pencina, the Vice Dean of Data Science of the School of Medicine, and Eric G. Poon, the Chief Health Information Officer at Duke University Health System. We strive for excellence in the delivery of patient care by incorporating quality and ethical principles into what we call the ABCDS Oversight framework.
The ABCDS Oversight framework is a people-process-technology framework at Duke. Its design began some months earlier, inspired by initial conversations with Michael Pencina, PhD, the Vice Dean of Data Science, Ben Goldstein, PhD, Associate Professor of Biostatistics & Bioinformatics, and Lisa Wruck, PhD, who was working at the Duke Clinical Research Institute (DCRI) at the time. The idea was that the evaluation of AI/ML-enabled tools in development and deployment should resemble the regulation of medical devices, by introducing appropriate checkpoints throughout the development and deployment process. To realize this vision, we needed to take into account best practices for software development, which I was quite familiar with from my prior experience deploying clinical analytics solutions for clinical trials; there are important quality considerations that the development team needs to keep in mind while developing and deploying AI/ML tools that are expected to impact clinical care, either by improving health outcomes or specific processes. Finally, to introduce algorithmic oversight into the health system, we formed a critical partnership with Eric Poon, MD, Chief Health Information Officer at the Duke University Health System. As you can imagine, this ABCDS Oversight framework is critical in ensuring impactful, fair, safe, and high-quality deployments used for clinical care. Through it, we now have a better understanding of the AI/ML tools deployed, and we have a way to assign accountability to those who develop and use these tools, so that project teams and end users are able to escalate any issues as they arise. It was the Chancellor of the Duke University Health System, the Chief Quality Officer, the Dean of the School of Medicine, and other executives of the health system who wanted this algorithmic oversight to become a reality.
We would not have been able to establish our processes without the ongoing contribution and support of Armando Bedoya, with whom I work very closely, and Duke Health Technology Solutions.
And what existed before the ABCDS Oversight was established?
Nicoleta Economou, PhD: This is a unique oversight process that we have set up at Duke. We are one of the first institutions to put a process like this in place for the governance, evaluation, and monitoring of tools and algorithms in the health system. Before that, we were relying on project teams to check and test their own work. There is now a culture shift toward establishing processes and educating project teams to deploy these tools and algorithms more responsibly.
How is the new process fundamentally different?
Nicoleta Economou, PhD: As a first step, we ask all project teams to register their tools, which allows us to have a good understanding of how many tools are out there – which we did not know before. Duke is a very big institution, and many resources are not centralized; there are data scientists who reside in different departments, tools that are developed externally, and sometimes tools that are developed in collaboration with external partners. With the new registration system, we have a clear understanding of who the business owner or clinical owner is – the person responsible for the design of the tool, the testing of the tool, and for championing the tool, the latter being important to drive adoption in clinical practice. If there are user problems, these can be escalated to us directly or through the clinical owner. During the review, we want to make sure that there is a good understanding of the risks and benefits of each tool, and that the tools developed and deployed at Duke Health are fair and equitable. The ABCDS Oversight process also helps Duke ensure business continuity: when project teams change, e.g., when people leave Duke, we can keep track of who takes on what role in the future. We also want to make sure that, once the tools are in use, the end users have all the relevant information needed to make a decision or use the tool. To ensure regulatory compliance, we maintain tight alignment with the Office of Regulatory Affairs at the School of Medicine, as they participate in the ABCDS Oversight Committee.
I would like to stress here that our goal is to have a pragmatic approach as we're trying to drive quality and raise everyone to the same standard. And we want this oversight process to be as collaborative as possible. Getting on a phone call with project teams helps us educate them around what we are doing, why we are doing it, and how we can support their work to deploy high quality algorithmic tools. Direct communication makes a difference! We continuously learn from our interactions with project teams in order to improve and make our process more intuitive. We have also established clear approval requirements for the tools and algorithms, so the project teams know what evidence they need to provide in order to get approved as well as what best practices are most important to keep in mind.
Do you have a specific example about a situation when ABCDS Oversight prevented something bad from happening?
Nicoleta Economou, PhD: Our ABCDS Oversight Committee has fairness and equity at its heart. When we review some of these tools, we ask questions in the early development stages about whether a tool is fair for our patient population. There have been instances when our committee has questioned the inclusion of sociodemographic input variables that may have not been necessary or appropriate to make the specific decisions that certain tools were intended to make. When questions like these arise in the evaluation process, we always try to work collaboratively with the project teams and discuss any concerns our committee may have.
A success story that comes to mind is related to a project team that we helped test whether removing a demographic variable from their model made any difference to its performance. By having this conversation, we were able to help that team improve their model by eliminating bias toward a specific population of patients before the algorithm was used in the clinic.
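The interview does not describe Duke's actual tooling, but the ablation test mentioned above can be sketched in a few lines: score the same cohort with and without the demographic input and compare subgroup performance. Everything below – the toy cohort, the score functions, the threshold – is invented purely for illustration.

```python
# Hypothetical sketch of the ablation test described above: compare a model's
# subgroup performance with and without a demographic input variable.
# Data, features, and the "models" are illustrative stand-ins.

def true_positive_rate(scores, labels, threshold=0.5):
    """Fraction of actual positives the model flags (sensitivity)."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

# Toy cohort: (clinical_risk, demographic_flag, outcome)
cohort = [
    (0.9, 1, 1), (0.8, 1, 1), (0.6, 1, 1), (0.2, 1, 0),
    (0.9, 0, 1), (0.7, 0, 1), (0.5, 0, 1), (0.3, 0, 0),
]

def score_with_demo(clinical, demo):
    # Variant that down-weights one subgroup via the demographic flag.
    return clinical - 0.3 * demo

def score_without_demo(clinical, demo):
    # Ablated variant: demographic input removed.
    return clinical

for name, model in [("with demographic", score_with_demo),
                    ("without demographic", score_without_demo)]:
    tprs = {}
    for group in (0, 1):
        scores = [model(c, d) for c, d, _ in cohort if d == group]
        labels = [y for _, d, y in cohort if d == group]
        tprs[group] = true_positive_rate(scores, labels)
    print(f"{name}: TPR group0={tprs[0]:.2f} TPR group1={tprs[1]:.2f}")
```

A sensitivity gap between subgroups that closes when the variable is removed – as it does in this toy example – is the kind of signal a fairness review like the one described would surface.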
What patterns did you discover when evaluating the tools and algorithms?
Nicoleta Economou, PhD: We have noticed issues with incorporating algorithms into clinical workflows, as well as equity issues for a few algorithmic tools. In response to the former, we either consult with project teams ourselves or connect them with experts who can help with workflow design. In response to the latter, we are putting together a guide and tools for the Duke community on fairness and equity considerations for those deploying algorithmic tools. Michael Cary, PhD, RN, our AI Health Equity Fellow, and Sophia Bessias, ABCDS Oversight Evaluation Lead, are spearheading this effort. We cannot take it as a given that an externally validated algorithm, for example, would be fair and equitable for our patient population. In our guide, we share mitigation strategies available to project teams to help them make this self-assessment during the early stages of tool development. The guide will be available to teams as a reference, so they can also determine, from a provided list of sociodemographic variables, whether they need to include input variables that are likely to raise fairness or equity issues in their model. Although this process may require more work from the project teams, it is necessary to ensure high-quality, fair, and safe care. Thankfully, our mission resonates with our Duke community.
How many algorithms does the ABCDS Oversight Committee oversee?
Nicoleta Economou, PhD: We have about sixty algorithmic tools in our portfolio today, and of those, about forty are registered.
Are there any other ethical challenges that you have to manage in your work?
Nicoleta Economou, PhD: Keep in mind that we are proposing a culture shift with regard to the deployment of AI/ML tools in clinical care. This shift naturally requires time to settle in. Occasionally, tools that go through our ABCDS Oversight process lack the transparency that is necessary to increase the credibility of the tool, to make it trustworthy, and to promote accountability. There are also cases of project teams that are unsure about what information they need to disclose. As a Committee, we continuously stress how important an ethical consideration transparency is, especially as we strive to work as one Duke.
An equally important challenge is the need for evidence of quality in the development and monitoring processes, which the project teams must provide through testing, validation, etc. From my past experience working in regulated environments, analytics and algorithmic solutions must always follow standard quality practices. However, to facilitate the adoption of standard quality practices, it is necessary to take a pragmatic approach where we educate folks and provide tools so that the new quality standards are met. That is how we try to address this challenge through our ABCDS Oversight framework. Ultimately, I do believe it is only a matter of time until this practice of excellence becomes second nature. What is unique about Duke is that we have a team spirit and an entrepreneurial spirit that drive us to do the best for our patients.
Perhaps an important hurdle we need to overcome is that the ABCDS Oversight framework should not be perceived as a “burden”: one more thing that project teams need to go through in order to have their tools developed and deployed. We are working towards a more proactive approach, by educating the Duke Community about the ABCDS Oversight process and how we are implementing ethical and quality principles into our algorithmic tool deployments. Towards this end, I am very grateful that I was invited to spread the word at the research town hall on Conducting Translational and Innovative Research Ethically hosted by the Duke Office of Scientific Integrity.
What is your long term vision for ABCDS Oversight?
Nicoleta Economou, PhD: We have established the bare-bones components of this framework, and there are still a lot of things we need to accomplish, such as creating standard operating procedures and streamlining the monitoring of algorithms.
We have also initiated nationwide efforts for establishing guidelines and “guardrails” around health AI technologies and have formed the Coalition for Health AI, a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. Our mission is to provide guidelines regarding an ever-evolving landscape of health AI tools to ensure high quality care, increase credibility amongst users, and meet health care needs, connecting the dots amongst developers, users and those impacted by health AI tools, even beyond Duke. I am thrilled to be part of this effort along with great colleagues at Duke.
How did your studies lead you to your current role and your current career?
Nicoleta Economou, PhD: It’s been a journey and not a straight path. My PhD was actually in biochemistry. After a brief postdoc, I took a 180-degree turn in my career for personal reasons, and I got into data analytics for clinical trials. I had never imagined this career path, because I had always envisioned being a wet lab scientist since I was a little girl. I was always interested in the “why” of things. By personality, though, I am very persistent, I am not afraid to tackle challenging situations, and I believe in remaining flexible and adapting to change. Therefore, although I initially had to climb a steep learning curve, I was able to quickly establish myself and gain recognition in my new role.
I should add that getting a PhD in basic science prepared me not just for research in basic science but, more importantly, it taught me how to approach difficult “black box” questions; questions that are difficult to see through, like in basic biological science. It taught me how to search for solutions, understand what might be happening, strategize and tackle problems, get meaningful results or try to find meaning in results.
Who inspired your career path?
Nicoleta Economou, PhD: There are several people who have inspired me along the way. I am inspired by people with a vision and the resourcefulness to pursue it. I joined Duke after a conversation with Michael Pencina. He saw the potential in me and in translating my knowledge and skills to the healthcare sector, and I am grateful that he gave me this opportunity. Another person who inspired my career path was my manager at a clinical analytics company I worked at before Duke, who encouraged open communication, team spirit, ongoing team support, and a culture that allowed for open discussion of any issues or errors, from which we all learned and continuously improved. These are things that I really strive to bring to my team, because I am a true believer in breaking down silos and in everyone working together as a team.
While working in the pharmaceutical industry, you developed some risk models and helped design and find metrics for monitoring clinical trials.
Nicoleta Economou, PhD: I developed several risk models that were used for clinical trial operations and risk-based trial monitoring. We designed these risk models, identified important risk metrics for monitoring trials, and designed interfaces that allowed easy identification of the drivers of higher risk scores and root-cause analysis for higher-risk sites. This allows a central monitor to determine what is happening at a site where clinical trials are being run, and to take action based on the risk models or risk scores. It is, in essence, a data-driven way of allocating resources, because you only go to the sites that need attention.
My team and I built these solutions in collaboration with the business owners, and developed risk scores that incorporated therapeutic-area and study-specific risk factors. We displayed the risk score like a traffic-light indicator – red, yellow, green – so monitors could determine quickly and intuitively which sites were in need of action, and we helped the end user understand: Why is it red? What is the root cause? What actions should the end user take? How can we capture the actions taken to identify trends of improvement? We can definitely learn from the clinical trials industry when it comes to developing algorithmic tools for healthcare.
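As a sketch only – the metric names, weights, and thresholds below are invented for the example, not those of any real trial system – the traffic-light idea can be expressed as a weighted composite score whose per-metric contributions are retained, so a central monitor can read the color band and the root-cause driver at a glance:

```python
# Illustrative risk-based monitoring sketch: a composite site risk score
# mapped to a traffic-light band, keeping per-metric contributions so the
# "why is it red?" question has an answer. Metrics/weights are made up.

RISK_WEIGHTS = {
    "protocol_deviations": 0.4,
    "query_aging_days":    0.35,
    "enrollment_lag":      0.25,
}

def site_risk(metrics):
    """Return (score, contributions) for one site; metrics are 0-1 normalized."""
    contributions = {name: RISK_WEIGHTS[name] * metrics[name]
                     for name in RISK_WEIGHTS}
    return sum(contributions.values()), contributions

def traffic_light(score, yellow=0.4, red=0.7):
    """Map a composite score to the red/yellow/green bands."""
    if score >= red:
        return "red"
    if score >= yellow:
        return "yellow"
    return "green"

# One hypothetical site's normalized metrics.
site = {"protocol_deviations": 0.9, "query_aging_days": 0.8, "enrollment_lag": 0.2}
score, contribs = site_risk(site)
driver = max(contribs, key=contribs.get)  # biggest root-cause contributor
print(traffic_light(score), round(score, 2), "top driver:", driver)
```

Keeping the contributions alongside the color is the design point: the indicator tells the monitor where to go, and the top driver tells them what to look at when they get there.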
Before your current role you worked on projects to bring together people, processes, technologies, and data streams required to drive evidence-based continuous improvement and innovation in clinical analytics. What was the project you were most proud of?
Nicoleta Economou, PhD: In my previous role in the pharmaceutical industry, I participated in a continuous improvement initiative to drive operational efficiency in that organization: assessing the state we wanted to improve, understanding what could be improved, and setting clear goals for how to achieve those improvements.
I would also like to note two clinical analytics projects that I am proud of, which had to do with clinical safety review and clinical data review. These were projects that I led in the pilot phase, and we subsequently won the later phase of the project. Besides these being very impactful projects for my previous organization, what I was most proud of was the culture I helped create within our team, which, in my opinion, was critical in achieving the desired results. It was important to streamline processes and to learn from each other. I mentored a lot of people, ran hands-on sessions, and trained them to become trainers of others in order to scale up. We put a structure in place so that even new people without much background could be quickly trained and onboarded so that they could help.
We sometimes needed to go back and check the quality of each other’s work. That concept of independent review was baked into our processes. Team culture, respect, and willingness to provide feedback are some of the things we practiced in both of these projects.