Prioritizing Ethical AI Use in Medicaid: Waymark’s Human-Centered Approach

by Waymark

December 2, 2025

The promise of artificial intelligence (AI) in healthcare is vast, but for Medicaid populations — communities that have historically faced the greatest barriers to equitable care — the stakes of getting AI ethics right have never been higher. 

A landmark 2019 study published in Science revealed a troubling reality: a widely used healthcare algorithm exhibited significant racial bias, requiring Black patients to be consistently sicker than white patients to receive the same level of care recommendations. This wasn’t an isolated incident — it was symptomatic of a broader crisis in healthcare AI that continues to perpetuate and amplify existing disparities.

The challenge is deeper than algorithmic bias alone. Research consistently shows that AI diagnostic tools “selectively under-diagnose underserved patient populations,” with higher error rates for marginalized communities who already face the greatest barriers to accessing high-quality healthcare. For Medicaid populations, these issues are compounded by several critical factors.

Lack of transparency in how AI systems make decisions about care

The opacity of healthcare AI systems creates a dangerous black box scenario where neither patients nor their providers understand how critical medical decisions are being made. Proprietary algorithms prevent healthcare providers from fully understanding how decisions are made, potentially undermining trust, while patients remain completely in the dark about the factors influencing their care recommendations. This crisis has prompted legislative action, with New York's 2024 legislation mandating significant oversight and transparency in AI utilization management, requiring health insurers to conduct clinical peer review of AI-based decisions and disclose their use of AI on their websites.

When healthcare providers cannot explain how an AI system reached a particular recommendation, they cannot adequately advocate for their patients or identify when the system may be making biased decisions. Transparent AI systems must provide explanations for their outputs, enabling healthcare providers to understand not only the decisions but also the data used to train algorithms and any potential biases present within them. Recent research specifically targets increasing transparency and tackling potential bias in medical AI technologies, recognizing that without explainable AI systems, healthcare disparities will only deepen.
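
To make “explanations for their outputs” concrete, here is a minimal sketch of one common pattern: a linear risk model whose per-feature contributions to a patient’s score can be shown to a provider. The feature names, data, and model here are hypothetical stand-ins, not a description of any production system:

```python
# A minimal sketch: expose each feature's additive contribution to a
# linear risk model's log-odds so a provider can see what drove a score.
# Feature names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["prior_ed_visits", "hba1c", "missed_appointments", "age"]

# Synthetic records standing in for real clinical data.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(patient_row):
    """Print each feature's contribution to this patient's log-odds."""
    z = scaler.transform(patient_row.reshape(1, -1))[0]
    for name, c in sorted(zip(feature_names, model.coef_[0] * z),
                          key=lambda nc: -abs(nc[1])):
        print(f"{name:>20}: {c:+.3f}")

explain(X[0])  # provider-facing view for one patient
```

More opaque model families need dedicated explanation tooling, but the goal is the same: a provider should be able to see which factors drove a recommendation before acting on it.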

Insufficient community engagement in the development and deployment of these technologies

Community engagement in AI healthcare application development, validation, or implementation is rare, despite healthcare delivery occurring primarily in community settings. This pattern is particularly troubling given that adoption of AI tools has often come with limited transparency and oversight, and little to no engagement with patients and communities, particularly those most impacted by structural inequity.

The absence of meaningful community involvement perpetuates existing healthcare disparities by ensuring that AI systems reflect the perspectives and priorities of their developers rather than the populations they serve. Despite the benefits of AI integration into healthcare, potential harms include algorithmic bias, inadequate consent processes, and damage to the patient-provider relationship; patient engagement is one tool for addressing patients’ needs and preventing these harms. When communities—especially marginalized communities—are excluded from the development process, the resulting technologies inevitably fail to address their specific needs and may even exacerbate existing barriers to care.

AI’s Role in Supporting Human-Centered Care 

Waymark's technology is built by teams with a deep understanding of the need for ethics and governance principles around AI in healthcare. Our AI exists to empower multidisciplinary care teams—including community health workers, pharmacists, therapists, and care coordinators—enabling them to deliver more personalized, effective, and equitable care. Our AI and machine learning (ML) tools are explicitly designed to support and enhance human relationships in care delivery. 

AI-centered approaches:

  • Surface best practices based on current research
  • Identify care gaps and make care recommendations
  • Enhance provider capacity

while human-centered approaches ensure that (see the sketch after this list):

  • Healthcare providers make the final care decisions based on identified best practices
  • Care teams interpret and act on these insights in culturally appropriate ways
  • The human elements of care that matter most remain in place
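
Here is a minimal sketch of that division of labor, assuming a simple recommendation object that cannot drive any outreach until a named clinician signs off. All names and fields are hypothetical illustrations:

```python
# A sketch of the human-in-the-loop pattern: the model may only
# *propose* actions; a named clinician must accept or reject each one
# before anything reaches the patient. All names and fields are
# hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CareRecommendation:
    patient_id: str
    suggestion: str               # e.g., "schedule HbA1c re-check"
    rationale: str                # evidence surfaced by the model
    reviewed_by: Optional[str] = None
    accepted: Optional[bool] = None

    def review(self, clinician: str, accept: bool) -> None:
        """Record the human decision; nothing is actionable without it."""
        self.reviewed_by = clinician
        self.accepted = accept

    @property
    def actionable(self) -> bool:
        # Outreach can happen only after explicit human acceptance.
        return self.accepted is True

rec = CareRecommendation("pt-001", "schedule HbA1c re-check",
                         "rising risk score; two missed appointments")
assert not rec.actionable           # model output alone does nothing
rec.review(clinician="Dr. Rivera", accept=True)
assert rec.actionable               # only now can the care team act
```

The design choice is deliberate: the model can propose and explain, but only a person can decide.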

Medicaid patients deserve technology comparable to that available for commercially insured and Medicare populations—particularly in the areas of digital health, generative AI, and machine learning. To build tools that address the unique needs of Medicaid patients, we need to bring together the consumers of these products: care team members who are patient-facing and on the front lines, operations leaders who oversee those teams, and technologists for design and testing. Our care team needs to see themselves reflected in these tools to increase adoption and trust.

We also publish our research in peer-reviewed journals and share all of our code as open source. Our goal is to enable organizations to build on each other’s work rather than start from scratch, fostering collaboration and accelerating impact across the sector. We consistently evaluate our algorithms for equity to ensure our tools prioritize patients from historically marginalized backgrounds, races, and ethnicities.

At Waymark, we believe there’s a path forward to augment care delivery with AI and ML tools calibrated to support care delivery teams and the populations they serve. We hope that our approach, and the perspectives that guide it, serves as a model for those serving patients receiving Medicaid.

Our Governance Framework: Pillars of Ethical AI 

Our AI and quality governance frameworks are inseparable from our philosophy of care: both are centered on people, patients and care teams alike, supported by accurate, interpretable data. To that end, we have developed a unique set of governance principles that guide every aspect of our technology and operations:

  1. Respect for privacy and security. All AI and ML applications developed by Waymark comply with privacy and security regulations and standards, and the data in those applications is only accessed, collected and processed by authorized personnel.
  2. Ethical considerations. In developing new applications, Waymark’s teams carefully consider the impact of AI and ML on the individuals and communities we serve, and ensure that our products are developed and used in ways that are transparent, unbiased and fair. 
  3. Fairness-focused design. In designing any AI or ML application, Waymark ensures all machine learning models are regularly tested and monitored using current standards for fairness metrics at the time of development (for example, identifying needs rather than costs as an outcome, and applying bias detection algorithms; see the sketch after this list), and that any biases are addressed before the product is used.
  4. Transparency and accountability. Waymark has a robust accountability and feedback matrix that makes the development and use of AI and ML applications clear to internal and external stakeholders. Our teams also ensure that the individuals who use these applications can question decisions made by AI and ML systems, and that the creators of these systems can be held accountable for any negative impacts those systems cause.
  5. Training, education, and ongoing improvement. All employees, contractors, and vendors involved in the use of AI and ML applications are thoroughly trained on the development and use of these applications, and these applications are regularly evaluated for their effectiveness, fairness, transparency and ethical considerations.
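
As one concrete example of what the fairness testing named in principle 3 can look like, the sketch below compares a model’s sensitivity (recall) across demographic groups and fails loudly when the gap exceeds a threshold. The threshold, group labels, and data are hypothetical illustrations, not Waymark’s actual release criteria:

```python
# A sketch of a group-fairness check: compare sensitivity (recall)
# across demographic groups and raise if the gap exceeds a threshold.
# Thresholds, group labels, and data are hypothetical illustrations.
import numpy as np
from sklearn.metrics import recall_score

def sensitivity_by_group(y_true, y_pred, groups):
    """Recall for each demographic group in `groups`."""
    return {
        g: recall_score(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

def assert_equalized_sensitivity(y_true, y_pred, groups, max_gap=0.05):
    """Raise if any two groups' sensitivities differ by more than max_gap."""
    rates = sensitivity_by_group(np.asarray(y_true), np.asarray(y_pred),
                                 np.asarray(groups))
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        raise ValueError(f"Sensitivity gap {gap:.3f} exceeds {max_gap}: {rates}")
    return rates

# Example run with toy predictions for two groups.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(assert_equalized_sensitivity(y_true, y_pred, groups, max_gap=0.5))
```

A check like this can run as a gate in a model release pipeline, so that a sensitivity gap blocks deployment rather than surfacing only after patients are affected.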

This transparency isn't just good ethics—it produces better outcomes. Our proprietary machine learning platform, Waymark Signal™, identifies rising-risk patients with over 90% accuracy—three times higher than the next-best algorithm. Some studies note that up to 50% of Medicaid patients’ health outcomes are attributable to social needs, and Signal factors in data points related to those social needs when flagging patients.

More importantly, Signal™ reverses the Black-white prediction bias found in conventional models, offering higher sensitivity for Black patients' needs and providing a more equitable approach to risk modeling and care delivery. Without deliberate design choices like these, healthcare AI tends to harm most the very patients who need equitable care and whom these technologies are meant to help.
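
Signal’s pipeline is proprietary, but the two design choices described above (a need-based label rather than a cost-based one, and social-needs signals alongside clinical history) can be illustrated generically. All column names and data below are hypothetical:

```python
# A generic sketch, not Waymark Signal's actual pipeline: social-needs
# signals enter the rising-risk model alongside clinical history, and
# the target is a *health need*, not projected cost. Column names and
# data are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

clinical = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "ed_visits_12m": [3, 0, 1, 5],
    "chronic_conditions": [2, 0, 1, 3],
})
social_needs = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "housing_insecure": [1, 0, 0, 1],
    "food_insecure": [1, 0, 1, 1],
    "transport_barrier": [0, 0, 1, 1],
})
# Label: a future unmet health need (e.g., an avoidable admission),
# chosen instead of future cost to avoid the cost-as-proxy bias
# documented in the 2019 Science study.
labels = pd.Series([1, 0, 0, 1], name="avoidable_admission_6m")

features = (clinical.merge(social_needs, on="patient_id")
                    .drop(columns="patient_id"))
model = GradientBoostingClassifier().fit(features, labels)
print(model.predict_proba(features)[:, 1])  # rising-risk scores
```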

Moving Forward Together 

The potential of AI to transform healthcare for good is real, but it will only be realized if we commit to doing this work differently. At Waymark, we reject the notion that AI's primary function in Medicaid care delivery is to deny care or automate away human connection. Instead, we see AI as a powerful tool for advancing health equity—but only when it's developed transparently, deployed accountably, and governed by the communities it serves.

The path forward requires unprecedented collaboration between technology developers, healthcare providers, community advocates, and the patients and families who have the most at stake. By centering equity in our AI development process and committing to radical transparency, we can ensure that AI truly serves its promise: better health outcomes for all, especially those who have been historically underserved by our healthcare system.
