Responsible Use of AI within CMS
Last Reviewed: August 26, 2025
Responsible Use of Artificial Intelligence (AI) at CMS
The Centers for Medicare & Medicaid Services (CMS) has seen rapid growth in the awareness and application of artificial intelligence (AI) technologies across healthcare and administrative domains. As AI tools become more integrated into daily operations, it is essential to ensure the secure and ethical use of such technologies, especially when handling sensitive information. This guide outlines CMS best practices for the responsible use of Generative AI Tools (GATs), particularly in relation to safeguarding Personally Identifiable Information (PII) and Protected Health Information (PHI).
These practices apply to all CMS employees, contractors, and any organizations or individuals working on behalf of CMS who use or intend to use AI tools or systems, including both internal resources and third-party platforms. This guidance is intended to ensure compliance, security, and ethical use throughout all CMS-related activities.
AI Overview
Artificial Intelligence (AI) refers to technologies that enable computers to simulate human intelligence, such as understanding language, recognizing patterns, making decisions, or solving problems. Generative AI Tools (GATs) are a class of AI systems designed to create new content, including text, images, sounds, animations, and 3D models, in response to user input. These tools use neural network computer models that process data in ways inspired by the human brain to detect patterns and generate original output. Common examples of GATs in practice include ChatGPT for conversation, automated language translation, artwork generated from written prompts, personalized treatment plans based on medical records, and chatbots that provide customer service or handle frequently asked questions.
Guidance for Responsible AI Use
The adoption of any tool or technology requires knowledge of its safe and responsible use. Because the field of AI is fluid and constantly evolving, organizations must be able to respond effectively in a complex and changing environment.
To help CMS staff use GATs safely and responsibly in the current environment, the following collection of suggested best practices and resources serves as a foundation. These best practices highlight important, high-level ideas from federal guidance such as the NIST AI Risk Management Framework (AI RMF), Executive Orders, and OMB memos.
Understand the Fundamentals
To use GATs responsibly, users must first acquire a basic understanding of how these tools function and the scenarios in which they are appropriately applied. To gain a better understanding of a model’s behavior and outputs, conduct research and experiment with various inputs. Keep in mind the limitations of generative AI and Large Language Models (LLMs) for your particular use case.
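One way to make that experimentation systematic is to run the same task through several phrasings and collect the outputs for side-by-side comparison. The sketch below is a minimal illustration only: the `generate` function is a hypothetical placeholder for whatever GAT client your team has been authorized to use, not a CMS or vendor API.

```python
# Minimal sketch of structured prompt experimentation.
# generate() is a hypothetical placeholder; wire it to an authorized
# GAT client before use, and never include sensitive data in prompts.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to an approved GAT."""
    raise NotImplementedError("Connect to your authorized GAT client.")

def probe_model(task: str, phrasings: list[str]) -> list[dict]:
    """Run one task under several phrasings so reviewers can compare
    outputs for consistency, quality, and failure modes."""
    results = []
    for phrasing in phrasings:
        prompt = f"{task}\n\nPhrasing: {phrasing}"
        results.append({"prompt": prompt, "output": generate(prompt)})
    return results
```

Comparing outputs across phrasings is a simple way to surface the limitations the guidance above asks you to keep in mind.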
Assess Suitability and Impact
Generative AI has advantages as well as disadvantages. Before and during a project, users should work with subject matter experts to determine whether generative AI is appropriate for their use cases. They should consider the underlying mechanisms of GATs, the sensitivity of the use case and its potential effects on various stakeholders, the data that will be required, and the intended use of generated content.
Assess Data and Input
Users need to be aware of what they are giving GATs and how various platforms store the data they provide and potentially use it to train their models. Refrain from interacting with any GAT directly using CMS program data, PHI, or PII. Make sure every input complies with data protection laws, is impartial, and is of sufficient quality for the intended use. In addition to PII and PHI, data protection also applies to any sensitive or non-public information as well as CMS intellectual property.
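A simple technical backstop for this rule is to route every prompt through a single screening chokepoint before it can reach any GAT. The sketch below is illustrative and assumption-laden: the patterns shown (SSN, email, phone) are examples only and far from exhaustive, and pattern matching never substitutes for the policy that CMS program data, PII, and PHI must not be submitted at all.

```python
import re

# Minimal pre-submission screen. The pattern list is illustrative,
# not exhaustive; this is a backstop, not an authorization mechanism.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> str:
    """Refuse to pass along any prompt that matches a known-sensitive
    pattern; return the prompt unchanged only if it screens clean."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        raise ValueError(f"Prompt blocked; possible sensitive data: {hits}")
    return prompt
```

In practice such a screen would sit inside the approved client wrapper so that unscreened text has no path to the tool.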
Implement Human Guardrails
Apply human oversight to all AI systems before their outputs are used for business decisions or shared externally. Conduct ethical reviews of system performance and outputs to identify and mitigate risks such as bias, inaccuracies, misinformation, information leaks, compliance violations, or threats to safety and security. If risks or issues are identified, limit or suspend the use of GATs until they are resolved: the risks have been understood, appropriate fixes have been put in place, and those fixes have been confirmed to work so the tool can be used safely and responsibly under CMS and federal guidelines. Work with stakeholders, business owners, and, if necessary, the Office of Information Technology (OIT) when risks arise.
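One lightweight way to make that oversight concrete is to wrap every GAT output in a record that cannot be released until a named human reviewer approves it. The sketch below is hypothetical: the field names and workflow are illustrative, and a real implementation would follow the review procedures your component defines.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Minimal human-review gate: content cannot be released until a named
# reviewer has signed off. All names and fields are illustrative.

@dataclass
class ReviewedOutput:
    content: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        """Record the reviewer who checked for bias, inaccuracies,
        leaks, compliance violations, and safety issues."""
        self.approved_by = reviewer

    def release(self) -> str:
        """Refuse to hand back content that no human has reviewed."""
        if self.approved_by is None:
            raise PermissionError("Output has not been human-reviewed.")
        return self.content
```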
Maintain Compliance
Users should make sure that their use of GATs, and any effects they have, are compliant with CMS and U.S. regulations, as with any work. Examine the legal and regulatory environment for any laws, standards, policies, and regulations pertaining to AI, data privacy, bias, and discrimination that might be relevant to your use of generated content and GATs. It is always advisable to consult the latest guidelines and standards issued by CMS and the Federal Government.
Communicate with Stakeholders
Open and transparent communication with all relevant stakeholders is essential when using GATs for CMS work. Best practices for communicating with relevant stakeholders include, but are not limited to, the following:
- Clearly inform stakeholders whenever GATs are used, including the purpose of AI use and the types of content being produced.
- Disclose technical limitations, risks, and the specific safeguards or best practices implemented to ensure responsible use.
- Communicate any uncertainties or ambiguities in AI-generated outputs, especially if information is preliminary, synthetic, or may impact decision-making.
- Provide proper citations for sources and document the reasoning or thought process behind AI-generated results to ensure traceability and accountability.
- Remain alert to forthcoming federal guidelines and requirements for authenticating and labeling official U.S. Government digital content generated by AI, as these measures are critical for maintaining public trust in agency communications.
By proactively informing and engaging stakeholders, CMS staff help ensure AI use remains transparent, trustworthy, and aligned with public expectations and regulatory requirements.
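As a sketch of what such transparency can look like in practice, the illustrative helper below bundles AI-assisted content with the disclosures stakeholders need, so that purpose, tool, limitations, and sources travel with the content itself. All field names are hypothetical assumptions, not a CMS schema.

```python
# Illustrative disclosure wrapper for AI-assisted content.
# Field names are hypothetical, not a CMS-mandated format.

def label_output(content: str, tool: str, purpose: str,
                 limitations: list[str], sources: list[str]) -> dict:
    """Attach the disclosures stakeholders need to the content itself."""
    return {
        "content": content,
        "disclosure": {
            "generated_with": tool,            # which GAT produced it
            "purpose": purpose,                # why AI was used
            "known_limitations": limitations,  # risks and uncertainties
            "cited_sources": sources,          # for traceability
        },
    }
```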
Document Processes and Accountability
Users are fully responsible for the impact and accuracy of all content produced with GATs at CMS. To ensure effective oversight and transparency, users must:
- Maintain comprehensive records of all GAT use, including input prompts, model configurations, training datasets, and evaluation or monitoring activities.
- Clearly define team roles and responsibilities, access levels, and the scope of GAT deployment.
- Document mitigation plans for any identified risks and impacts, as well as the procedures for ongoing monitoring and ethical review. Templates for this documentation can be found in the CMS Artificial Intelligence Playbook, Version 4, September 12, 2025. https://ai.cms.gov/CMS-AI-Playbook.pdf
- Regularly review and update documentation to promote accountability and support verification, troubleshooting, and reproducibility.
Periodically auditing records and processes is recommended to ensure compliance with CMS policy, support investigations, and facilitate process improvement.
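A minimal sketch of what one such usage record might look like appears below. The fields are illustrative assumptions; authoritative templates are in the CMS Artificial Intelligence Playbook cited above.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative GAT usage record, appended one JSON line per use so
# records stay easy to audit. Fields are assumptions, not a
# CMS-mandated schema.

@dataclass
class GatUsageRecord:
    user: str             # who submitted the prompt
    model: str            # model name and version
    prompt: str           # exact input submitted
    config: dict          # settings such as temperature or system prompt
    output_summary: str   # brief description of what was produced
    mitigations: list     # notes on identified risks and mitigations

def append_record(record: GatUsageRecord,
                  path: str = "gat_audit.jsonl") -> None:
    """Append one JSON line per GAT use for later review and audit."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```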
Usage of PII and PHI
The responsible handling of PII and PHI is essential whenever generative AI tools are used, especially tools accessible outside of CMS networks. Failure to handle this data responsibly can lead to privacy violations, regulatory penalties, and reputational harm.
- Never disclose or input PII, PHI, or any sensitive CMS data (including financial, health, vendor, procurement, evaluations, draft policies, or proprietary/business information) into publicly accessible AI platforms, chatbots, or prompts.
- Do not enter information relating to CMS or U.S. Government pre-decisional intentions or other confidential plans into AI tools.
- Share sensitive data only for official purposes and exclusively through secure channels with authorized personnel who have a legitimate need to know.
- Do not rely solely on AI-generated output for formulating policies or decisions. Always validate information using trustworthy sources, expert consultations, and independent research.
- Treat outputs from third-party AI tools as potentially inaccurate or biased, verifying their accuracy, appropriateness, and suitability before use.
- Users are accountable for materials and content produced with third-party AI tools, so always confirm reliability and factual integrity.
- Immediately report any suspected or verified privacy breach or unintended disclosure of sensitive information to the CMS IT Service Desk (CMS_IT_Service_Desk@cms.hhs.gov or 1-800-562-1963) and follow all applicable policies and laws.
- Continually educate and remind staff about the risks and consequences of sharing CMS business data with public AI tools; if you would hesitate to share information with an external expert, do not submit it to a generative AI platform.
Adhering to these guidelines safeguards privacy, meets legal obligations, and maintains the trust placed in CMS to protect sensitive information when using AI resources.
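Complementing the screening check sketched earlier, teams sometimes add a redaction pass that masks obvious sensitive spans before text approaches any external tool. The sketch below is illustrative only: the patterns are examples, and redaction reduces risk but never authorizes sharing data that policy prohibits sharing; when in doubt, do not submit at all.

```python
import re

# Illustrative redaction pass. Masking reduces risk; it does not
# authorize sharing data that policy prohibits. Patterns are examples.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[REDACTED-PHONE]"),
]

def redact(text: str) -> str:
    """Replace matched sensitive spans with labeled placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```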
Conclusion
By following the guidance in this document, CMS employees and contractors will ensure that the use of artificial intelligence is secure, ethical, and in full compliance with privacy laws and agency policy. Consistent application of these principles will protect sensitive information, build public trust, and enable CMS to responsibly leverage AI’s benefits to advance its mission.
Additional Resources
CMS, HHS, and the Office of Management and Budget (OMB) are committed to staying current with the rapidly evolving field of artificial intelligence and prioritizing the protection of PII and PHI. Multiple federal agencies have published official guidance, frameworks, and playbooks to support the secure and ethical use of AI in government operations. The following is a curated list of the most up-to-date reference documents available for further guidance and compliance support.
- NIST – National Institute of Standards and Technology: https://www.nist.gov/artificial-intelligence
- OMB Memorandum M-25-22 – Driving Efficient Acquisition of Artificial Intelligence in Government (April 3, 2025): https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf
- Executive Order – Preventing Woke AI in the Federal Government (July 23, 2025): https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/
- White House Memo – Winning the Race: America’s AI Action Plan (July 2025): https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf