Significant security vulnerabilities have been uncovered in Chainlit, a widely used open-source AI framework. The flaws put sensitive data at risk, potentially allowing malicious actors to infiltrate organizations that run the framework and steal confidential information.
Zafran Security has identified these critical flaws, collectively termed ChainLeak, which could be exploited to leak cloud API keys and confidential files, or execute server-side request forgery (SSRF) attacks on servers that manage AI applications.
Chainlit is designed for building interactive chatbots and is widely adopted, with over 220,000 downloads in the past week and 7.3 million downloads in total since its inception. The latest findings, however, cast a shadow over its reliability.
The vulnerabilities in question are outlined as follows:
- CVE-2026-22218 (CVSS score: 7.1) - This vulnerability allows authenticated attackers to read arbitrary files through the update flow of the '/project/element' endpoint. Because user-controlled fields are not validated, an attacker can read any file the service account can access.
- CVE-2026-22219 (CVSS score: 8.3) - This SSRF vulnerability exists in the same update flow when using the SQLAlchemy backend, enabling attackers to make unauthorized HTTP requests to internal network services or metadata endpoints from the Chainlit server, capturing the responses for malicious use.
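The root cause reported for CVE-2026-22218 is a user-controlled path field that is never validated. The sketch below illustrates that general bug class in Python; the names (ELEMENTS_DIR, the resolver functions) are illustrative and are not Chainlit's actual API.

```python
import os

# Hypothetical element-storage directory for illustration only.
ELEMENTS_DIR = "/var/app/elements"

def resolve_element_path_unsafe(user_path: str) -> str:
    # Vulnerable pattern: os.path.join discards ELEMENTS_DIR entirely
    # when user_path is absolute, and "../" segments escape it otherwise.
    return os.path.join(ELEMENTS_DIR, user_path)

def resolve_element_path_safe(user_path: str) -> str:
    # Safer pattern: normalize the combined path, then require the
    # result to remain inside the intended directory.
    candidate = os.path.normpath(os.path.join(ELEMENTS_DIR, user_path))
    if os.path.commonpath([ELEMENTS_DIR, candidate]) != ELEMENTS_DIR:
        raise ValueError("path escapes element storage")
    return candidate
```

With the unsafe resolver, a request whose path field is '/etc/passwd' resolves straight to that file, which is exactly the arbitrary-read behavior the advisory describes.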
According to Zafran researchers Gal Zaban and Ido Shani, these two vulnerabilities can be combined in various ways to extract sensitive information, escalate privileges, and facilitate lateral movement within compromised systems. They pointed out that once an attacker gains access to read arbitrary files, the security infrastructure of the AI application begins to unravel. What starts as a seemingly minor issue can lead to direct exposure of the system's most critical secrets and internal workings.
For example, leveraging CVE-2026-22218, an attacker could access '/proc/self/environ' to uncover crucial data such as API keys and internal file paths, which could further enable deeper infiltration into the network, including access to application source code. In setups utilizing SQLAlchemy with an SQLite backend, this vulnerability could also result in the exposure of database files.
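On Linux, /proc/self/environ exposes the process environment as NUL-separated KEY=VALUE pairs, which is why a single arbitrary-file-read request against it can dump every secret exported into the process. A minimal sketch of parsing such a blob (the key names and values below are invented for illustration):

```python
def parse_environ(raw: bytes) -> dict:
    # /proc/self/environ entries are NUL-separated KEY=VALUE pairs;
    # split on NUL, drop the trailing empty entry, split each on the
    # first "=" only (values may themselves contain "=").
    pairs = (entry.split(b"=", 1) for entry in raw.split(b"\x00") if entry)
    return {k.decode(): v.decode() for k, v in pairs}

# Data shaped like a leaked environ blob (hypothetical values):
leaked = b"OPENAI_API_KEY=sk-example\x00DATABASE_URL=sqlite:///app.db\x00"
secrets = parse_environ(leaked)
```

In a real attack the raw bytes would come from the vulnerable endpoint rather than a local variable; the point is that API keys and connection strings fall out of one read.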
Following a responsible disclosure on November 23, 2025, Chainlit addressed these vulnerabilities in version 2.9.4, released on December 24, 2025.
Zafran emphasized that as organizations increasingly integrate AI frameworks and third-party components, they inadvertently embed long-standing classes of software vulnerabilities into their AI infrastructures. These frameworks give rise to new, often inadequately understood attack vectors, where familiar vulnerabilities can directly undermine the integrity of AI-driven systems.
In addition to the challenges posed by Chainlit, another noteworthy security issue was disclosed in Microsoft's MarkItDown Model Context Protocol (MCP) server, referred to as MCP fURI. The vulnerability allows arbitrary URI resources to be fetched, opening avenues for privilege escalation, SSRF, and data leakage when the server runs on an Amazon Web Services (AWS) EC2 instance using IMDSv1.
BlueRock, the company behind this discovery, explained that the flaw permits an attacker to utilize the MarkItDown MCP tool to call arbitrary URIs, exposing any HTTP or file resource due to the absence of boundaries on the URI. This means that if an attacker provides a specific URI to the MarkItDown MCP server, they could potentially query the instance metadata, acquiring credentials linked to the instance role and gaining access to the AWS account, including access and secret keys.
Analysis by BlueRock revealed that 36.7% of the more than 7,000 MCP servers it examined are susceptible to similar SSRF vulnerabilities. To minimize the risks, it is recommended to adopt IMDSv2 for stronger protection against SSRF attacks, enforce private IP restrictions, limit access to metadata services, and create allowlists to prevent data exfiltration.
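The private-IP and metadata restrictions recommended above can be sketched as a pre-request URL check. This is an illustrative outline, not BlueRock's tooling; a production guard would also resolve hostnames and pin the resolved address, since DNS rebinding can bypass a parse-time check like this one.

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False  # rejects file://, gopher://, and scheme-less URIs
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        # Hostname rather than a literal IP; in practice, resolve it
        # and re-check the resulting address before connecting.
        return True
    # Blocks RFC 1918 ranges, loopback, and link-local addresses,
    # including 169.254.169.254 (the EC2 instance metadata endpoint).
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

A guard of this shape would have stopped the metadata-credential theft path described above, because the IMDS address is link-local and the file:// scheme is rejected outright.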
These disclosures raise important questions about the security of AI frameworks and the implications of integrating third-party components: how prepared are organizations to address these emerging threats, and what steps should they take to safeguard sensitive data in the era of AI?