CVE-2020-15810 - HTTP Request Smuggling vulnerability in multiple products
Summary
An issue was discovered in Squid before 4.13 and 5.x before 5.0.4. Due to incorrect data validation, HTTP Request Smuggling attacks may succeed against HTTP and HTTPS traffic, leading to cache poisoning: any client, including browser scripts, can bypass local security and poison the proxy cache and any downstream caches with content from an arbitrary source. When relaxed header parsing is enabled (the default), Squid relays headers containing whitespace characters to upstream servers. When such whitespace appears as a prefix to a Content-Length header, Squid ignores the frame length that header specifies (allowing a conflicting length from another Content-Length header to be used) but still relays the header upstream.
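The parsing discrepancy can be sketched in Python. This is a hypothetical illustration, not Squid's actual code: `parse_content_length` is an invented helper showing how a lenient parser that strips whitespace around header names honors a ` Content-Length` header that a strict (RFC 7230) parser would ignore, so the two sides disagree on the body length.

```python
def parse_content_length(raw_headers: bytes, lenient: bool):
    """Return the Content-Length value a parser would use, or None.

    Hypothetical sketch: a lenient parser strips whitespace around the
    header name (mimicking relaxed parsing); a strict parser rejects
    header names containing whitespace, as RFC 7230 requires.
    """
    length = None
    for line in raw_headers.split(b"\r\n"):
        if b":" not in line:
            continue
        name, _, value = line.partition(b":")
        if lenient:
            name = name.strip()  # " Content-Length" -> "Content-Length"
        if name.lower() == b"content-length":
            # Last matching header wins in this sketch.
            length = int(value.strip().decode("ascii"))
    return length

# Two conflicting Content-Length headers; the second has a space prefix.
headers = (b"Host: example.com\r\n"
           b"Content-Length: 0\r\n"
           b" Content-Length: 31\r\n")

print(parse_content_length(headers, lenient=False))  # 0: strict ignores the prefixed header
print(parse_content_length(headers, lenient=True))   # 31: lenient honors it
```

A strict endpoint frames the message with length 0 while the lenient one frames it with length 31, which is exactly the conflicting-length condition the advisory describes.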
Common Attack Pattern Enumeration and Classification (CAPEC)
- HTTP Request Splitting: HTTP Request Splitting (also known as HTTP Request Smuggling) is an attack pattern in which an attacker inserts additional HTTP requests into the body of the original (enveloping) HTTP request, so that the browser interprets it as one request but the web server interprets it as two. There are several ways to perform HTTP request splitting attacks. One is to include double Content-Length headers in the request, exploiting the fact that the devices parsing the request may each use a different header. Another is to submit an HTTP request with a "Transfer-Encoding: chunked" header set via setRequestHeader, allowing a payload in the HTTP request that a subsequent parsing entity treats as a separate HTTP request. A third is the "double CR in an HTTP header" technique. There are also a few less general techniques targeting specific parsing vulnerabilities in certain web servers.
- HTTP Request Smuggling: HTTP Request Smuggling results from discrepancies in how HTTP entities such as web servers, web caching proxies, application firewalls, or simple proxies parse HTTP requests. When two or more such entities sit in the path of an HTTP request, a specially crafted request can be seen by the two attacked entities as two different sets of requests. This allows certain requests to be smuggled through to the second entity without the first one realizing it.
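The split interpretation at the heart of both patterns can be illustrated with a toy parser. This is purely hypothetical code, not any real proxy's implementation: `split_requests` is an invented helper that frames requests by Content-Length, and the flag selects whether the first or the last of two duplicate Content-Length headers wins. A front-end honoring the last header forwards the stream as one request, while a back-end honoring the first header sees the leftover body bytes as a second, smuggled request.

```python
def split_requests(stream: bytes, use_first_cl: bool):
    """Split a raw byte stream into requests using Content-Length framing.

    Toy parser for illustration only; real proxies are far more complex.
    When duplicate Content-Length headers are present, `use_first_cl`
    selects whether the first or the last one wins.
    """
    requests = []
    while stream:
        head, sep, rest = stream.partition(b"\r\n\r\n")
        if not sep:
            break
        lengths = [int(line.split(b":")[1].decode("ascii"))
                   for line in head.split(b"\r\n")
                   if line.lower().startswith(b"content-length:")]
        body_len = (lengths[0] if use_first_cl else lengths[-1]) if lengths else 0
        requests.append(head + sep + rest[:body_len])
        stream = rest[body_len:]
    return requests

# One byte stream with two Content-Length headers that disagree.
# The "body" (30 bytes per the second header) is itself a full request.
stream = (b"POST / HTTP/1.1\r\n"
          b"Content-Length: 0\r\n"
          b"Content-Length: 30\r\n"
          b"\r\n"
          b"GET /poison HTTP/1.1\r\n"
          b"X: y\r\n\r\n")

print(len(split_requests(stream, use_first_cl=True)))   # 2: first header wins, body parsed as a new request
print(len(split_requests(stream, use_first_cl=False)))  # 1: last header wins, body swallowed
```

The same bytes yield one request to one parser and two to the other; the hidden GET is the smuggled request that can poison a downstream cache.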
References
- https://www.debian.org/security/2020/dsa-4751
- https://usn.ubuntu.com/4477-1/
- https://github.com/squid-cache/squid/security/advisories/GHSA-3365-q9qx-f98m
- http://lists.opensuse.org/opensuse-security-announce/2020-09/msg00012.html
- http://lists.opensuse.org/opensuse-security-announce/2020-09/msg00017.html
- https://usn.ubuntu.com/4551-1/
- https://lists.debian.org/debian-lts-announce/2020/10/msg00005.html
- https://security.netapp.com/advisory/ntap-20210219-0007/
- https://security.netapp.com/advisory/ntap-20210226-0006/
- https://security.netapp.com/advisory/ntap-20210226-0007/
- https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/HJJDI7JQFGQLVNCKMVY64LAFMKERAOK7/
- https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/BMTFLVB7GLRF2CKGFPZ4G4R5DIIPHWI3/
- https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/BE6FKUN7IGTIR2MEEMWYDT7N5EJJLZI2/