package: haproxy
- fix for CVE-2013-1912

git-svn-id: svn://svn.openwrt.org/openwrt/packages@36190 3c298f89-4303-0410-b956-a3cf2f4a3e73
parent 1507ec8b6a
commit 1cd0b6b2e6
@@ -10,7 +10,7 @@ include $(TOPDIR)/rules.mk
 
 PKG_NAME:=haproxy
 PKG_VERSION:=1.4.22
-PKG_RELEASE:=42
+PKG_RELEASE:=43
 
 PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION).tar.gz
 PKG_SOURCE_URL:=http://haproxy.1wt.eu/download/1.4/src
net/haproxy/patches/0043-fix-cve-2013-1912.patch (new file, 102 lines)
@@ -0,0 +1,102 @@
From: Willy Tarreau <w@1wt.eu>
Date: Fri, 29 Mar 2013 11:31:49 +0000 (+0100)
Subject: BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
X-Git-Tag: v1.4.23~1
X-Git-Url: http://git.1wt.eu:81/web?p=haproxy-1.4.git;a=commitdiff_plain;h=dc80672211

BUG/CRITICAL: using HTTP information in tcp-request content may crash the process

During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().

However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.

A number of conditions are required for the bug to be triggered:
  - HTTP keep-alive must be enabled ;
  - HTTP inspection in TCP rules must be used ;
  - some request appending rules are needed (reqadd, x-forwarded-for)
  - since empty buffers are always realigned, the client must pipeline
    enough requests so that the buffer always contains something till
    the point where there is no more room for rewriting.

While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.

A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.

At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.

Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.

CVE-2013-1912 was assigned to this issue.

Note that 1.4 is also affected so the fix must be backported.
(cherry picked from commit aae75e3279c6c9bd136413a72dafdcd4986bb89a)
---

diff --git a/src/proto_http.c b/src/proto_http.c
index a52c038..a768eb5 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -8347,6 +8347,14 @@ acl_fetch_proto_http(struct proxy *px, struct session *s, void *l7, int dir,
 		return 1;
 	}
 
+	/* If the buffer does not leave enough free space at the end,
+	 * we must first realign it.
+	 */
+	if (unlikely(req->lr > req->data &&
+	    (req->r < req->lr || req->r > req->data + req->size - global.tune.maxrewrite)) &&
+	    (req->l <= req->size - global.tune.maxrewrite))
+		http_buffer_heavy_realign(req, msg);
+
 	/* Try to decode HTTP request */
 	if (likely(req->lr < req->r))
 		http_msg_analyzer(req, msg, &txn->hdr_idx);
@@ -8364,6 +8372,20 @@ acl_fetch_proto_http(struct proxy *px, struct session *s, void *l7, int dir,
 	/* OK we got a valid HTTP request. We have some minor preparation to
 	 * perform so that further checks can rely on HTTP tests.
 	 */
+
+	/* If the request was parsed but was too large, we must absolutely
+	 * return an error so that it is not processed. At the moment this
+	 * cannot happen, but if the parsers are to change in the future,
+	 * we want this check to be maintained.
+	 */
+	if (unlikely(req->lr > req->data &&
+	    (req->r < req->lr || req->l > req->size - global.tune.maxrewrite ||
+	     req->r > req->data + req->size - global.tune.maxrewrite))) {
+		msg->msg_state = HTTP_MSG_ERROR;
+		test->flags |= ACL_TEST_F_SET_RES_PASS;
+		return 1;
+	}
+
 	txn->meth = find_http_meth(msg->sol, msg->sl.rq.m_l);
 	if (txn->meth == HTTP_METH_GET || txn->meth == HTTP_METH_HEAD)
 		s->flags |= SN_REDIRECTABLE;