Nathan Crawshaw
https://legacy-blog.ndcrawshaw.uk/

The little server that could
https://legacy-blog.ndcrawshaw.uk/the-little-server-that-could/
Tue, 12 Mar 2019 19:15:00 GMT

Over the years, my website has jumped between a few different platforms and servers, most recently when I decided to move to Ghost at the beginning of last year. For that move, I repurposed the oldest running VPS I still had, so that I could install Ghost on its own dedicated host, due to some quirks with the older versions of Ghost. That machine was actually my web server back in 2015, but it is no longer powerful enough to run all the sites I manage, so until the switch last year it was just running TeamSpeak, as it only costs me £2 a month to run.

However, this week has not been a good week for the little server that could. With issue after issue, outage after outage, I have had to retire the poor thing. Over the years this server has been the testing platform for so many ideas and helped me learn so much.

Over the last week, I had the first unplanned outage on my portfolio in over a year, and it was quite a substantial one: about an hour and a half where the website was completely unreachable, and about the same again where it was not responding correctly.

I had been designing a new platform for my web services, and this issue has brought those plans forward. Today I migrated my portfolio website to my new Docker Swarm. The platform is still in development, but from my testing it should be far more reliable: any single host can die and everything will keep running, minus a slight outage while the new containers boot, and the swarm can react far faster than I ever could.
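
To illustrate why any single host can die safely, a Swarm stack file just declares the desired state and lets the scheduler reconcile it. This is a hypothetical sketch rather than my actual stack definition; the service layout and port are assumptions:

```
# hypothetical stack file; deploy with: docker stack deploy -c portfolio.yml portfolio
version: "3.7"
services:
  ghost:
    image: ghost:latest
    ports:
      - "2368:2368"            # Ghost's default port
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure  # Swarm reschedules the container if its host dies
```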

As part of this update, which is still in progress, I moved from using the stand-alone version of Ghost to using the Docker image. This has not only allowed me to reuse the SQL cluster already running on this new platform, but also made it easy to put Ghost behind an NGINX proxy, allowing better handling of the response headers. Thanks to this, I have upgraded from a D on securityheaders.com to an A. In order to get an A+, though, I will need to finish my new website template, as the current version has JavaScript that is not protected by an SRI hash. I also most definitely need to update my Content Security Policy, but that will be part of the same update.
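
For reference, the headers that move the securityheaders.com grade are set in the NGINX server block in front of Ghost. This is a minimal sketch under assumed names (the upstream host "ghost" and the domain are placeholders), not my exact configuration:

```
server {
    listen 443 ssl;
    server_name example.com;   # placeholder; ssl_certificate directives omitted

    # headers scored by securityheaders.com
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        proxy_pass http://ghost:2368;   # Ghost container on the swarm network
        proxy_set_header Host $host;
    }
}
```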

Overall, I am glad that I was pushed to fix something and made to finish a project, instead of planning into infinity.

Training courses
https://legacy-blog.ndcrawshaw.uk/training-courses/
Wed, 21 Nov 2018 09:00:00 GMT

So, this week I attended a course on “Systems Operations on AWS”, and it was bland, so very bland. I know it was not exactly designed to be an intensive course, but as an intermediate-level course, I was expecting to at least learn something. Everything that was covered I had already used, or had watched the odd YouTube video on, in my own free time. I would say this course would have better suited someone who had never used “cloud” technologies before, not someone who already works in systems operations. Also, for something that cost a lot of money, it was pretty much an advert; I find this to be common across all the training courses I have been on in the little time I have been working.

I started with some Cisco training back in college and found that the official certification it was training us for was not worth going for; I did learn something during that training, though it hasn’t really come in useful since. Then, with my first “proper” job, I got loads of qualifications in Polycom and Acano systems, as they were required for the company to be a partner, and as the Technical Architect I needed to know how pretty much everything worked. My second job did not offer any training as part of the role, but I did get a couple of qualifications on Plesk, as they were pretty much giving them away in a deal one day, so it was definitely worth it ^_^.

Then we come to my current job. Since joining NHS Digital, we have been provided with some online Splunk training, which was alright, but it covered a lot of things that only the person setting up a new deployment would need to know, not how to actually use it on a day-to-day basis. If you want to learn Splunk, the Splunk Quick Reference Guide will teach you how to do 95% of what you need daily. Following this, we were provided a CBT Nuggets subscription, before it was annoyingly taken away; I spent a lot of time on that, as it’s how I like to learn. And finally, I was sent on the aforementioned AWS course.
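
To give a sense of the day-to-day usage I mean, most of what you need is a filter followed by a short stats pipeline; the index and field names below are hypothetical:

```
index=web status=500 earliest=-24h
| stats count by host, uri_path
| sort -count
```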

I would say that, of all the courses and training I have done with work and in my own time, the training that weirdly cost the least has been the most useful. The most useful of all have been the online training platforms such as CBT Nuggets and Pluralsight, as being able to learn in your own time and at your own pace is the best thing.

Update: Pluralsight is having a Black Friday event; a good time to get it.

Building an API Gateway
https://legacy-blog.ndcrawshaw.uk/building-an-api-gateway/
Sun, 21 Oct 2018 08:00:00 GMT

This month I started work on my own API Gateway, so that I can have a single point of contact for the many other scripts and services I have built and will build in the future.

So far, it is a very simplistic system, as I am still learning how everything works. At this time, it can only do a few things, such as parsing the incoming path, responding to pings, checking that a user is authenticated, and uploading a file to a hard-coded location. I do plan to have it perform many other actions, though. I am going to make it a contact point for Wolf Bot, so that I have a single point for logging authentication actions on my servers, and, if I can work out how to do it, I will build my own log forwarder and have a single log processor, without using external tools.
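
As a rough sketch of the routing so far, the built-in http module is enough to parse the path and answer pings; the handler below is a minimal, hypothetical version of that idea (the port and response shape are assumptions):

```
const http = require('http');

const server = http.createServer((req, res) => {
  // Parse the incoming path ourselves, with no external router
  const { pathname } = new URL(req.url, `http://${req.headers.host}`);

  if (req.method === 'GET' && pathname === '/api/v1/ping') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok' }));
  }

  res.writeHead(404, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'not found' }));
});

server.listen(8080); // placeholder port
```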

As part of trying to learn as much as I can while building this, I have not used any external modules/libraries (so far). For example, I am not using Express; I am instead using the built-in http module of Node and handling all the request processing myself, which I have found more difficult than I expected. I spent a good amount of time earlier this week trying to work out how to correctly handle incoming data in the body. For example, to upload a file, I have to create an array of buffers and then concatenate it so that I can write it to disk, but I also had to create a system for checking the size of the chunks being processed, so that I can implement and enforce a maximum file size without knowing the size of the incoming file in advance.
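
The buffering approach described above looks roughly like this; a minimal sketch, with the size limit and output path as placeholders:

```
const fs = require('fs');

const MAX_BYTES = 5 * 1024 * 1024; // placeholder: 5 MB upload limit

function receiveUpload(req, res) {
  const chunks = [];
  let received = 0;

  req.on('data', (chunk) => {
    received += chunk.length;
    if (received > MAX_BYTES) {
      // Size is checked per chunk, so the file's total size
      // never needs to be known in advance
      res.writeHead(413); // Payload Too Large
      res.end();
      return req.destroy();
    }
    chunks.push(chunk);
  });

  req.on('end', () => {
    // Concatenate the array of buffers and write the result to disk
    fs.writeFile('/tmp/upload.bin', Buffer.concat(chunks), (err) => {
      res.writeHead(err ? 500 : 201);
      res.end();
    });
  });
}
```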

The authentication for the API is key based, and I am making sure to store the keys correctly, both salting and hashing each key before writing it to disk, so that there is no plain-text copy of the authentication secrets. I am currently storing these in a JSON array, but they will be moved to SQLite or some other database once I get to that implementation step, which hopefully will be next week; though, as I don’t want to use an external library for connecting to databases either, I will need to see if that can be done.
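
Node’s built-in crypto module can do the salting and hashing without any external library; this is a sketch of the general technique rather than my exact code:

```
const crypto = require('crypto');

// Derive a salted hash of an API key for storage on disk
function hashKey(apiKey) {
  const salt = crypto.randomBytes(16);
  const hash = crypto.scryptSync(apiKey, salt, 64);
  return { salt: salt.toString('hex'), hash: hash.toString('hex') };
}

// Verify a presented key against a stored record in constant time
function verifyKey(apiKey, record) {
  const hash = crypto.scryptSync(apiKey, Buffer.from(record.salt, 'hex'), 64);
  return crypto.timingSafeEqual(hash, Buffer.from(record.hash, 'hex'));
}
```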

As part of building this, I also had to build some kind of logging mechanism, for both audit and troubleshooting, so I have built a simple module that takes ongoing events and logs them to disk independently of the current actions. While creating this API, I am also (mostly) following standard best practice, such as https://www.gov.uk/guidance/gds-api-technical-and-data-standards. Therefore, I am using HTTP response codes and the POST, GET, PUT, etc. verbs correctly; the format for my API (to be finalised) is https://host:port/api/v1/ping; and authentication is passed in headers, so when connecting over HTTPS it is relatively secure.
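
The logging module amounts to appending events to disk without blocking the request that raised them; a minimal sketch (the log path is a placeholder):

```
const fs = require('fs');

// Append a timestamped event to the audit log without blocking the caller
function logEvent(event, details) {
  const line = JSON.stringify({ time: new Date().toISOString(), event, details }) + '\n';
  fs.appendFile('/var/log/api-gateway/audit.log', line, (err) => {
    if (err) console.error('audit log write failed:', err);
  });
}
```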

Docker with Pi
https://legacy-blog.ndcrawshaw.uk/docker-with-pi/
Fri, 21 Sep 2018 08:00:00 GMT

Docker is a technology that I have not had any experience with previously, as in most cases I have just booted up a new VM and run applications in their own VM. Well, this month I decided to set up Pi-Hole to better analyse the DNS requests being made on my network.

So, I installed Docker in a VM running on LapServ that was previously running a few Node.js scripts. Since installing Docker on this VM, I have been trialling a lot of systems that I had previously not had the chance to mess around with, as spinning up a Docker container is stupidly easy. For example, I have been thinking of looking into InfluxDB as the backend datastore for the new monitoring system I am building to replace the many different systems I am using at the moment, though I don’t see myself moving away from PRTG any time soon, as it’s just so nice.

I’ve now got a few Docker containers running permanently: two Pi-Hole instances, one for my standard network and one for the server itself, to keep the data separate, as LapServ makes many requests for all the systems it is monitoring. I’ve also got two containers running the previously mentioned Node.js scripts, both of which are Discord bots.
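
For anyone wanting to try the same, Pi-Hole publishes an official image and a single command gets a container up; the values below are placeholders based on the image’s documented options, not my exact setup:

```
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80 \
  -e TZ=Europe/London \
  -e WEBPASSWORD=changeme \
  -v pihole-data:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole
```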

A weird insight I have gained since setting up Pi-Hole is that my main Windows 10 PC is obsessed with my printer. It seems to be making DNS requests for the printer every 30 seconds, and I’m not sure why. There are two other Windows 10 devices on the network connected to this printer that are not making any such requests. To put into perspective how many requests this is, 39% of all requests from my PC are for the printer.

After attempting to run some heavy containers on the VM, I think I am going to need to move the main Docker node off LapServ soon, as it’s starting to idle at 40% CPU, which is way more than this little thing can handle. I am currently in the process of re-designing the home network, though, so that will have to wait for now.

My Love-Hate Relationship with Programming
https://legacy-blog.ndcrawshaw.uk/my-love-hate-relationship-with-programming/
Tue, 21 Aug 2018 08:00:00 GMT

When I first started programming in college, I found it amazing how pretty much anything you can imagine can be done with the right bit of code. This led me to choose Computer Science over Mechanical Engineering at university. Once I got to university, though, I found my love for programming very quickly being drained from me, ultimately leading me to drop out of my course and start a slightly different career.

Well, after being against the idea of doing programming for a couple of years, I’ve started to get back into it again. Following the creation of Wolf Bot, which has been massively overhauled a couple of times since that post, I’ve found myself writing more and more scripts for daily tasks once again.

I now find myself on a near-daily basis thinking of new tools that I could create; admittedly, most of these ideas will never come to light, but that doesn’t matter. I even have a Discord bot now, which helps me manage one of the Discord servers I’m an admin for and provides cute pictures to people on demand. By writing this bot, I have started to get a better understanding of asynchronous programming and how to handle promises, which was an odd thing to get my head around, as pretty much all the programming I had done in the past was synchronous and had little to no interaction with external systems. N.B. To save myself a massive amount of time, I am using Discord.js, which is a really well-made library with some wonderful documentation.
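
As a small example of the promise handling involved, this is roughly what a reply command looks like against the Discord.js API of that era; a sketch rather than my bot’s actual code, and the token is a placeholder:

```
const Discord = require('discord.js');
const client = new Discord.Client();

client.on('message', (message) => {
  if (message.content === '!ping') {
    // send() returns a promise, so failures (e.g. missing permissions)
    // have to be handled asynchronously rather than with a plain try/catch
    message.channel.send('Pong!')
      .then(() => console.log('Replied'))
      .catch((err) => console.error('Failed to send reply:', err));
  }
});

client.login('YOUR_BOT_TOKEN'); // placeholder
```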

Working on these programs has given me a new way of thinking, which is wonderful, but from working on them, I still believe that I could never go into programming full time, as I don’t want to lose interest in the problem solving that is programming again. The main issue I have with becoming a programmer, or something along those lines, is that I don’t want to be forced into working on code I don’t find interesting, as I’ve found that the second I lose interest in the system I am writing, it becomes trash. I found this out mostly from the AHK script I have written to help myself at work. Even though it saves me a massive amount of time, whenever I need to update some part of it, I look at the code and have no idea what half of it does, and for the parts I do still fully understand, I find myself thinking, “Why did I write it like that?”. Even over the very short period I have been back into programming, I have increased my skills massively, so looking at my older code hurts. That is why all my code repos on GitLab are private and I never plan to share them, even though doing so would probably help me get myself out there a bit better. I am just too ashamed of my bad code.

Securing External RDP
https://legacy-blog.ndcrawshaw.uk/securing-external-rdp/
Mon, 21 May 2018 08:00:00 GMT

Last week, I was looking into a way of securing an RDP connection that was exposed to the Internet.

Normally, I would not allow RDP directly from the Internet, for pretty obvious reasons, but I wanted to be able to connect to one of my PCs using a standard protocol, from an external location, without having to install software on the other devices, e.g. TeamViewer.

So, as I didn’t have much time to plan, I just searched online for RDP 2FA, and a well-recommended option on SpiceWorks and a few other forums was Duo Security. Setting up Duo was stupidly easy: from account sign-up to testing an external login with 2FA took less than 10 minutes, and that included some initial internal testing and configuration.

So, as I’ve now got a way to enable external access to my 24/7 “server”, I’ve set it up with 2FA on all RDP connections, with the exception of connections from the IP address of my main PC, as I connect from that a lot.

I am now tempted to install Duo on other devices, e.g. my MacBook, as even though it’s encrypted and has a firmware password, I do use a not-so-great password on it (no, not that bad), as I have to type it a lot. The only issue with this is that the Duo application for macOS only supports initial logins, not waking from sleep, which is how I log into my laptop most of the time, due to how long the battery lasts in deep sleep. So, to get that working properly, I either need to find a way for it to trigger after X minutes since the last unlock or after X login attempts, or change how I use the laptop. Though, as I’ve just upgraded the SSD to an 860 Evo, I might go with the latter; we’ll see.

Meet Wolf Bot
https://legacy-blog.ndcrawshaw.uk/meet-wolf-bot/
Sat, 21 Apr 2018 08:00:00 GMT

Yesterday saw this week’s project come to life.
Meet Wolf Bot
This week, I was working on a couple of scripts to report the IP addresses from Fail2Ban logs to AbuseIPDB. I know there is a tool already made for this (it’s actually built into Fail2Ban), but I wanted to give myself a quick project. As someone who is not a programmer and does not do that much coding, it’s nice to work on these little things every now and then.

Wolf Bot was a nice little extra I added to the scripts I made this week. Shortly after implementing them, I got a notification from one of my servers letting me know what updates had been installed over the week, so I decided to add Slack notifications to Wolf Bot as well.
The script has a few parts.
The first is a controller that stores the config and runs the other scripts. It starts with a script that takes the Fail2Ban logs and outputs to one file the IP addresses that were banned for SSH brute-force attacks on my servers, and all the others to another file. The second script reports these to AbuseIPDB. A third script takes the successfully reported IP addresses and chucks them into Slack.
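
The first step boils down to scanning the Fail2Ban log for ban lines; a sketch, assuming the common "NOTICE [jail] Ban <ip>" line format (the log path may differ per distribution):

```
const fs = require('fs');

// Match lines like: 2018-04-20 12:00:00,000 fail2ban.actions[123]: NOTICE [sshd] Ban 203.0.113.7
const banPattern = /NOTICE\s+\[(\S+)\]\s+Ban\s+(\S+)/;

const sshBans = [];
const otherBans = [];

for (const line of fs.readFileSync('/var/log/fail2ban.log', 'utf8').split('\n')) {
  const match = line.match(banPattern);
  if (!match) continue;
  // Split SSH brute-force bans from everything else, one file each
  (match[1] === 'sshd' ? sshBans : otherBans).push(match[2]);
}

fs.writeFileSync('ssh-bans.txt', sshBans.join('\n'));
fs.writeFileSync('other-bans.txt', otherBans.join('\n'));
```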

After taking a look at the Slack API, I was surprised by how easy it was to send nicely formatted messages to Slack, and I am quite tempted to find other things to write with the Slack API down the line. Getting nicely formatted notifications into Slack was something I was able to do on my dinner break at work.
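
For reference, a Slack incoming webhook only needs a JSON POST, which is why it fit into a dinner break; a minimal sketch with a placeholder webhook URL:

```
const https = require('https');

// Post a simple message to a Slack incoming webhook (URL is a placeholder)
function notifySlack(text) {
  const body = JSON.stringify({ text });
  const req = https.request('https://hooks.slack.com/services/T000/B000/XXXX', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  }, (res) => console.log('Slack responded with', res.statusCode));
  req.on('error', (err) => console.error('Slack notification failed:', err));
  req.end(body);
}

notifySlack('Reported 12 IP addresses to AbuseIPDB');
```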

Following this, the Fail2Ban IP reporter is being rolled out to two of my servers. Wolf Bot now has his own Slack channel (#IP_Reports), which I have muted, but it is nice to look at to see how many IP addresses are getting banned from accessing my servers.
I’ll have to write an aggregator of some kind down the line, as I don’t want a dozen messages a day coming into Slack; that is why I have only put the Fail2Ban reporter with Wolf Bot on two servers at the moment.

The Power of a UPS
https://legacy-blog.ndcrawshaw.uk/the-power-of-a-ups/
Wed, 21 Mar 2018 09:00:00 GMT

As someone who has worked in a few data centres, I can say I am very thankful for the wonders of a UPS. I have seen them save service when the grid connection browned out, and I have also seen UPS management systems die, taking out the very systems they are supposed to be protecting, though that is more of a water-cooler story.

What surprises me is how long it took me to get a UPS for home. Even though I knew the risks of not having one, I could not justify the cost to myself, given the limited risk factors with a personal PC and the UK power grid.

Then the other day I saw a listing on eBay for an HP RT3000 G2 UPS going for a measly £100, with recently refurbished batteries, so I snatched that up. The seller was selling the Network Management Module separately for £90; still a good deal all in all.

Well, I got this UPS back in December and it has already saved me. What is nice about having an over-the-top UPS like this is that I can power the whole of my office and home network and still have an hour of power at high load. I was gaming one night last week when the power on my street decided to go off, for (checks UPS) 3 minutes and 24 seconds. During this time, I was able to continue gaming online with a few friends, with the only negative being the UPS fans kicking into high gear, though compared to most rack-mounted equipment, I have to say the tone is a lot less annoying.

One thing I did not think about when I got this UPS is that it came with the added bonus of providing nice insights into the power coming into my house. For example, this month the UPS has switched away from mains and run completely on battery three times, due to either the frequency or the voltage being outside what it considers acceptable ranges.

I was thinking of graphing the grid voltage and frequency, but unfortunately the built-in logging is not the greatest. It does have the option of daily email reports, though, so I might give that a go further down the line.

Too Many, Two Factors.
https://legacy-blog.ndcrawshaw.uk/too-many-two-factors/
Wed, 21 Feb 2018 09:00:00 GMT

I’m going to start by stating that I am a strong advocate of Two-Factor Authentication (2FA) on pretty much all services, and I am the kind of person who enables it on everything I use.

But I have found myself running into an odd situation recently. As an IT professional, you find yourself logging into many accounts for many companies, and I am no exception. The issue is, I am setting up 2FA on all these accounts, so I have now found myself with 37 2FA accounts on my phone. This may not be a lot to some people, but for me it is starting to become an issue.

So, why do I have 37 2FA accounts on my phone? Mainly online services, like most people, though unlike most people I have access to three Cloudflare accounts and three AWS accounts. I also use 2FA for a couple of servers I access where PKI is not applicable, or where I am not the administrator.

My current workaround for managing 2FA is to split the accounts across separate apps.
Starting with good old simple Google Authenticator: I use this for the aforementioned servers and a couple of accounts whose keys I don’t want stored online, such as my ProtonMail account. Next up is Authy, which I use for 2FA on the client services I have access to, keeping things independent and relatively secure. And finally, I use LastPass Authenticator for my personal accounts.

Is there a solution? Honestly, I don’t know. There has been a massive push by consumers over the last few years for services to support 2FA, and thanks to sites such as twofactorauth.org, we have seen companies starting to implement it, though a lot are using SMS 2FA, which is no longer recommended by GCHQ or NIST, but is still a massive leap forward compared to no 2FA at all.

In summary, I am glad that 2FA is becoming a lot more mainstream in availability, even if end-user uptake is still so low. I am looking forward to the next evolution of two-factor authentication.
Any comments on alternative solutions would be much appreciated; I may have to look into getting a YubiKey.

Edit: As Authy now has search functionality on iOS (blog post), I have moved everything to that, so I no longer need to worry about where my MFA/2FA codes are (as has been pointed out, yes, I know I mixed up 2FA and MFA in this post, thanks for the info ^_^).

Out with the old, in with the Ghost.
https://legacy-blog.ndcrawshaw.uk/out-with-the-old-and-in-with-the-ghost/
Sun, 21 Jan 2018 09:00:00 GMT

Today sees the launch of my new portfolio website.

Why did I do this?

Honestly, I just wanted a change. My previous portfolio, which has been moved to legacy.ndcrawshaw.uk, had been up and running for a couple of years and was starting to feel stale.
Legacy site: (screenshot of the old static page)

So, after lengthy consideration, I decided that I wanted to use a blog as my portfolio rather than a static page again, as this should give me more incentive to finish projects, which will help me towards my career goals, so it’s worth a shot.

Requirements.

Pretty much the main requirement was for it to be simple to use and to let me post updates.
It would also be nice not to have to care about security to a large degree; as part of this, it needed to play nicely with Cloudflare by being static.

What am I using?

Ghost. Ghost is a relatively new platform, founded in 2013, and is a simplistic system: it just does what it says on the tin, with no fancy add-ons or extensions that are “required” to get up and running, unlike the popular blogging platform WordPress.
I was not going to use WordPress, as I have spent way too much of my working time securing WordPress sites for clients after they installed bad plugins or didn’t update, though that is a whole other story.

Why Ghost?

I will admit there are quite a few alternative platforms out there now; another I was looking at was Grav, but its user base seemed just as small, and it has a lot more additional features and complexity that I don’t really need at this time, though it is something I may look into in the future for other projects.

Hosting?

This is something I will be covering in an upcoming post, which I will link to from here when it’s done. Though, I can confirm that the site is self-hosted at this time.

What now?

I will start writing the odd article every now and then based on the projects I am working on, more as a note-taking system that may come in useful for other people, and that will also keep a good record of my projects.

I hope that if you're reading this, you find something that will spike your interest.
